Jan 30 14:02:30.182098 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 30 14:02:30.182144 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 30 14:02:30.182169 kernel: KASLR disabled due to lack of seed
Jan 30 14:02:30.182186 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:02:30.182202 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 30 14:02:30.182217 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:02:30.182236 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 30 14:02:30.182312 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 30 14:02:30.182330 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 30 14:02:30.182347 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 30 14:02:30.182370 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 30 14:02:30.182386 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 30 14:02:30.182402 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 30 14:02:30.182418 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 30 14:02:30.182437 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 30 14:02:30.182458 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 30 14:02:30.182475 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 30 14:02:30.182491 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 30 14:02:30.182508 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 30 14:02:30.182524 kernel: printk: bootconsole [uart0] enabled
Jan 30 14:02:30.182541 kernel: NUMA: Failed to initialise from firmware
Jan 30 14:02:30.182558 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:02:30.182575 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 30 14:02:30.182591 kernel: Zone ranges:
Jan 30 14:02:30.182607 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 30 14:02:30.182623 kernel: DMA32 empty
Jan 30 14:02:30.182644 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 30 14:02:30.182661 kernel: Movable zone start for each node
Jan 30 14:02:30.182677 kernel: Early memory node ranges
Jan 30 14:02:30.182703 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 30 14:02:30.182747 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 30 14:02:30.182765 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 30 14:02:30.182782 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 30 14:02:30.182798 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 30 14:02:30.182815 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 30 14:02:30.182832 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 30 14:02:30.182848 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 30 14:02:30.182865 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 30 14:02:30.182887 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 30 14:02:30.182905 kernel: psci: probing for conduit method from ACPI.
Jan 30 14:02:30.182929 kernel: psci: PSCIv1.0 detected in firmware.
Jan 30 14:02:30.182947 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 30 14:02:30.182964 kernel: psci: Trusted OS migration not required
Jan 30 14:02:30.182985 kernel: psci: SMC Calling Convention v1.1
Jan 30 14:02:30.183003 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 30 14:02:30.183020 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 30 14:02:30.183037 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 30 14:02:30.183055 kernel: Detected PIPT I-cache on CPU0
Jan 30 14:02:30.183072 kernel: CPU features: detected: GIC system register CPU interface
Jan 30 14:02:30.183089 kernel: CPU features: detected: Spectre-v2
Jan 30 14:02:30.183106 kernel: CPU features: detected: Spectre-v3a
Jan 30 14:02:30.183123 kernel: CPU features: detected: Spectre-BHB
Jan 30 14:02:30.183140 kernel: CPU features: detected: ARM erratum 1742098
Jan 30 14:02:30.183158 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 30 14:02:30.183179 kernel: alternatives: applying boot alternatives
Jan 30 14:02:30.183199 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:02:30.183218 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:02:30.183235 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:02:30.183687 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:02:30.183710 kernel: Fallback order for Node 0: 0
Jan 30 14:02:30.183728 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 30 14:02:30.183745 kernel: Policy zone: Normal
Jan 30 14:02:30.183763 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:02:30.183780 kernel: software IO TLB: area num 2.
Jan 30 14:02:30.183798 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 30 14:02:30.183824 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 30 14:02:30.183842 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 14:02:30.183859 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:02:30.183878 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:02:30.183896 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 14:02:30.183913 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:02:30.183931 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:02:30.183949 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:02:30.183966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 14:02:30.183984 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 30 14:02:30.184001 kernel: GICv3: 96 SPIs implemented
Jan 30 14:02:30.184023 kernel: GICv3: 0 Extended SPIs implemented
Jan 30 14:02:30.184042 kernel: Root IRQ handler: gic_handle_irq
Jan 30 14:02:30.184065 kernel: GICv3: GICv3 features: 16 PPIs
Jan 30 14:02:30.184082 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 30 14:02:30.184099 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 30 14:02:30.184117 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 30 14:02:30.184135 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 30 14:02:30.184152 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 30 14:02:30.184170 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 30 14:02:30.184187 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 30 14:02:30.184204 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:02:30.184222 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 30 14:02:30.184268 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 30 14:02:30.185313 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 30 14:02:30.185342 kernel: Console: colour dummy device 80x25
Jan 30 14:02:30.185361 kernel: printk: console [tty1] enabled
Jan 30 14:02:30.185378 kernel: ACPI: Core revision 20230628
Jan 30 14:02:30.185397 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 30 14:02:30.185414 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:02:30.185432 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:02:30.185450 kernel: landlock: Up and running.
Jan 30 14:02:30.185477 kernel: SELinux: Initializing.
Jan 30 14:02:30.185495 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:02:30.185513 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:02:30.185531 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:02:30.185548 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 14:02:30.185566 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:02:30.185584 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:02:30.185602 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 30 14:02:30.185620 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 30 14:02:30.185641 kernel: Remapping and enabling EFI services.
Jan 30 14:02:30.185659 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:02:30.185677 kernel: Detected PIPT I-cache on CPU1
Jan 30 14:02:30.185694 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 30 14:02:30.185712 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 30 14:02:30.185730 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 30 14:02:30.185747 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 14:02:30.185765 kernel: SMP: Total of 2 processors activated.
Jan 30 14:02:30.185782 kernel: CPU features: detected: 32-bit EL0 Support
Jan 30 14:02:30.185804 kernel: CPU features: detected: 32-bit EL1 Support
Jan 30 14:02:30.185822 kernel: CPU features: detected: CRC32 instructions
Jan 30 14:02:30.185839 kernel: CPU: All CPU(s) started at EL1
Jan 30 14:02:30.185869 kernel: alternatives: applying system-wide alternatives
Jan 30 14:02:30.185891 kernel: devtmpfs: initialized
Jan 30 14:02:30.185910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:02:30.185928 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 14:02:30.185946 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:02:30.185964 kernel: SMBIOS 3.0.0 present.
Jan 30 14:02:30.185983 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 30 14:02:30.186006 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:02:30.186024 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 30 14:02:30.186043 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 30 14:02:30.186062 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 30 14:02:30.186080 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:02:30.186098 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jan 30 14:02:30.186117 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:02:30.186139 kernel: cpuidle: using governor menu
Jan 30 14:02:30.186158 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 30 14:02:30.186176 kernel: ASID allocator initialised with 65536 entries
Jan 30 14:02:30.186194 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:02:30.186212 kernel: Serial: AMBA PL011 UART driver
Jan 30 14:02:30.186231 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 30 14:02:30.187323 kernel: Modules: 509040 pages in range for PLT usage
Jan 30 14:02:30.187351 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:02:30.187370 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:02:30.187397 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 30 14:02:30.187416 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 30 14:02:30.187435 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:02:30.187453 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:02:30.187472 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 30 14:02:30.187490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 30 14:02:30.187509 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:02:30.187527 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:02:30.187545 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:02:30.187568 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:02:30.187587 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:02:30.187605 kernel: ACPI: Interpreter enabled
Jan 30 14:02:30.187623 kernel: ACPI: Using GIC for interrupt routing
Jan 30 14:02:30.187641 kernel: ACPI: MCFG table detected, 1 entries
Jan 30 14:02:30.187659 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 30 14:02:30.187975 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:02:30.188186 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 30 14:02:30.188436 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 30 14:02:30.188643 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 30 14:02:30.188847 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 30 14:02:30.188873 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 30 14:02:30.188892 kernel: acpiphp: Slot [1] registered
Jan 30 14:02:30.188910 kernel: acpiphp: Slot [2] registered
Jan 30 14:02:30.188929 kernel: acpiphp: Slot [3] registered
Jan 30 14:02:30.188947 kernel: acpiphp: Slot [4] registered
Jan 30 14:02:30.188972 kernel: acpiphp: Slot [5] registered
Jan 30 14:02:30.188991 kernel: acpiphp: Slot [6] registered
Jan 30 14:02:30.189009 kernel: acpiphp: Slot [7] registered
Jan 30 14:02:30.189027 kernel: acpiphp: Slot [8] registered
Jan 30 14:02:30.189045 kernel: acpiphp: Slot [9] registered
Jan 30 14:02:30.189063 kernel: acpiphp: Slot [10] registered
Jan 30 14:02:30.189081 kernel: acpiphp: Slot [11] registered
Jan 30 14:02:30.189099 kernel: acpiphp: Slot [12] registered
Jan 30 14:02:30.189117 kernel: acpiphp: Slot [13] registered
Jan 30 14:02:30.189136 kernel: acpiphp: Slot [14] registered
Jan 30 14:02:30.189158 kernel: acpiphp: Slot [15] registered
Jan 30 14:02:30.189177 kernel: acpiphp: Slot [16] registered
Jan 30 14:02:30.189195 kernel: acpiphp: Slot [17] registered
Jan 30 14:02:30.189213 kernel: acpiphp: Slot [18] registered
Jan 30 14:02:30.189232 kernel: acpiphp: Slot [19] registered
Jan 30 14:02:30.191283 kernel: acpiphp: Slot [20] registered
Jan 30 14:02:30.191347 kernel: acpiphp: Slot [21] registered
Jan 30 14:02:30.191368 kernel: acpiphp: Slot [22] registered
Jan 30 14:02:30.191388 kernel: acpiphp: Slot [23] registered
Jan 30 14:02:30.191414 kernel: acpiphp: Slot [24] registered
Jan 30 14:02:30.191433 kernel: acpiphp: Slot [25] registered
Jan 30 14:02:30.191451 kernel: acpiphp: Slot [26] registered
Jan 30 14:02:30.191469 kernel: acpiphp: Slot [27] registered
Jan 30 14:02:30.191487 kernel: acpiphp: Slot [28] registered
Jan 30 14:02:30.191506 kernel: acpiphp: Slot [29] registered
Jan 30 14:02:30.191524 kernel: acpiphp: Slot [30] registered
Jan 30 14:02:30.191542 kernel: acpiphp: Slot [31] registered
Jan 30 14:02:30.191560 kernel: PCI host bridge to bus 0000:00
Jan 30 14:02:30.191792 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 30 14:02:30.191990 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 30 14:02:30.192179 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:02:30.193757 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 30 14:02:30.194001 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 30 14:02:30.194233 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 30 14:02:30.194588 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 30 14:02:30.194817 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 30 14:02:30.195022 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 30 14:02:30.195231 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:02:30.195486 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 30 14:02:30.195696 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 30 14:02:30.195898 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 30 14:02:30.196112 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 30 14:02:30.196369 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 14:02:30.196581 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 30 14:02:30.196799 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 30 14:02:30.197010 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 30 14:02:30.197217 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 30 14:02:30.197462 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 30 14:02:30.197656 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 30 14:02:30.197837 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 30 14:02:30.198019 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 30 14:02:30.198045 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 30 14:02:30.198064 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 30 14:02:30.198083 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 30 14:02:30.198102 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 30 14:02:30.198120 kernel: iommu: Default domain type: Translated
Jan 30 14:02:30.198139 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 30 14:02:30.198163 kernel: efivars: Registered efivars operations
Jan 30 14:02:30.198181 kernel: vgaarb: loaded
Jan 30 14:02:30.198200 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 30 14:02:30.198218 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:02:30.198237 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:02:30.198306 kernel: pnp: PnP ACPI init
Jan 30 14:02:30.198528 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 30 14:02:30.198556 kernel: pnp: PnP ACPI: found 1 devices
Jan 30 14:02:30.198582 kernel: NET: Registered PF_INET protocol family
Jan 30 14:02:30.198602 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:02:30.198621 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:02:30.198640 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:02:30.198659 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:02:30.198679 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:02:30.198698 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:02:30.198716 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:02:30.198735 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:02:30.198759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:02:30.198777 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:02:30.198796 kernel: kvm [1]: HYP mode not available
Jan 30 14:02:30.198815 kernel: Initialise system trusted keyrings
Jan 30 14:02:30.198834 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:02:30.198852 kernel: Key type asymmetric registered
Jan 30 14:02:30.198870 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:02:30.198889 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 30 14:02:30.198908 kernel: io scheduler mq-deadline registered
Jan 30 14:02:30.198931 kernel: io scheduler kyber registered
Jan 30 14:02:30.198950 kernel: io scheduler bfq registered
Jan 30 14:02:30.199186 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 30 14:02:30.199216 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 30 14:02:30.199236 kernel: ACPI: button: Power Button [PWRB]
Jan 30 14:02:30.200329 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 30 14:02:30.200357 kernel: ACPI: button: Sleep Button [SLPB]
Jan 30 14:02:30.200376 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:02:30.200405 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 30 14:02:30.200657 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 30 14:02:30.200685 kernel: printk: console [ttyS0] disabled
Jan 30 14:02:30.200704 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 30 14:02:30.200723 kernel: printk: console [ttyS0] enabled
Jan 30 14:02:30.200741 kernel: printk: bootconsole [uart0] disabled
Jan 30 14:02:30.200759 kernel: thunder_xcv, ver 1.0
Jan 30 14:02:30.200778 kernel: thunder_bgx, ver 1.0
Jan 30 14:02:30.200795 kernel: nicpf, ver 1.0
Jan 30 14:02:30.200819 kernel: nicvf, ver 1.0
Jan 30 14:02:30.201042 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 30 14:02:30.201240 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-30T14:02:29 UTC (1738245749)
Jan 30 14:02:30.204320 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 14:02:30.204342 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 30 14:02:30.204361 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 30 14:02:30.204380 kernel: watchdog: Hard watchdog permanently disabled
Jan 30 14:02:30.204398 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:02:30.204429 kernel: Segment Routing with IPv6
Jan 30 14:02:30.204448 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:02:30.204467 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:02:30.204485 kernel: Key type dns_resolver registered
Jan 30 14:02:30.204503 kernel: registered taskstats version 1
Jan 30 14:02:30.204522 kernel: Loading compiled-in X.509 certificates
Jan 30 14:02:30.204541 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 30 14:02:30.204559 kernel: Key type .fscrypt registered
Jan 30 14:02:30.204577 kernel: Key type fscrypt-provisioning registered
Jan 30 14:02:30.204600 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:02:30.204618 kernel: ima: Allocated hash algorithm: sha1
Jan 30 14:02:30.204637 kernel: ima: No architecture policies found
Jan 30 14:02:30.204655 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 30 14:02:30.204673 kernel: clk: Disabling unused clocks
Jan 30 14:02:30.204692 kernel: Freeing unused kernel memory: 39360K
Jan 30 14:02:30.204710 kernel: Run /init as init process
Jan 30 14:02:30.204728 kernel: with arguments:
Jan 30 14:02:30.204746 kernel: /init
Jan 30 14:02:30.204764 kernel: with environment:
Jan 30 14:02:30.204787 kernel: HOME=/
Jan 30 14:02:30.204806 kernel: TERM=linux
Jan 30 14:02:30.204825 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 14:02:30.204850 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:02:30.204874 systemd[1]: Detected virtualization amazon.
Jan 30 14:02:30.204895 systemd[1]: Detected architecture arm64.
Jan 30 14:02:30.204915 systemd[1]: Running in initrd.
Jan 30 14:02:30.204939 systemd[1]: No hostname configured, using default hostname.
Jan 30 14:02:30.204959 systemd[1]: Hostname set to <localhost>.
Jan 30 14:02:30.204980 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:02:30.205000 systemd[1]: Queued start job for default target initrd.target.
Jan 30 14:02:30.205021 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:02:30.205042 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:02:30.205063 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 14:02:30.205084 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:02:30.205111 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 14:02:30.205132 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 14:02:30.205156 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 14:02:30.205177 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 14:02:30.205198 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:02:30.205219 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:02:30.205239 systemd[1]: Reached target paths.target - Path Units.
Jan 30 14:02:30.205292 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:02:30.205323 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:02:30.205368 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 14:02:30.205429 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:02:30.205466 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:02:30.205493 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 14:02:30.205522 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 14:02:30.205543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:02:30.205563 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:02:30.205590 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:02:30.205610 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 14:02:30.205630 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 14:02:30.205651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:02:30.205671 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 14:02:30.205691 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 14:02:30.205711 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:02:30.205731 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:02:30.205756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:02:30.205777 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 14:02:30.205797 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:02:30.205818 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 14:02:30.205896 systemd-journald[250]: Collecting audit messages is disabled.
Jan 30 14:02:30.205946 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 14:02:30.205968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:02:30.205988 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 14:02:30.206012 systemd-journald[250]: Journal started
Jan 30 14:02:30.206050 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2397cd28ae055270ee1a9a5a04d18e) is 8.0M, max 75.3M, 67.3M free.
Jan 30 14:02:30.174350 systemd-modules-load[251]: Inserted module 'overlay'
Jan 30 14:02:30.217412 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:02:30.217453 kernel: Bridge firewalling registered
Jan 30 14:02:30.214919 systemd-modules-load[251]: Inserted module 'br_netfilter'
Jan 30 14:02:30.222747 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:02:30.226643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 14:02:30.243575 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:02:30.249683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 14:02:30.267637 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 14:02:30.275532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 14:02:30.291345 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:02:30.311652 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:02:30.323646 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 14:02:30.328753 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:02:30.344326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:02:30.373620 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 14:02:30.393941 systemd-resolved[287]: Positive Trust Anchors:
Jan 30 14:02:30.393979 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 14:02:30.394042 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 14:02:30.447893 dracut-cmdline[291]: dracut-dracut-053
Jan 30 14:02:30.454855 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 30 14:02:30.616287 kernel: SCSI subsystem initialized
Jan 30 14:02:30.624285 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 14:02:30.636283 kernel: iscsi: registered transport (tcp)
Jan 30 14:02:30.658731 kernel: iscsi: registered transport (qla4xxx)
Jan 30 14:02:30.658816 kernel: QLogic iSCSI HBA Driver
Jan 30 14:02:30.677937 kernel: random: crng init done
Jan 30 14:02:30.677440 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 30 14:02:30.679325 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 14:02:30.683213 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:02:30.738341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:02:30.749591 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 14:02:30.790450 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 14:02:30.790525 kernel: device-mapper: uevent: version 1.0.3
Jan 30 14:02:30.792129 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 14:02:30.857305 kernel: raid6: neonx8 gen() 6747 MB/s
Jan 30 14:02:30.874277 kernel: raid6: neonx4 gen() 6512 MB/s
Jan 30 14:02:30.891278 kernel: raid6: neonx2 gen() 5458 MB/s
Jan 30 14:02:30.908277 kernel: raid6: neonx1 gen() 3950 MB/s
Jan 30 14:02:30.925277 kernel: raid6: int64x8 gen() 3829 MB/s
Jan 30 14:02:30.942277 kernel: raid6: int64x4 gen() 3720 MB/s
Jan 30 14:02:30.959278 kernel: raid6: int64x2 gen() 3613 MB/s
Jan 30 14:02:30.976999 kernel: raid6: int64x1 gen() 2777 MB/s
Jan 30 14:02:30.977041 kernel: raid6: using algorithm neonx8 gen() 6747 MB/s
Jan 30 14:02:30.995011 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Jan 30 14:02:30.995047 kernel: raid6: using neon recovery algorithm
Jan 30 14:02:31.002281 kernel: xor: measuring software checksum speed
Jan 30 14:02:31.004339 kernel: 8regs : 10025 MB/sec
Jan 30 14:02:31.004371 kernel: 32regs : 11912 MB/sec
Jan 30 14:02:31.005485 kernel: arm64_neon : 9519 MB/sec
Jan 30 14:02:31.005517 kernel: xor: using function: 32regs (11912 MB/sec)
Jan 30 14:02:31.088292 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 14:02:31.107410 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:02:31.117546 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:02:31.159934 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 30 14:02:31.169338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:02:31.178498 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 14:02:31.216144 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jan 30 14:02:31.271039 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:02:31.280619 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:02:31.396399 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:02:31.412414 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 14:02:31.446458 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:02:31.451320 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:02:31.456428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:02:31.472983 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:02:31.496625 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 14:02:31.535689 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:02:31.603890 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 30 14:02:31.603957 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 30 14:02:31.620935 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 30 14:02:31.621187 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 30 14:02:31.621501 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:5c:68:a7:15:3b
Jan 30 14:02:31.613099 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:02:31.613625 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:02:31.616925 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:02:31.621394 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:02:31.621666 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:02:31.624427 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:02:31.641762 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line.
Jan 30 14:02:31.649741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 14:02:31.672325 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 30 14:02:31.674298 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 30 14:02:31.685683 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 30 14:02:31.686493 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:02:31.692568 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 14:02:31.701838 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 30 14:02:31.701874 kernel: GPT:9289727 != 16777215
Jan 30 14:02:31.701899 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 30 14:02:31.701924 kernel: GPT:9289727 != 16777215
Jan 30 14:02:31.701947 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:02:31.701971 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:02:31.750100 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:02:31.807319 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (522)
Jan 30 14:02:31.811662 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (544)
Jan 30 14:02:31.900639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 30 14:02:31.938097 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 30 14:02:31.953468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 30 14:02:31.967973 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 30 14:02:31.968123 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 30 14:02:31.991600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 14:02:32.003199 disk-uuid[662]: Primary Header is updated.
Jan 30 14:02:32.003199 disk-uuid[662]: Secondary Entries is updated.
Jan 30 14:02:32.003199 disk-uuid[662]: Secondary Header is updated.
Jan 30 14:02:32.012282 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:02:32.020503 kernel: GPT:disk_guids don't match.
Jan 30 14:02:32.020566 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 30 14:02:32.020593 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:02:32.030283 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:02:33.031313 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 30 14:02:33.033561 disk-uuid[663]: The operation has completed successfully.
Jan 30 14:02:33.201620 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 14:02:33.203405 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 14:02:33.264569 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 14:02:33.273458 sh[1005]: Success
Jan 30 14:02:33.291310 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 30 14:02:33.398857 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 14:02:33.417470 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 14:02:33.420825 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 14:02:33.457678 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 30 14:02:33.457740 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:02:33.459514 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 14:02:33.460739 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 14:02:33.461783 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 14:02:33.564294 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 30 14:02:33.587856 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 14:02:33.589356 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 14:02:33.600503 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 14:02:33.607530 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 14:02:33.642962 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:02:33.643033 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:02:33.644627 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:02:33.654981 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:02:33.673385 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 14:02:33.675542 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:02:33.683751 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 14:02:33.695661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 14:02:33.778652 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:02:33.795236 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 14:02:33.840674 systemd-networkd[1197]: lo: Link UP
Jan 30 14:02:33.840689 systemd-networkd[1197]: lo: Gained carrier
Jan 30 14:02:33.846175 systemd-networkd[1197]: Enumeration completed
Jan 30 14:02:33.846919 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:02:33.846926 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 14:02:33.847670 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 14:02:33.854432 systemd[1]: Reached target network.target - Network.
Jan 30 14:02:33.864208 systemd-networkd[1197]: eth0: Link UP
Jan 30 14:02:33.864230 systemd-networkd[1197]: eth0: Gained carrier
Jan 30 14:02:33.864277 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 14:02:33.879337 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.23.237/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 30 14:02:34.026941 ignition[1126]: Ignition 2.19.0
Jan 30 14:02:34.026970 ignition[1126]: Stage: fetch-offline
Jan 30 14:02:34.027532 ignition[1126]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:34.028532 ignition[1126]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:34.030149 ignition[1126]: Ignition finished successfully
Jan 30 14:02:34.036100 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:02:34.046613 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 14:02:34.074067 ignition[1206]: Ignition 2.19.0
Jan 30 14:02:34.074090 ignition[1206]: Stage: fetch
Jan 30 14:02:34.075421 ignition[1206]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:34.075464 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:34.075613 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:34.091234 ignition[1206]: PUT result: OK
Jan 30 14:02:34.094442 ignition[1206]: parsed url from cmdline: ""
Jan 30 14:02:34.094457 ignition[1206]: no config URL provided
Jan 30 14:02:34.094473 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 14:02:34.094497 ignition[1206]: no config at "/usr/lib/ignition/user.ign"
Jan 30 14:02:34.094527 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:34.096061 ignition[1206]: PUT result: OK
Jan 30 14:02:34.096137 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 30 14:02:34.109023 unknown[1206]: fetched base config from "system"
Jan 30 14:02:34.098352 ignition[1206]: GET result: OK
Jan 30 14:02:34.109041 unknown[1206]: fetched base config from "system"
Jan 30 14:02:34.098490 ignition[1206]: parsing config with SHA512: a692311794ad3db04a6bd5dbff8e91e5303afeaf267a0783baafad8104c55f8cf10cd1c39f3ba75ed66b0c7dfde36bbc04d3a96b9a8a7acd0f12eef43e1377c5
Jan 30 14:02:34.109064 unknown[1206]: fetched user config from "aws"
Jan 30 14:02:34.113863 ignition[1206]: fetch: fetch complete
Jan 30 14:02:34.121547 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 14:02:34.113876 ignition[1206]: fetch: fetch passed
Jan 30 14:02:34.113979 ignition[1206]: Ignition finished successfully
Jan 30 14:02:34.141468 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 14:02:34.164886 ignition[1212]: Ignition 2.19.0
Jan 30 14:02:34.164914 ignition[1212]: Stage: kargs
Jan 30 14:02:34.165865 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:34.165891 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:34.166041 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:34.167810 ignition[1212]: PUT result: OK
Jan 30 14:02:34.184281 ignition[1212]: kargs: kargs passed
Jan 30 14:02:34.184554 ignition[1212]: Ignition finished successfully
Jan 30 14:02:34.192706 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 14:02:34.204552 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 14:02:34.238052 ignition[1218]: Ignition 2.19.0
Jan 30 14:02:34.238083 ignition[1218]: Stage: disks
Jan 30 14:02:34.239206 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:34.239232 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:34.240877 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:34.244529 ignition[1218]: PUT result: OK
Jan 30 14:02:34.250920 ignition[1218]: disks: disks passed
Jan 30 14:02:34.251080 ignition[1218]: Ignition finished successfully
Jan 30 14:02:34.256306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 14:02:34.259400 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 14:02:34.265526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 14:02:34.268080 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:02:34.269913 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 14:02:34.272137 systemd[1]: Reached target basic.target - Basic System.
Jan 30 14:02:34.284517 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 14:02:34.329736 systemd-fsck[1227]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 30 14:02:34.335776 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 14:02:34.350562 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 14:02:34.437303 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 30 14:02:34.439510 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 14:02:34.442126 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:02:34.459408 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:02:34.471498 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 14:02:34.476959 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 30 14:02:34.482079 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 14:02:34.483231 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:02:34.491913 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 14:02:34.497877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 14:02:34.508353 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1246)
Jan 30 14:02:34.513569 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:02:34.513634 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:02:34.513661 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:02:34.529292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:02:34.531740 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:02:34.909775 initrd-setup-root[1270]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 14:02:34.946513 initrd-setup-root[1277]: cut: /sysroot/etc/group: No such file or directory
Jan 30 14:02:34.955033 initrd-setup-root[1284]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 14:02:34.963975 initrd-setup-root[1291]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 14:02:35.329807 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 14:02:35.342407 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 14:02:35.351347 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 14:02:35.365508 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 14:02:35.371337 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:02:35.416347 ignition[1358]: INFO : Ignition 2.19.0
Jan 30 14:02:35.416347 ignition[1358]: INFO : Stage: mount
Jan 30 14:02:35.419548 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 14:02:35.423823 ignition[1358]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:35.423823 ignition[1358]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:35.423823 ignition[1358]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:35.431145 ignition[1358]: INFO : PUT result: OK
Jan 30 14:02:35.436577 ignition[1358]: INFO : mount: mount passed
Jan 30 14:02:35.436577 ignition[1358]: INFO : Ignition finished successfully
Jan 30 14:02:35.441813 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 14:02:35.453443 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 14:02:35.475659 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 14:02:35.505289 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1370)
Jan 30 14:02:35.509375 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 30 14:02:35.509430 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 30 14:02:35.509458 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 30 14:02:35.515280 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 30 14:02:35.518837 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:02:35.560135 ignition[1387]: INFO : Ignition 2.19.0
Jan 30 14:02:35.560135 ignition[1387]: INFO : Stage: files
Jan 30 14:02:35.563321 ignition[1387]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:02:35.563321 ignition[1387]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 30 14:02:35.567353 ignition[1387]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 30 14:02:35.570128 ignition[1387]: INFO : PUT result: OK
Jan 30 14:02:35.575048 ignition[1387]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:02:35.590288 ignition[1387]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:02:35.590288 ignition[1387]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:02:35.610395 systemd-networkd[1197]: eth0: Gained IPv6LL
Jan 30 14:02:35.616762 ignition[1387]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:02:35.619808 ignition[1387]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:02:35.622779 unknown[1387]: wrote ssh authorized keys file for user: core
Jan 30 14:02:35.625578 ignition[1387]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:02:35.628043 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:02:35.631282 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 14:02:35.631282 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:02:35.631282 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 30 14:02:35.735337 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 14:02:35.858135 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 30 14:02:35.858135 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:02:35.865010 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 30 14:02:36.222984 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 14:02:36.569542 ignition[1387]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 30 14:02:36.569542 ignition[1387]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:02:36.575628 ignition[1387]: INFO : files: files passed
Jan 30 14:02:36.575628 ignition[1387]: INFO : Ignition finished successfully
Jan 30 14:02:36.586306 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:02:36.620605 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:02:36.624687 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:02:36.639600 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 14:02:36.642475 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 14:02:36.656079 initrd-setup-root-after-ignition[1415]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:02:36.656079 initrd-setup-root-after-ignition[1415]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:02:36.663184 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 14:02:36.669333 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:02:36.674504 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 14:02:36.692631 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 14:02:36.749346 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 14:02:36.749757 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 14:02:36.756349 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 14:02:36.758968 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 14:02:36.764784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 14:02:36.773628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 14:02:36.808194 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:02:36.828723 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 14:02:36.852710 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:02:36.857348 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:02:36.861772 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 14:02:36.863766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 14:02:36.864000 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 14:02:36.871165 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 14:02:36.874910 systemd[1]: Stopped target basic.target - Basic System. Jan 30 14:02:36.876734 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 14:02:36.879306 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:02:36.882609 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 14:02:36.885475 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 14:02:36.888996 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:02:36.899040 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 14:02:36.901074 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 14:02:36.903659 systemd[1]: Stopped target swap.target - Swaps. Jan 30 14:02:36.907521 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 14:02:36.907750 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 14:02:36.915912 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:02:36.918740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jan 30 14:02:36.922560 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 14:02:36.926007 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:02:36.931432 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 14:02:36.931652 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 14:02:36.934860 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 14:02:36.935157 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 14:02:36.945473 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 14:02:36.945861 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 14:02:36.957744 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 14:02:36.961568 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 14:02:36.961844 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:02:36.971764 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 14:02:36.976992 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 14:02:36.977508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:02:36.984526 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 14:02:36.984787 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:02:36.998104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 14:02:36.998428 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 14:02:37.025724 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 14:02:37.028131 ignition[1439]: INFO : Ignition 2.19.0 Jan 30 14:02:37.028131 ignition[1439]: INFO : Stage: umount Jan 30 14:02:37.033645 ignition[1439]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:02:37.033645 ignition[1439]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 30 14:02:37.033645 ignition[1439]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 30 14:02:37.041762 ignition[1439]: INFO : PUT result: OK Jan 30 14:02:37.039762 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 14:02:37.041300 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 14:02:37.051743 ignition[1439]: INFO : umount: umount passed Jan 30 14:02:37.054465 ignition[1439]: INFO : Ignition finished successfully Jan 30 14:02:37.056673 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 14:02:37.056877 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 14:02:37.060396 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 14:02:37.060566 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 14:02:37.063667 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 14:02:37.063752 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 14:02:37.066791 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 14:02:37.066870 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 14:02:37.068742 systemd[1]: Stopped target network.target - Network. Jan 30 14:02:37.070556 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 14:02:37.070639 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
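The per-stage Ignition services being stopped here (mount, disks, kargs, fetch, fetch-offline) are the counterparts of the stages logged earlier, and op(11) in the files stage above recorded the outcome in /sysroot/etc/.ignition-result.json. A sketch for inspecting that file after boot; the schema is not shown in this log, so it just pretty-prints whatever is there (note the path loses the /sysroot prefix once the real root is mounted):

    import json
    import pprint

    # Written by Ignition's files stage (op(11) above); /sysroot/etc/... in
    # the initrd becomes /etc/... after switch-root.
    with open("/etc/.ignition-result.json") as f:
        pprint.pprint(json.load(f))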
Jan 30 14:02:37.072760 systemd[1]: Stopped target paths.target - Path Units. Jan 30 14:02:37.074364 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 14:02:37.078091 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:02:37.081586 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 14:02:37.084879 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 14:02:37.086653 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 14:02:37.086730 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:02:37.088505 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 14:02:37.088574 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:02:37.090374 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 14:02:37.090459 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 14:02:37.092267 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 14:02:37.092347 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 14:02:37.094242 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 14:02:37.094331 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 14:02:37.096490 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 14:02:37.098622 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 14:02:37.107483 systemd-networkd[1197]: eth0: DHCPv6 lease lost Jan 30 14:02:37.110678 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 14:02:37.110925 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 14:02:37.122274 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 14:02:37.122371 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:02:37.147034 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 14:02:37.164931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 14:02:37.165051 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:02:37.167856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:02:37.173067 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 14:02:37.176719 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 14:02:37.198520 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 14:02:37.200485 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:02:37.205386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 14:02:37.205505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 14:02:37.209767 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 14:02:37.209857 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:02:37.214109 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 14:02:37.214212 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:02:37.221445 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 14:02:37.221547 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Jan 30 14:02:37.224962 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:02:37.225053 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:02:37.245518 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 14:02:37.248210 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 14:02:37.248454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:02:37.256201 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 14:02:37.256321 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 14:02:37.258403 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 14:02:37.258483 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:02:37.261260 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 14:02:37.261337 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:02:37.264169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:02:37.264264 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:02:37.297534 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 14:02:37.298021 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 14:02:37.309025 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 14:02:37.309234 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 14:02:37.309997 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 14:02:37.326709 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 14:02:37.354377 systemd[1]: Switching root. Jan 30 14:02:37.393648 systemd-journald[250]: Journal stopped Jan 30 14:02:39.792164 systemd-journald[250]: Received SIGTERM from PID 1 (systemd). Jan 30 14:02:39.795371 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 14:02:39.795426 kernel: SELinux: policy capability open_perms=1 Jan 30 14:02:39.795457 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 14:02:39.795488 kernel: SELinux: policy capability always_check_network=0 Jan 30 14:02:39.795525 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 14:02:39.795554 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 14:02:39.795584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 14:02:39.795613 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 14:02:39.795644 kernel: audit: type=1403 audit(1738245758.157:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 14:02:39.795682 systemd[1]: Successfully loaded SELinux policy in 58.530ms. Jan 30 14:02:39.795731 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.577ms. Jan 30 14:02:39.795767 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:02:39.795803 systemd[1]: Detected virtualization amazon. Jan 30 14:02:39.795835 systemd[1]: Detected architecture arm64. Jan 30 14:02:39.795867 systemd[1]: Detected first boot. 
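The systemd 255 banner above lists compile-time options as +NAME/-NAME tokens (+SELINUX but -APPARMOR on this build, for example). A tiny sketch that splits such a banner into enabled and disabled feature sets; the string is abbreviated from the log:

    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
              "+GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")  # abbreviated
    tokens = banner.split()
    enabled = [t[1:] for t in tokens if t.startswith("+")]
    disabled = [t[1:] for t in tokens if t.startswith("-")]
    print("enabled:", enabled)
    print("disabled:", disabled)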
Jan 30 14:02:39.795908 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:02:39.795940 zram_generator::config[1499]: No configuration found. Jan 30 14:02:39.795984 systemd[1]: Populated /etc with preset unit settings. Jan 30 14:02:39.796016 systemd[1]: Queued start job for default target multi-user.target. Jan 30 14:02:39.796049 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 30 14:02:39.796083 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 14:02:39.796121 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 14:02:39.796153 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 14:02:39.796186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 14:02:39.796220 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 14:02:39.796274 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 14:02:39.796311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 14:02:39.796342 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 14:02:39.796373 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:02:39.796410 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 14:02:39.796443 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 14:02:39.796479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 14:02:39.796512 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 14:02:39.796545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:02:39.796575 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 14:02:39.796605 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:02:39.796637 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 14:02:39.796671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:02:39.796707 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:02:39.796740 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:02:39.796770 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:02:39.796799 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 14:02:39.796832 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 14:02:39.796863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:02:39.796894 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:02:39.796924 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:02:39.796957 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:02:39.796990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 14:02:39.797020 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 14:02:39.797051 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
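Unit names like system-addon\x2dconfig.slice and dev-disk-by\x2dlabel-OEM.device above use systemd's path escaping: "/" becomes "-", and a literal "-" (or any other byte that is not alphanumeric) becomes \xNN. A rough sketch of the rule, reproducing the names seen in this log; the real systemd-escape also special-cases empty paths, leading dots, and a few more characters:

    def escape_path(path):
        # Minimal sketch of `systemd-escape --path`; not the full rule set.
        body = path.strip("/") or "-"
        out = []
        for ch in body:
            if ch == "/":
                out.append("-")          # path separators become dashes
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))  # e.g. "-" -> \x2d
        return "".join(out)

    print(escape_path("/dev/disk/by-label/OEM"))  # dev-disk-by\x2dlabel-OEM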
Jan 30 14:02:39.797082 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 14:02:39.797113 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 14:02:39.797147 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 14:02:39.797177 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 14:02:39.797208 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 14:02:39.800314 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 14:02:39.800375 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:02:39.800409 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:02:39.800440 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 14:02:39.800474 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:02:39.800506 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:02:39.800538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:02:39.800570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 14:02:39.800602 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:02:39.800644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 14:02:39.800676 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 30 14:02:39.800711 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 30 14:02:39.800741 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:02:39.800771 kernel: loop: module loaded Jan 30 14:02:39.800800 kernel: fuse: init (API version 7.39) Jan 30 14:02:39.800829 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:02:39.800859 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 14:02:39.800894 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 14:02:39.800924 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:02:39.800955 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 14:02:39.800985 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 14:02:39.801068 systemd-journald[1597]: Collecting audit messages is disabled. Jan 30 14:02:39.801120 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 14:02:39.801150 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 14:02:39.801184 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 14:02:39.801214 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 14:02:39.801291 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:02:39.801328 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 14:02:39.801361 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 14:02:39.801395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 30 14:02:39.801427 systemd-journald[1597]: Journal started Jan 30 14:02:39.801473 systemd-journald[1597]: Runtime Journal (/run/log/journal/ec2397cd28ae055270ee1a9a5a04d18e) is 8.0M, max 75.3M, 67.3M free. Jan 30 14:02:39.804523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:02:39.811308 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:02:39.813686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:02:39.814063 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:02:39.816999 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 14:02:39.817463 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 14:02:39.820207 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:02:39.820610 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:02:39.823381 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 14:02:39.826456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 14:02:39.847078 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:02:39.859338 kernel: ACPI: bus type drm_connector registered Jan 30 14:02:39.860906 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 14:02:39.878504 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 14:02:39.883446 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 14:02:39.887447 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 14:02:39.896594 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 14:02:39.914504 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 14:02:39.918559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:02:39.930597 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 14:02:39.932711 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:02:39.940716 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:02:39.959573 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:02:39.970681 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 14:02:39.978121 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:02:39.981986 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:02:39.985964 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:02:39.990666 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:02:40.011681 systemd-journald[1597]: Time spent on flushing to /var/log/journal/ec2397cd28ae055270ee1a9a5a04d18e is 59.507ms for 896 entries. Jan 30 14:02:40.011681 systemd-journald[1597]: System Journal (/var/log/journal/ec2397cd28ae055270ee1a9a5a04d18e) is 8.0M, max 195.6M, 187.6M free. 
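The flush accounting in the journald line above (59.507 ms for 896 entries) works out to roughly 66 microseconds per entry written to the persistent journal; a one-line check:

    # Average cost per entry from the journald flush statistics above.
    print(59.507e-3 / 896 * 1e6)  # ~66.4 microseconds per journal entry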
Jan 30 14:02:40.078514 systemd-journald[1597]: Received client request to flush runtime journal. Jan 30 14:02:40.016460 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:02:40.022558 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:02:40.085578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:02:40.100228 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Jan 30 14:02:40.101429 systemd-tmpfiles[1648]: ACLs are not supported, ignoring. Jan 30 14:02:40.101993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:02:40.118963 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:02:40.139613 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:02:40.153802 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:02:40.162567 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:02:40.209630 udevadm[1669]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:02:40.236470 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 14:02:40.247589 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:02:40.291394 systemd-tmpfiles[1673]: ACLs are not supported, ignoring. Jan 30 14:02:40.291439 systemd-tmpfiles[1673]: ACLs are not supported, ignoring. Jan 30 14:02:40.301543 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:02:41.056550 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:02:41.064647 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:02:41.124493 systemd-udevd[1679]: Using default interface naming scheme 'v255'. Jan 30 14:02:41.213761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:02:41.228616 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:02:41.259522 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 14:02:41.364367 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 30 14:02:41.376240 (udev-worker)[1690]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:02:41.412953 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:02:41.564022 systemd-networkd[1682]: lo: Link UP Jan 30 14:02:41.564037 systemd-networkd[1682]: lo: Gained carrier Jan 30 14:02:41.566725 systemd-networkd[1682]: Enumeration completed Jan 30 14:02:41.567407 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:02:41.574060 systemd-networkd[1682]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:02:41.574081 systemd-networkd[1682]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:02:41.577521 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:02:41.583106 systemd-networkd[1682]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 30 14:02:41.583192 systemd-networkd[1682]: eth0: Link UP Jan 30 14:02:41.585403 systemd-networkd[1682]: eth0: Gained carrier Jan 30 14:02:41.585451 systemd-networkd[1682]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:02:41.600927 systemd-networkd[1682]: eth0: DHCPv4 address 172.31.23.237/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 30 14:02:41.650286 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1688) Jan 30 14:02:41.661096 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:02:41.858660 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:02:41.862013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:02:41.892080 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 30 14:02:41.901541 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:02:41.929371 lvm[1808]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:02:41.967924 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:02:41.971359 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:02:41.982572 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:02:42.002964 lvm[1811]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:02:42.041785 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:02:42.044563 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:02:42.046971 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 14:02:42.047027 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:02:42.049019 systemd[1]: Reached target machines.target - Containers. Jan 30 14:02:42.052803 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 14:02:42.066537 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 14:02:42.077726 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 14:02:42.079873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:02:42.082580 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:02:42.092506 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:02:42.100574 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:02:42.104546 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:02:42.138039 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:02:42.139425 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:02:42.152851 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
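The DHCPv4 lease logged above (172.31.23.237/20 with gateway 172.31.16.1) can be sanity-checked with the standard library: the gateway is the first host address of the /20, which is where AWS places the VPC router. A short verification sketch:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.23.237/20")
    print(iface.network)                      # 172.31.16.0/20
    print(iface.network.network_address + 1)  # 172.31.16.1, the gateway above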
Jan 30 14:02:42.172631 kernel: loop0: detected capacity change from 0 to 114328 Jan 30 14:02:42.270292 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:02:42.287698 kernel: loop1: detected capacity change from 0 to 52536 Jan 30 14:02:42.335290 kernel: loop2: detected capacity change from 0 to 194096 Jan 30 14:02:42.459333 kernel: loop3: detected capacity change from 0 to 114432 Jan 30 14:02:42.554315 kernel: loop4: detected capacity change from 0 to 114328 Jan 30 14:02:42.567766 kernel: loop5: detected capacity change from 0 to 52536 Jan 30 14:02:42.586306 kernel: loop6: detected capacity change from 0 to 194096 Jan 30 14:02:42.618282 kernel: loop7: detected capacity change from 0 to 114432 Jan 30 14:02:42.628022 (sd-merge)[1832]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 30 14:02:42.629043 (sd-merge)[1832]: Merged extensions into '/usr'. Jan 30 14:02:42.637851 systemd[1]: Reloading requested from client PID 1819 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:02:42.637886 systemd[1]: Reloading... Jan 30 14:02:42.714796 systemd-networkd[1682]: eth0: Gained IPv6LL Jan 30 14:02:42.749297 zram_generator::config[1857]: No configuration found. Jan 30 14:02:43.035969 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:02:43.178629 systemd[1]: Reloading finished in 539 ms. Jan 30 14:02:43.206308 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:02:43.209474 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:02:43.224630 systemd[1]: Starting ensure-sysext.service... Jan 30 14:02:43.232731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:02:43.254026 systemd[1]: Reloading requested from client PID 1920 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:02:43.254054 systemd[1]: Reloading... Jan 30 14:02:43.296612 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:02:43.298409 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:02:43.304516 systemd-tmpfiles[1921]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:02:43.305854 systemd-tmpfiles[1921]: ACLs are not supported, ignoring. Jan 30 14:02:43.306152 systemd-tmpfiles[1921]: ACLs are not supported, ignoring. Jan 30 14:02:43.314263 systemd-tmpfiles[1921]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:02:43.314284 systemd-tmpfiles[1921]: Skipping /boot Jan 30 14:02:43.339645 systemd-tmpfiles[1921]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:02:43.339810 systemd-tmpfiles[1921]: Skipping /boot Jan 30 14:02:43.439299 zram_generator::config[1957]: No configuration found. Jan 30 14:02:43.613789 ldconfig[1816]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 14:02:43.671154 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
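The (sd-merge) lines above show systemd-sysext picking up four extension images, including the kubernetes.raw symlink that Ignition created under /etc/extensions earlier in this log. A sketch of just the discovery step, under the assumption that only *.raw images are in play here; systemd-sysext searches /etc/extensions, /run/extensions, and /var/lib/extensions, and the subsequent overlayfs merge into /usr is omitted:

    import os

    SEARCH_PATHS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")
    images = []
    for directory in SEARCH_PATHS:
        if os.path.isdir(directory):
            images += [os.path.join(directory, name)
                       for name in sorted(os.listdir(directory))
                       if name.endswith(".raw")]
    print(images)  # e.g. ['/etc/extensions/kubernetes.raw', ...]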
Jan 30 14:02:43.812172 systemd[1]: Reloading finished in 557 ms. Jan 30 14:02:43.838186 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:02:43.850300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:02:43.865640 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:02:43.878703 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:02:43.887549 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:02:43.900756 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:02:43.908557 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:02:43.933664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:02:43.943289 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:02:43.957328 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:02:43.970442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:02:43.973596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:02:43.985176 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:02:43.985605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:02:44.012947 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:02:44.016963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:02:44.018194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:02:44.027235 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:02:44.036045 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 14:02:44.042557 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:02:44.043206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:02:44.056275 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:02:44.068544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:02:44.076659 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:02:44.093195 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:02:44.095358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:02:44.101508 augenrules[2050]: No rules Jan 30 14:02:44.112522 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:02:44.118662 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:02:44.142567 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:02:44.147189 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:02:44.148974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jan 30 14:02:44.170420 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:02:44.170962 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:02:44.175410 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:02:44.175776 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:02:44.180813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:02:44.197712 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:02:44.202695 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:02:44.204793 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:02:44.205025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:02:44.207452 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:02:44.215778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:02:44.220801 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:02:44.221180 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:02:44.236673 systemd[1]: Finished ensure-sysext.service. Jan 30 14:02:44.249662 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:02:44.250058 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:02:44.263480 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:02:44.263562 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:02:44.299175 systemd-resolved[2016]: Positive Trust Anchors: Jan 30 14:02:44.299215 systemd-resolved[2016]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:02:44.299424 systemd-resolved[2016]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:02:44.307049 systemd-resolved[2016]: Defaulting to hostname 'linux'. Jan 30 14:02:44.310279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:02:44.312652 systemd[1]: Reached target network.target - Network. Jan 30 14:02:44.314448 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:02:44.316409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:02:44.318492 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:02:44.320521 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
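The "Positive Trust Anchors" line above is systemd-resolved loading the built-in DNSSEC trust anchor: the DS record of the root zone's KSK-2017 (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256). A sketch that splits the record from the log into its fields:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, rtype, key_tag, algorithm, digest_type, digest = ds.split()
    # 20326 = root KSK-2017 key tag, 8 = RSASHA256, 2 = SHA-256 digest
    print(rtype, key_tag, algorithm, digest_type)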
Jan 30 14:02:44.322823 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:02:44.325423 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:02:44.327708 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:02:44.329921 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:02:44.332118 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:02:44.332172 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:02:44.333757 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:02:44.336694 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:02:44.341229 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:02:44.345945 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:02:44.350048 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:02:44.352359 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:02:44.354334 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:02:44.356646 systemd[1]: System is tainted: cgroupsv1 Jan 30 14:02:44.356845 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:02:44.357016 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:02:44.365622 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:02:44.372856 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 14:02:44.385494 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:02:44.392732 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:02:44.403736 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:02:44.405665 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:02:44.415505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:44.429526 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:02:44.450546 systemd[1]: Started ntpd.service - Network Time Service. Jan 30 14:02:44.460591 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:02:44.475340 jq[2084]: false Jan 30 14:02:44.493608 extend-filesystems[2085]: Found loop4 Jan 30 14:02:44.493608 extend-filesystems[2085]: Found loop5 Jan 30 14:02:44.493608 extend-filesystems[2085]: Found loop6 Jan 30 14:02:44.493608 extend-filesystems[2085]: Found loop7 Jan 30 14:02:44.493608 extend-filesystems[2085]: Found nvme0n1 Jan 30 14:02:44.491389 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 30 14:02:44.501886 extend-filesystems[2085]: Found nvme0n1p1 Jan 30 14:02:44.501886 extend-filesystems[2085]: Found nvme0n1p2 Jan 30 14:02:44.501886 extend-filesystems[2085]: Found nvme0n1p3 Jan 30 14:02:44.506941 extend-filesystems[2085]: Found usr Jan 30 14:02:44.506941 extend-filesystems[2085]: Found nvme0n1p4 Jan 30 14:02:44.506941 extend-filesystems[2085]: Found nvme0n1p6 Jan 30 14:02:44.506941 extend-filesystems[2085]: Found nvme0n1p7 Jan 30 14:02:44.506941 extend-filesystems[2085]: Found nvme0n1p9 Jan 30 14:02:44.506941 extend-filesystems[2085]: Checking size of /dev/nvme0n1p9 Jan 30 14:02:44.524633 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 30 14:02:44.533659 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:02:44.565712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 14:02:44.584517 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:02:44.589229 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:02:44.597558 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:02:44.609381 extend-filesystems[2085]: Resized partition /dev/nvme0n1p9 Jan 30 14:02:44.612643 dbus-daemon[2083]: [system] SELinux support is enabled Jan 30 14:02:44.624392 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:02:44.632776 extend-filesystems[2115]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:02:44.640500 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 30 14:02:44.622546 dbus-daemon[2083]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1682 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 30 14:02:44.640402 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:02:44.692098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:02:44.692660 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 14:02:44.704668 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:02:44.705200 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: ---------------------------------------------------- Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: corporation. 
Support and training for ntp-4 are Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: available at https://www.nwtime.org/support Jan 30 14:02:44.712445 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: ---------------------------------------------------- Jan 30 14:02:44.707010 ntpd[2088]: ntpd 4.2.8p17@1.4004-o Wed Jan 29 09:31:57 UTC 2025 (1): Starting Jan 30 14:02:44.707057 ntpd[2088]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 30 14:02:44.707077 ntpd[2088]: ---------------------------------------------------- Jan 30 14:02:44.707097 ntpd[2088]: ntp-4 is maintained by Network Time Foundation, Jan 30 14:02:44.707117 ntpd[2088]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 30 14:02:44.707136 ntpd[2088]: corporation. Support and training for ntp-4 are Jan 30 14:02:44.707154 ntpd[2088]: available at https://www.nwtime.org/support Jan 30 14:02:44.707173 ntpd[2088]: ---------------------------------------------------- Jan 30 14:02:44.717392 ntpd[2088]: proto: precision = 0.108 usec (-23) Jan 30 14:02:44.721434 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: proto: precision = 0.108 usec (-23) Jan 30 14:02:44.721434 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: basedate set to 2025-01-17 Jan 30 14:02:44.721434 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: gps base set to 2025-01-19 (week 2350) Jan 30 14:02:44.719434 ntpd[2088]: basedate set to 2025-01-17 Jan 30 14:02:44.719470 ntpd[2088]: gps base set to 2025-01-19 (week 2350) Jan 30 14:02:44.727380 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:02:44.727901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:02:44.733078 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:02:44.733430 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen and drop on 0 v6wildcard [::]:123 Jan 30 14:02:44.734400 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen normally on 3 eth0 172.31.23.237:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listen normally on 5 eth0 [fe80::45c:68ff:fea7:153b%2]:123 Jan 30 14:02:44.738321 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: Listening on routing socket on fd #22 for interface updates Jan 30 14:02:44.735080 ntpd[2088]: Listen normally on 2 lo 127.0.0.1:123 Jan 30 14:02:44.735144 ntpd[2088]: Listen normally on 3 eth0 172.31.23.237:123 Jan 30 14:02:44.735222 ntpd[2088]: Listen normally on 4 lo [::1]:123 Jan 30 14:02:44.735329 ntpd[2088]: Listen normally on 5 eth0 [fe80::45c:68ff:fea7:153b%2]:123 Jan 30 14:02:44.735395 ntpd[2088]: Listening on routing socket on fd #22 for interface updates Jan 30 14:02:44.744964 jq[2114]: true Jan 30 14:02:44.759982 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:02:44.764134 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:02:44.764134 ntpd[2088]: 30 Jan 14:02:44 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:02:44.760039 ntpd[2088]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 30 14:02:44.775017 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 30 14:02:44.803714 
extend-filesystems[2115]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 30 14:02:44.803714 extend-filesystems[2115]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:02:44.803714 extend-filesystems[2115]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 30 14:02:44.818842 extend-filesystems[2085]: Resized filesystem in /dev/nvme0n1p9 Jan 30 14:02:44.834415 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:02:44.834936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:02:44.838890 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 14:02:44.854957 (ntainerd)[2139]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:02:44.901747 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 14:02:44.923352 tar[2126]: linux-arm64/helm Jan 30 14:02:44.923649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:02:44.923696 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:02:44.934576 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 30 14:02:44.950448 jq[2137]: true Jan 30 14:02:44.936544 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:02:44.936586 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:02:44.954876 update_engine[2112]: I20250130 14:02:44.944213 2112 main.cc:92] Flatcar Update Engine starting Jan 30 14:02:44.955473 coreos-metadata[2081]: Jan 30 14:02:44.953 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:02:44.958112 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 30 14:02:44.969488 update_engine[2112]: I20250130 14:02:44.969058 2112 update_check_scheduler.cc:74] Next update check in 2m13s Jan 30 14:02:44.974151 systemd[1]: Started update-engine.service - Update Engine. 
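The tar[2126]: linux-arm64/helm line just above, together with the unit description "Unpack helm to /opt/bin" and the archive Ignition downloaded to /opt earlier, suggests prepare-helm.service extracts the single helm binary from the tarball. The unit's actual ExecStart is not shown in this log, so the following is an assumed equivalent, not the real unit's command:

    import os
    import shutil
    import tarfile

    # Assumed behavior of prepare-helm.service, based on the log lines above.
    os.makedirs("/opt/bin", exist_ok=True)
    with tarfile.open("/opt/helm-v3.13.2-linux-arm64.tar.gz") as tar:
        member = tar.getmember("linux-arm64/helm")  # the member tar[2126] logs
        with tar.extractfile(member) as src, open("/opt/bin/helm", "wb") as dst:
            shutil.copyfileobj(src, dst)
    os.chmod("/opt/bin/helm", 0o755)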
Jan 30 14:02:44.974990 coreos-metadata[2081]: Jan 30 14:02:44.974 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 30 14:02:44.975662 coreos-metadata[2081]: Jan 30 14:02:44.975 INFO Fetch successful Jan 30 14:02:44.982270 coreos-metadata[2081]: Jan 30 14:02:44.979 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 30 14:02:44.984480 coreos-metadata[2081]: Jan 30 14:02:44.984 INFO Fetch successful Jan 30 14:02:44.984480 coreos-metadata[2081]: Jan 30 14:02:44.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 30 14:02:44.999471 coreos-metadata[2081]: Jan 30 14:02:44.999 INFO Fetch successful Jan 30 14:02:44.999626 coreos-metadata[2081]: Jan 30 14:02:44.999 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 30 14:02:45.003844 coreos-metadata[2081]: Jan 30 14:02:45.002 INFO Fetch successful Jan 30 14:02:45.003844 coreos-metadata[2081]: Jan 30 14:02:45.002 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 30 14:02:45.004308 coreos-metadata[2081]: Jan 30 14:02:45.004 INFO Fetch failed with 404: resource not found Jan 30 14:02:45.004396 coreos-metadata[2081]: Jan 30 14:02:45.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 30 14:02:45.004824 coreos-metadata[2081]: Jan 30 14:02:45.004 INFO Fetch successful Jan 30 14:02:45.004897 coreos-metadata[2081]: Jan 30 14:02:45.004 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 30 14:02:45.006094 coreos-metadata[2081]: Jan 30 14:02:45.006 INFO Fetch successful Jan 30 14:02:45.006094 coreos-metadata[2081]: Jan 30 14:02:45.006 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 30 14:02:45.007415 coreos-metadata[2081]: Jan 30 14:02:45.006 INFO Fetch successful Jan 30 14:02:45.007415 coreos-metadata[2081]: Jan 30 14:02:45.006 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 30 14:02:45.007931 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 30 14:02:45.012489 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:02:45.019563 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:02:45.031274 coreos-metadata[2081]: Jan 30 14:02:45.030 INFO Fetch successful Jan 30 14:02:45.031274 coreos-metadata[2081]: Jan 30 14:02:45.030 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 30 14:02:45.032864 coreos-metadata[2081]: Jan 30 14:02:45.032 INFO Fetch successful Jan 30 14:02:45.141856 systemd-logind[2109]: Watching system buttons on /dev/input/event0 (Power Button) Jan 30 14:02:45.143213 systemd-logind[2109]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 30 14:02:45.151488 systemd-logind[2109]: New seat seat0. Jan 30 14:02:45.168123 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:02:45.304417 amazon-ssm-agent[2166]: Initializing new seelog logger Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
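The ext4 resize a few lines up grows /dev/nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to roughly 5.7 GiB, which is the root partition being expanded to fill the EBS volume on first boot. The conversion:

    # 4 KiB blocks -> GiB, for the before/after figures resize2fs reports above.
    for blocks in (553472, 1489915):
        print(blocks, "blocks ->", round(blocks * 4096 / 2**30, 2), "GiB")
    # 553472 blocks -> 2.11 GiB; 1489915 blocks -> 5.68 GiB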
Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 processing appconfig overrides Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO Proxy environment variables: Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 processing appconfig overrides Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.324774 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 processing appconfig overrides Jan 30 14:02:45.325280 bash[2206]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:02:45.325134 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:02:45.336929 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.336929 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 30 14:02:45.336929 amazon-ssm-agent[2166]: 2025/01/30 14:02:45 processing appconfig overrides Jan 30 14:02:45.340073 systemd[1]: Starting sshkeys.service... Jan 30 14:02:45.359979 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 14:02:45.362762 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:02:45.404868 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 30 14:02:45.410707 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO https_proxy: Jan 30 14:02:45.440037 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 30 14:02:45.513426 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2186) Jan 30 14:02:45.513523 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO http_proxy: Jan 30 14:02:45.564914 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 30 14:02:45.565184 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 30 14:02:45.570038 dbus-daemon[2083]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2159 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 30 14:02:45.613886 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO no_proxy: Jan 30 14:02:45.658100 systemd[1]: Starting polkit.service - Authorization Manager... 
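[Note: update-ssh-keys and coreos-metadata-sshkeys@core above combine the two previous steps: the agent pulls the instance's OpenSSH public key from the metadata service and merges it into /home/core/.ssh/authorized_keys. The real tool manages named key sets; a rough sketch of just the write, assuming the key material was already fetched (for example with the token helper above):

    import os

    def install_key(pubkey: str, home: str = "/home/core") -> None:
        ssh_dir = os.path.join(home, ".ssh")
        os.makedirs(ssh_dir, mode=0o700, exist_ok=True)
        path = os.path.join(ssh_dir, "authorized_keys")
        with open(path, "a") as f:      # append; do not clobber existing keys
            f.write(pubkey.rstrip("\n") + "\n")
        os.chmod(path, 0o600)           # sshd rejects overly permissive key files
]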
Jan 30 14:02:45.714192 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO Checking if agent identity type OnPrem can be assumed Jan 30 14:02:45.767149 containerd[2139]: time="2025-01-30T14:02:45.767000568Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 14:02:45.777214 locksmithd[2167]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:02:45.797917 polkitd[2232]: Started polkitd version 121 Jan 30 14:02:45.813873 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO Checking if agent identity type EC2 can be assumed Jan 30 14:02:45.853350 coreos-metadata[2216]: Jan 30 14:02:45.853 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 30 14:02:45.857242 polkitd[2232]: Loading rules from directory /etc/polkit-1/rules.d Jan 30 14:02:45.857419 polkitd[2232]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 30 14:02:45.859005 coreos-metadata[2216]: Jan 30 14:02:45.858 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 30 14:02:45.863082 coreos-metadata[2216]: Jan 30 14:02:45.862 INFO Fetch successful Jan 30 14:02:45.863082 coreos-metadata[2216]: Jan 30 14:02:45.862 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 30 14:02:45.863024 polkitd[2232]: Finished loading, compiling and executing 2 rules Jan 30 14:02:45.865971 coreos-metadata[2216]: Jan 30 14:02:45.865 INFO Fetch successful Jan 30 14:02:45.869510 dbus-daemon[2083]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 30 14:02:45.876297 systemd[1]: Started polkit.service - Authorization Manager. Jan 30 14:02:45.878348 polkitd[2232]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 30 14:02:45.884565 unknown[2216]: wrote ssh authorized keys file for user: core Jan 30 14:02:45.918274 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO Agent will take identity from EC2 Jan 30 14:02:45.951408 update-ssh-keys[2291]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:02:45.955478 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 30 14:02:45.974738 systemd[1]: Finished sshkeys.service. Jan 30 14:02:46.003803 systemd-hostnamed[2159]: Hostname set to (transient) Jan 30 14:02:46.003960 systemd-resolved[2016]: System hostname changed to 'ip-172-31-23-237'. Jan 30 14:02:46.013354 containerd[2139]: time="2025-01-30T14:02:46.013278646Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.017860 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:02:46.019640 containerd[2139]: time="2025-01-30T14:02:46.019497490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:02:46.019640 containerd[2139]: time="2025-01-30T14:02:46.019566466Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:02:46.019640 containerd[2139]: time="2025-01-30T14:02:46.019602118Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jan 30 14:02:46.019991 containerd[2139]: time="2025-01-30T14:02:46.019893262Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:02:46.020103 containerd[2139]: time="2025-01-30T14:02:46.019938574Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.020275 containerd[2139]: time="2025-01-30T14:02:46.020185414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:02:46.020275 containerd[2139]: time="2025-01-30T14:02:46.020214958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.022051 containerd[2139]: time="2025-01-30T14:02:46.021838462Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:02:46.022051 containerd[2139]: time="2025-01-30T14:02:46.021898966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.022051 containerd[2139]: time="2025-01-30T14:02:46.021933958Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:02:46.022051 containerd[2139]: time="2025-01-30T14:02:46.021958966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.023212 containerd[2139]: time="2025-01-30T14:02:46.022135246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.023212 containerd[2139]: time="2025-01-30T14:02:46.022578454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:02:46.029218 containerd[2139]: time="2025-01-30T14:02:46.024977710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:02:46.029218 containerd[2139]: time="2025-01-30T14:02:46.027319306Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:02:46.029218 containerd[2139]: time="2025-01-30T14:02:46.027562570Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 14:02:46.029218 containerd[2139]: time="2025-01-30T14:02:46.027664678Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037351714Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037467802Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037593334Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037640458Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037674994Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.037921342Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038584786Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038795434Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038830906Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038861506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038895358Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038925922Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038955358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040270 containerd[2139]: time="2025-01-30T14:02:46.038987362Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039022978Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039053266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039081646Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039109306Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039149806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039182818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.040899 containerd[2139]: time="2025-01-30T14:02:46.039217186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044302558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044372482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044408986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044484826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044517802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044550118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044585770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044614930Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044646538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044680918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044718382Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044765530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044794102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.047386 containerd[2139]: time="2025-01-30T14:02:46.044820478Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.044935978Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.044971630Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045000430Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045037738Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045063682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045091834Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045115222Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:02:46.048068 containerd[2139]: time="2025-01-30T14:02:46.045144766Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 14:02:46.048432 containerd[2139]: time="2025-01-30T14:02:46.045669010Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:02:46.048432 containerd[2139]: time="2025-01-30T14:02:46.045785878Z" level=info msg="Connect containerd service" Jan 30 14:02:46.048432 containerd[2139]: time="2025-01-30T14:02:46.045875038Z" level=info msg="using legacy CRI server" Jan 30 14:02:46.048432 containerd[2139]: time="2025-01-30T14:02:46.045892906Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:02:46.057497 containerd[2139]: time="2025-01-30T14:02:46.055954642Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:02:46.070891 
containerd[2139]: time="2025-01-30T14:02:46.070821754Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:02:46.071490 containerd[2139]: time="2025-01-30T14:02:46.071442898Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:02:46.071592 containerd[2139]: time="2025-01-30T14:02:46.071567818Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:02:46.071713 containerd[2139]: time="2025-01-30T14:02:46.071652394Z" level=info msg="Start subscribing containerd event" Jan 30 14:02:46.071824 containerd[2139]: time="2025-01-30T14:02:46.071731870Z" level=info msg="Start recovering state" Jan 30 14:02:46.071998 containerd[2139]: time="2025-01-30T14:02:46.071871646Z" level=info msg="Start event monitor" Jan 30 14:02:46.071998 containerd[2139]: time="2025-01-30T14:02:46.071898610Z" level=info msg="Start snapshots syncer" Jan 30 14:02:46.071998 containerd[2139]: time="2025-01-30T14:02:46.071929762Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:02:46.071998 containerd[2139]: time="2025-01-30T14:02:46.071949610Z" level=info msg="Start streaming server" Jan 30 14:02:46.075556 containerd[2139]: time="2025-01-30T14:02:46.072090298Z" level=info msg="containerd successfully booted in 0.316265s" Jan 30 14:02:46.072272 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:02:46.118885 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:02:46.219817 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 30 14:02:46.323492 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 30 14:02:46.440659 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 30 14:02:46.537396 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] Starting Core Agent Jan 30 14:02:46.638136 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 30 14:02:46.739263 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [Registrar] Starting registrar module Jan 30 14:02:46.840390 amazon-ssm-agent[2166]: 2025-01-30 14:02:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 30 14:02:47.019293 tar[2126]: linux-arm64/LICENSE Jan 30 14:02:47.023320 tar[2126]: linux-arm64/README.md Jan 30 14:02:47.062910 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:02:47.481230 amazon-ssm-agent[2166]: 2025-01-30 14:02:47 INFO [EC2Identity] EC2 registration was successful. Jan 30 14:02:47.512348 amazon-ssm-agent[2166]: 2025-01-30 14:02:47 INFO [CredentialRefresher] credentialRefresher has started Jan 30 14:02:47.512348 amazon-ssm-agent[2166]: 2025-01-30 14:02:47 INFO [CredentialRefresher] Starting credentials refresher loop Jan 30 14:02:47.512348 amazon-ssm-agent[2166]: 2025-01-30 14:02:47 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 30 14:02:47.582168 amazon-ssm-agent[2166]: 2025-01-30 14:02:47 INFO [CredentialRefresher] Next credential rotation will be in 32.4499928246 minutes Jan 30 14:02:47.755675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
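[Note: the one error in the containerd startup above is expected at this stage: the CRI plugin found nothing in /etc/cni/net.d, so pod networking stays uninitialized until a cluster network addon installs a config there. For orientation only, a minimal bridge conflist of the shape the loader accepts; the name and subnet here are illustrative, not what this node will actually receive:

    /etc/cni/net.d/10-containerd-net.conflist (sketch):
    {
      "cniVersion": "0.4.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.88.0.0/16"}]],
            "routes": [{"dst": "0.0.0.0/0"}]
          }
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }
]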
Jan 30 14:02:47.770877 (kubelet)[2352]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:02:48.123809 sshd_keygen[2128]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:02:48.168741 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:02:48.185677 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:02:48.200577 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:02:48.201221 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:02:48.217751 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:02:48.238824 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:02:48.250880 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:02:48.261776 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:02:48.264552 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 14:02:48.266985 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:02:48.269850 systemd[1]: Startup finished in 9.497s (kernel) + 10.169s (userspace) = 19.666s. Jan 30 14:02:48.539459 amazon-ssm-agent[2166]: 2025-01-30 14:02:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 30 14:02:48.640212 amazon-ssm-agent[2166]: 2025-01-30 14:02:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2381) started Jan 30 14:02:48.740763 amazon-ssm-agent[2166]: 2025-01-30 14:02:48 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 30 14:02:49.131181 kubelet[2352]: E0130 14:02:49.131093 2352 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:02:49.136542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:02:49.136959 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:02:52.020098 systemd-resolved[2016]: Clock change detected. Flushing caches. Jan 30 14:02:52.462591 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:02:52.477905 systemd[1]: Started sshd@0-172.31.23.237:22-139.178.89.65:35496.service - OpenSSH per-connection server daemon (139.178.89.65:35496). Jan 30 14:02:52.659044 sshd[2395]: Accepted publickey for core from 139.178.89.65 port 35496 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:52.663035 sshd[2395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:52.682340 systemd-logind[2109]: New session 1 of user core. Jan 30 14:02:52.683383 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:02:52.693885 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:02:52.716353 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:02:52.727350 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 30 14:02:52.747822 (systemd)[2401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:02:52.965942 systemd[2401]: Queued start job for default target default.target. Jan 30 14:02:52.966642 systemd[2401]: Created slice app.slice - User Application Slice. Jan 30 14:02:52.966695 systemd[2401]: Reached target paths.target - Paths. Jan 30 14:02:52.966727 systemd[2401]: Reached target timers.target - Timers. Jan 30 14:02:52.977614 systemd[2401]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:02:52.991826 systemd[2401]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:02:52.991959 systemd[2401]: Reached target sockets.target - Sockets. Jan 30 14:02:52.991992 systemd[2401]: Reached target basic.target - Basic System. Jan 30 14:02:52.992087 systemd[2401]: Reached target default.target - Main User Target. Jan 30 14:02:52.992148 systemd[2401]: Startup finished in 232ms. Jan 30 14:02:52.992855 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:02:52.999116 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:02:53.149963 systemd[1]: Started sshd@1-172.31.23.237:22-139.178.89.65:35512.service - OpenSSH per-connection server daemon (139.178.89.65:35512). Jan 30 14:02:53.324370 sshd[2413]: Accepted publickey for core from 139.178.89.65 port 35512 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:53.327027 sshd[2413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:53.334753 systemd-logind[2109]: New session 2 of user core. Jan 30 14:02:53.346054 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:02:53.475357 sshd[2413]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:53.481060 systemd-logind[2109]: Session 2 logged out. Waiting for processes to exit. Jan 30 14:02:53.482759 systemd[1]: sshd@1-172.31.23.237:22-139.178.89.65:35512.service: Deactivated successfully. Jan 30 14:02:53.487885 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:02:53.490773 systemd-logind[2109]: Removed session 2. Jan 30 14:02:53.506046 systemd[1]: Started sshd@2-172.31.23.237:22-139.178.89.65:35526.service - OpenSSH per-connection server daemon (139.178.89.65:35526). Jan 30 14:02:53.677050 sshd[2421]: Accepted publickey for core from 139.178.89.65 port 35526 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:53.679713 sshd[2421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:53.687141 systemd-logind[2109]: New session 3 of user core. Jan 30 14:02:53.695923 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:02:53.815817 sshd[2421]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:53.822922 systemd[1]: sshd@2-172.31.23.237:22-139.178.89.65:35526.service: Deactivated successfully. Jan 30 14:02:53.824634 systemd-logind[2109]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:02:53.829178 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:02:53.830782 systemd-logind[2109]: Removed session 3. Jan 30 14:02:53.847961 systemd[1]: Started sshd@3-172.31.23.237:22-139.178.89.65:35538.service - OpenSSH per-connection server daemon (139.178.89.65:35538). 
Jan 30 14:02:54.010906 sshd[2429]: Accepted publickey for core from 139.178.89.65 port 35538 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:54.013031 sshd[2429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:54.020790 systemd-logind[2109]: New session 4 of user core. Jan 30 14:02:54.031046 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:02:54.156735 sshd[2429]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:54.161857 systemd[1]: sshd@3-172.31.23.237:22-139.178.89.65:35538.service: Deactivated successfully. Jan 30 14:02:54.167711 systemd-logind[2109]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:02:54.169417 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:02:54.171242 systemd-logind[2109]: Removed session 4. Jan 30 14:02:54.185978 systemd[1]: Started sshd@4-172.31.23.237:22-139.178.89.65:35552.service - OpenSSH per-connection server daemon (139.178.89.65:35552). Jan 30 14:02:54.365988 sshd[2437]: Accepted publickey for core from 139.178.89.65 port 35552 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:54.368411 sshd[2437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:54.376673 systemd-logind[2109]: New session 5 of user core. Jan 30 14:02:54.384095 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:02:54.545156 sudo[2441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 14:02:54.545840 sudo[2441]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:02:54.559166 sudo[2441]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:54.582761 sshd[2437]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:54.590252 systemd[1]: sshd@4-172.31.23.237:22-139.178.89.65:35552.service: Deactivated successfully. Jan 30 14:02:54.594863 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:02:54.596205 systemd-logind[2109]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:02:54.597974 systemd-logind[2109]: Removed session 5. Jan 30 14:02:54.610996 systemd[1]: Started sshd@5-172.31.23.237:22-139.178.89.65:35554.service - OpenSSH per-connection server daemon (139.178.89.65:35554). Jan 30 14:02:54.789143 sshd[2446]: Accepted publickey for core from 139.178.89.65 port 35554 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:54.791810 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:54.799404 systemd-logind[2109]: New session 6 of user core. Jan 30 14:02:54.807071 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 14:02:54.912436 sudo[2451]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 14:02:54.913108 sudo[2451]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:02:54.919356 sudo[2451]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:54.929183 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 14:02:54.929823 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:02:54.954954 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Jan 30 14:02:54.958964 auditctl[2454]: No rules Jan 30 14:02:54.959791 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:02:54.960291 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 14:02:54.976976 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 14:02:55.017068 augenrules[2473]: No rules Jan 30 14:02:55.019176 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 14:02:55.023074 sudo[2450]: pam_unix(sudo:session): session closed for user root Jan 30 14:02:55.047871 sshd[2446]: pam_unix(sshd:session): session closed for user core Jan 30 14:02:55.056068 systemd-logind[2109]: Session 6 logged out. Waiting for processes to exit. Jan 30 14:02:55.056976 systemd[1]: sshd@5-172.31.23.237:22-139.178.89.65:35554.service: Deactivated successfully. Jan 30 14:02:55.061793 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 14:02:55.063586 systemd-logind[2109]: Removed session 6. Jan 30 14:02:55.083933 systemd[1]: Started sshd@6-172.31.23.237:22-139.178.89.65:35566.service - OpenSSH per-connection server daemon (139.178.89.65:35566). Jan 30 14:02:55.252833 sshd[2482]: Accepted publickey for core from 139.178.89.65 port 35566 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:02:55.255334 sshd[2482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:02:55.263705 systemd-logind[2109]: New session 7 of user core. Jan 30 14:02:55.269961 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 14:02:55.376373 sudo[2486]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:02:55.377046 sudo[2486]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:02:55.953909 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:02:55.967133 (dockerd)[2501]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:02:56.440557 dockerd[2501]: time="2025-01-30T14:02:56.439514605Z" level=info msg="Starting up" Jan 30 14:02:56.857665 dockerd[2501]: time="2025-01-30T14:02:56.857517004Z" level=info msg="Loading containers: start." Jan 30 14:02:57.067518 kernel: Initializing XFRM netlink socket Jan 30 14:02:57.122593 (udev-worker)[2522]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:02:57.213231 systemd-networkd[1682]: docker0: Link UP Jan 30 14:02:57.237032 dockerd[2501]: time="2025-01-30T14:02:57.236973325Z" level=info msg="Loading containers: done." Jan 30 14:02:57.262361 dockerd[2501]: time="2025-01-30T14:02:57.262152902Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:02:57.262361 dockerd[2501]: time="2025-01-30T14:02:57.262335254Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 14:02:57.262706 dockerd[2501]: time="2025-01-30T14:02:57.262590314Z" level=info msg="Daemon has completed initialization" Jan 30 14:02:57.333777 dockerd[2501]: time="2025-01-30T14:02:57.333353498Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:02:57.334859 systemd[1]: Started docker.service - Docker Application Container Engine. 
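[Note: once "API listen on /run/docker.sock" appears above, the daemon can be health-checked over its Unix socket; the HTTP API answers GET /_ping with "OK". A small stdlib-only probe, assuming the default socket path logged above:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a Unix socket instead of TCP."""
        def __init__(self, path: str):
            super().__init__("localhost")  # host only feeds the Host: header
            self._path = path

        def connect(self) -> None:
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())       # b'OK' when the daemon is healthy
]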
Jan 30 14:02:58.830504 containerd[2139]: time="2025-01-30T14:02:58.830400137Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 14:02:59.496407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:02:59.505854 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:02:59.542141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3559271801.mount: Deactivated successfully. Jan 30 14:02:59.876175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:02:59.891736 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:03:00.023538 kubelet[2670]: E0130 14:03:00.022668 2670 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:03:00.030856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:03:00.031221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:03:01.383601 containerd[2139]: time="2025-01-30T14:03:01.383511510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:01.385668 containerd[2139]: time="2025-01-30T14:03:01.385603170Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 30 14:03:01.389492 containerd[2139]: time="2025-01-30T14:03:01.388612110Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:01.395039 containerd[2139]: time="2025-01-30T14:03:01.394976262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:01.397671 containerd[2139]: time="2025-01-30T14:03:01.397608210Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.567104837s" Jan 30 14:03:01.397806 containerd[2139]: time="2025-01-30T14:03:01.397671498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 30 14:03:01.434860 containerd[2139]: time="2025-01-30T14:03:01.434813622Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 14:03:03.389526 containerd[2139]: time="2025-01-30T14:03:03.388715456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:03.391216 containerd[2139]: time="2025-01-30T14:03:03.391151648Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes 
read=26901561" Jan 30 14:03:03.393452 containerd[2139]: time="2025-01-30T14:03:03.393379304Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:03.400664 containerd[2139]: time="2025-01-30T14:03:03.400563308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:03.402523 containerd[2139]: time="2025-01-30T14:03:03.401866580Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.966818946s" Jan 30 14:03:03.402523 containerd[2139]: time="2025-01-30T14:03:03.401928776Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 30 14:03:03.442016 containerd[2139]: time="2025-01-30T14:03:03.441731276Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 14:03:05.267534 containerd[2139]: time="2025-01-30T14:03:05.266818113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:05.269111 containerd[2139]: time="2025-01-30T14:03:05.269043285Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 30 14:03:05.270977 containerd[2139]: time="2025-01-30T14:03:05.270894093Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:05.276642 containerd[2139]: time="2025-01-30T14:03:05.276589437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:05.279096 containerd[2139]: time="2025-01-30T14:03:05.278892309Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.837097049s" Jan 30 14:03:05.279096 containerd[2139]: time="2025-01-30T14:03:05.278952177Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 30 14:03:05.319437 containerd[2139]: time="2025-01-30T14:03:05.319141774Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 14:03:06.520602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070066834.mount: Deactivated successfully. 
Jan 30 14:03:07.049770 containerd[2139]: time="2025-01-30T14:03:07.049711414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:07.051783 containerd[2139]: time="2025-01-30T14:03:07.051729526Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 30 14:03:07.054513 containerd[2139]: time="2025-01-30T14:03:07.052856518Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:07.061241 containerd[2139]: time="2025-01-30T14:03:07.061175086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:07.064334 containerd[2139]: time="2025-01-30T14:03:07.064283386Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.745083184s" Jan 30 14:03:07.064523 containerd[2139]: time="2025-01-30T14:03:07.064492318Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 30 14:03:07.103887 containerd[2139]: time="2025-01-30T14:03:07.103828834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 14:03:07.657545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196504812.mount: Deactivated successfully. 
Jan 30 14:03:08.838488 containerd[2139]: time="2025-01-30T14:03:08.838373631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:08.850342 containerd[2139]: time="2025-01-30T14:03:08.850281759Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 30 14:03:08.863163 containerd[2139]: time="2025-01-30T14:03:08.862708467Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:08.881845 containerd[2139]: time="2025-01-30T14:03:08.881731791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:08.884719 containerd[2139]: time="2025-01-30T14:03:08.884437887Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.780545081s" Jan 30 14:03:08.884719 containerd[2139]: time="2025-01-30T14:03:08.884523807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 30 14:03:08.927254 containerd[2139]: time="2025-01-30T14:03:08.926996859Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 14:03:09.972619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2976412763.mount: Deactivated successfully. 
Jan 30 14:03:09.981531 containerd[2139]: time="2025-01-30T14:03:09.981447761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:09.983127 containerd[2139]: time="2025-01-30T14:03:09.983075141Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 30 14:03:09.983714 containerd[2139]: time="2025-01-30T14:03:09.983664185Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:09.989689 containerd[2139]: time="2025-01-30T14:03:09.989593421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:09.991611 containerd[2139]: time="2025-01-30T14:03:09.991362869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 1.064310246s" Jan 30 14:03:09.991611 containerd[2139]: time="2025-01-30T14:03:09.991420145Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 30 14:03:10.033041 containerd[2139]: time="2025-01-30T14:03:10.032950213Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 14:03:10.260253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 14:03:10.271797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:03:10.576858 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:03:10.594033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount730248589.mount: Deactivated successfully. Jan 30 14:03:10.598694 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:03:10.698727 kubelet[2824]: E0130 14:03:10.698634 2824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:03:10.705104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:03:10.705863 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
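[Note: kubelet.service has now failed three times for the same reason: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, so this crash loop is the expected state of a node that has booted but not yet joined a cluster; systemd keeps scheduling restarts until the config appears. For orientation only, a skeleton of the KubeletConfiguration that ends up at that path, with field values taken from the kubelet startup logged below (the real file is generated, not hand-written):

    # /var/lib/kubelet/config.yaml (sketch; normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # matches SystemdCgroup:false in the containerd config above
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
]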
Jan 30 14:03:13.107993 containerd[2139]: time="2025-01-30T14:03:13.107700436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:13.109357 containerd[2139]: time="2025-01-30T14:03:13.109260652Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 30 14:03:13.110921 containerd[2139]: time="2025-01-30T14:03:13.110862808Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:13.118777 containerd[2139]: time="2025-01-30T14:03:13.118586548Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:03:13.120929 containerd[2139]: time="2025-01-30T14:03:13.120782032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.087778203s" Jan 30 14:03:13.120929 containerd[2139]: time="2025-01-30T14:03:13.120858820Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 30 14:03:16.325972 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 30 14:03:20.760270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 14:03:20.769799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:03:21.072889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:03:21.087166 (kubelet)[2949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:03:21.166018 kubelet[2949]: E0130 14:03:21.165909 2949 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:03:21.172022 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:03:21.172430 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:03:22.386897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:03:22.395985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:03:22.444511 systemd[1]: Reloading requested from client PID 2965 ('systemctl') (unit session-7.scope)... Jan 30 14:03:22.444541 systemd[1]: Reloading... Jan 30 14:03:22.677542 zram_generator::config[3011]: No configuration found. Jan 30 14:03:22.911953 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:03:23.070083 systemd[1]: Reloading finished in 624 ms. 
Jan 30 14:03:23.150243 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 14:03:23.150564 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 14:03:23.151216 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:03:23.158673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:03:23.437819 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:03:23.450169 (kubelet)[3080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:03:23.521188 kubelet[3080]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:03:23.521188 kubelet[3080]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:03:23.521188 kubelet[3080]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:03:23.521797 kubelet[3080]: I0130 14:03:23.521300 3080 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:03:24.812121 kubelet[3080]: I0130 14:03:24.812076 3080 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:03:24.812778 kubelet[3080]: I0130 14:03:24.812754 3080 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:03:24.813278 kubelet[3080]: I0130 14:03:24.813254 3080 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:03:24.840207 kubelet[3080]: I0130 14:03:24.840167 3080 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:03:24.841131 kubelet[3080]: E0130 14:03:24.841101 3080 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.237:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.853974 kubelet[3080]: I0130 14:03:24.853932 3080 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:03:24.857615 kubelet[3080]: I0130 14:03:24.857542 3080 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:03:24.857903 kubelet[3080]: I0130 14:03:24.857607 3080 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-237","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:03:24.858079 kubelet[3080]: I0130 14:03:24.857922 3080 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:03:24.858079 kubelet[3080]: I0130 14:03:24.857945 3080 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:03:24.858210 kubelet[3080]: I0130 14:03:24.858182 3080 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:03:24.860608 kubelet[3080]: W0130 14:03:24.860459 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-237&limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.860608 kubelet[3080]: E0130 14:03:24.860567 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-237&limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.860945 kubelet[3080]: I0130 14:03:24.860897 3080 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:03:24.861012 kubelet[3080]: I0130 14:03:24.860947 3080 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:03:24.861114 kubelet[3080]: I0130 14:03:24.861075 3080 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:03:24.861181 kubelet[3080]: I0130 14:03:24.861152 3080 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:03:24.863512 kubelet[3080]: I0130 14:03:24.862694 3080 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:03:24.863512 kubelet[3080]: I0130 14:03:24.863050 3080 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:03:24.863512 kubelet[3080]: W0130 14:03:24.863114 3080 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 14:03:24.864194 kubelet[3080]: I0130 14:03:24.864142 3080 server.go:1264] "Started kubelet"
Jan 30 14:03:24.864449 kubelet[3080]: W0130 14:03:24.864376 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.864587 kubelet[3080]: E0130 14:03:24.864463 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.880794 kubelet[3080]: I0130 14:03:24.880725 3080 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:03:24.881836 kubelet[3080]: I0130 14:03:24.881782 3080 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:03:24.883928 kubelet[3080]: I0130 14:03:24.883887 3080 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:03:24.887958 kubelet[3080]: I0130 14:03:24.887873 3080 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:03:24.888463 kubelet[3080]: I0130 14:03:24.888436 3080 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:03:24.889532 kubelet[3080]: E0130 14:03:24.889288 3080 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.237:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.237:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-237.181f7d59933cfb9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-237,UID:ip-172-31-23-237,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-237,},FirstTimestamp:2025-01-30 14:03:24.864109471 +0000 UTC m=+1.407531032,LastTimestamp:2025-01-30 14:03:24.864109471 +0000 UTC m=+1.407531032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-237,}"
Jan 30 14:03:24.892912 kubelet[3080]: I0130 14:03:24.892867 3080 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:03:24.896233 kubelet[3080]: E0130 14:03:24.895497 3080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-237?timeout=10s\": dial tcp 172.31.23.237:6443: connect: connection refused" interval="200ms"
Jan 30 14:03:24.896463 kubelet[3080]: E0130 14:03:24.896426 3080 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:03:24.896930 kubelet[3080]: W0130 14:03:24.896895 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.897077 kubelet[3080]: E0130 14:03:24.897055 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.898103 kubelet[3080]: I0130 14:03:24.898072 3080 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:03:24.898312 kubelet[3080]: I0130 14:03:24.898291 3080 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:03:24.899838 kubelet[3080]: I0130 14:03:24.899805 3080 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:03:24.899993 kubelet[3080]: I0130 14:03:24.899975 3080 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:03:24.900232 kubelet[3080]: I0130 14:03:24.900201 3080 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:03:24.917437 kubelet[3080]: I0130 14:03:24.917354 3080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:03:24.919916 kubelet[3080]: I0130 14:03:24.919850 3080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:03:24.919916 kubelet[3080]: I0130 14:03:24.919920 3080 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:03:24.920131 kubelet[3080]: I0130 14:03:24.919952 3080 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:03:24.920131 kubelet[3080]: E0130 14:03:24.920025 3080 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:03:24.942714 kubelet[3080]: W0130 14:03:24.942639 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.942999 kubelet[3080]: E0130 14:03:24.942876 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:24.963751 kubelet[3080]: I0130 14:03:24.963668 3080 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:03:24.963751 kubelet[3080]: I0130 14:03:24.963695 3080 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:03:24.965239 kubelet[3080]: I0130 14:03:24.964247 3080 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:03:24.967952 kubelet[3080]: I0130 14:03:24.967897 3080 policy_none.go:49] "None policy: Start"
Jan 30 14:03:24.969230 kubelet[3080]: I0130 14:03:24.969184 3080 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:03:24.969230 kubelet[3080]: I0130 14:03:24.969233 3080 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:03:24.977151 kubelet[3080]: I0130 14:03:24.977091 3080 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:03:24.977496 kubelet[3080]: I0130 14:03:24.977404 3080 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:03:24.977635 kubelet[3080]: I0130 14:03:24.977611 3080 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:03:24.986909 kubelet[3080]: E0130 14:03:24.986738 3080 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-237\" not found"
Jan 30 14:03:24.995530 kubelet[3080]: I0130 14:03:24.995452 3080 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:24.995973 kubelet[3080]: E0130 14:03:24.995914 3080 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.237:6443/api/v1/nodes\": dial tcp 172.31.23.237:6443: connect: connection refused" node="ip-172-31-23-237"
Jan 30 14:03:25.020772 kubelet[3080]: I0130 14:03:25.020702 3080 topology_manager.go:215] "Topology Admit Handler" podUID="98154fd1a3cd6f77c4f231b6f169707d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:25.023013 kubelet[3080]: I0130 14:03:25.022973 3080 topology_manager.go:215] "Topology Admit Handler" podUID="2231bd501e21f64be3da5e61a0722348" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.025546 kubelet[3080]: I0130 14:03:25.025286 3080 topology_manager.go:215] "Topology Admit Handler" podUID="60cc2abae5eaa270ba74098dcd75084c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-237"
Jan 30 14:03:25.097110 kubelet[3080]: E0130 14:03:25.096967 3080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-237?timeout=10s\": dial tcp 172.31.23.237:6443: connect: connection refused" interval="400ms"
Jan 30 14:03:25.099577 kubelet[3080]: I0130 14:03:25.099491 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.099673 kubelet[3080]: I0130 14:03:25.099587 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60cc2abae5eaa270ba74098dcd75084c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-237\" (UID: \"60cc2abae5eaa270ba74098dcd75084c\") " pod="kube-system/kube-scheduler-ip-172-31-23-237"
Jan 30 14:03:25.099673 kubelet[3080]: I0130 14:03:25.099626 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:25.099673 kubelet[3080]: I0130 14:03:25.099664 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:25.099851 kubelet[3080]: I0130 14:03:25.099700 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.099851 kubelet[3080]: I0130 14:03:25.099733 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.099851 kubelet[3080]: I0130 14:03:25.099769 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.099851 kubelet[3080]: I0130 14:03:25.099809 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-ca-certs\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:25.099851 kubelet[3080]: I0130 14:03:25.099845 3080 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:25.198346 kubelet[3080]: I0130 14:03:25.198254 3080 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:25.198882 kubelet[3080]: E0130 14:03:25.198802 3080 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.237:6443/api/v1/nodes\": dial tcp 172.31.23.237:6443: connect: connection refused" node="ip-172-31-23-237"
Jan 30 14:03:25.333988 containerd[2139]: time="2025-01-30T14:03:25.333617057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-237,Uid:98154fd1a3cd6f77c4f231b6f169707d,Namespace:kube-system,Attempt:0,}"
Jan 30 14:03:25.339524 containerd[2139]: time="2025-01-30T14:03:25.339441053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-237,Uid:2231bd501e21f64be3da5e61a0722348,Namespace:kube-system,Attempt:0,}"
Jan 30 14:03:25.342429 containerd[2139]: time="2025-01-30T14:03:25.341335661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-237,Uid:60cc2abae5eaa270ba74098dcd75084c,Namespace:kube-system,Attempt:0,}"
Jan 30 14:03:25.500594 kubelet[3080]: E0130 14:03:25.500409 3080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-237?timeout=10s\": dial tcp 172.31.23.237:6443: connect: connection refused" interval="800ms"
Jan 30 14:03:25.601521 kubelet[3080]: I0130 14:03:25.601457 3080 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:25.602255 kubelet[3080]: E0130 14:03:25.601969 3080 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.237:6443/api/v1/nodes\": dial tcp 172.31.23.237:6443: connect: connection refused" node="ip-172-31-23-237"
Jan 30 14:03:25.720608 kubelet[3080]: W0130 14:03:25.720432 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.720608 kubelet[3080]: E0130 14:03:25.720563 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.237:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.764054 kubelet[3080]: W0130 14:03:25.763967 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-237&limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.764193 kubelet[3080]: E0130 14:03:25.764061 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.237:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-237&limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.792444 kubelet[3080]: W0130 14:03:25.792375 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.792587 kubelet[3080]: E0130 14:03:25.792458 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.237:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:25.950459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3152039746.mount: Deactivated successfully.
Jan 30 14:03:25.956581 containerd[2139]: time="2025-01-30T14:03:25.956505248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:03:25.959258 containerd[2139]: time="2025-01-30T14:03:25.959194352Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:03:25.962351 containerd[2139]: time="2025-01-30T14:03:25.962294000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 30 14:03:25.963271 containerd[2139]: time="2025-01-30T14:03:25.963228152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:03:25.965182 containerd[2139]: time="2025-01-30T14:03:25.964914728Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:03:25.966557 containerd[2139]: time="2025-01-30T14:03:25.966441824Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 14:03:25.968073 containerd[2139]: time="2025-01-30T14:03:25.967976864Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:03:25.977525 containerd[2139]: time="2025-01-30T14:03:25.977185652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 14:03:25.982324 containerd[2139]: time="2025-01-30T14:03:25.981950084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.223599ms"
Jan 30 14:03:25.986943 containerd[2139]: time="2025-01-30T14:03:25.986878220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 645.430095ms"
Jan 30 14:03:25.988506 containerd[2139]: time="2025-01-30T14:03:25.988176632Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 648.594891ms"
Jan 30 14:03:26.083790 kubelet[3080]: W0130 14:03:26.082872 3080 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:26.084614 kubelet[3080]: E0130 14:03:26.084462 3080 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.237:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.237:6443: connect: connection refused
Jan 30 14:03:26.163721 containerd[2139]: time="2025-01-30T14:03:26.162199073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:26.163721 containerd[2139]: time="2025-01-30T14:03:26.163630661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:26.163930 containerd[2139]: time="2025-01-30T14:03:26.163861589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.166044 containerd[2139]: time="2025-01-30T14:03:26.165754577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.174732 containerd[2139]: time="2025-01-30T14:03:26.174568709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:26.174732 containerd[2139]: time="2025-01-30T14:03:26.174685145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:26.175898 containerd[2139]: time="2025-01-30T14:03:26.175012229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.176812 containerd[2139]: time="2025-01-30T14:03:26.176305277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.180275 containerd[2139]: time="2025-01-30T14:03:26.179450573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:26.180275 containerd[2139]: time="2025-01-30T14:03:26.179606501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:26.180275 containerd[2139]: time="2025-01-30T14:03:26.179783153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.181630 containerd[2139]: time="2025-01-30T14:03:26.180852629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:26.303598 kubelet[3080]: E0130 14:03:26.302466 3080 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-237?timeout=10s\": dial tcp 172.31.23.237:6443: connect: connection refused" interval="1.6s"
Jan 30 14:03:26.330658 containerd[2139]: time="2025-01-30T14:03:26.330023754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-237,Uid:2231bd501e21f64be3da5e61a0722348,Namespace:kube-system,Attempt:0,} returns sandbox id \"4289da83ba7cb887f141ed6b0c44886898eb0b26ca3dbfc3432028062f614127\""
Jan 30 14:03:26.335384 containerd[2139]: time="2025-01-30T14:03:26.334253814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-237,Uid:60cc2abae5eaa270ba74098dcd75084c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab7c63b19dbc1035bbc0e94801e82aca6bad635c5bc4d9b09a2350072d3cd1de\""
Jan 30 14:03:26.340845 containerd[2139]: time="2025-01-30T14:03:26.340775634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-237,Uid:98154fd1a3cd6f77c4f231b6f169707d,Namespace:kube-system,Attempt:0,} returns sandbox id \"678f5a75b3c20ef7b1cef51ce1d2fec0935f542ba981694a8d1fbe01363d272a\""
Jan 30 14:03:26.344769 containerd[2139]: time="2025-01-30T14:03:26.344553018Z" level=info msg="CreateContainer within sandbox \"ab7c63b19dbc1035bbc0e94801e82aca6bad635c5bc4d9b09a2350072d3cd1de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 14:03:26.345745 containerd[2139]: time="2025-01-30T14:03:26.345061818Z" level=info msg="CreateContainer within sandbox \"4289da83ba7cb887f141ed6b0c44886898eb0b26ca3dbfc3432028062f614127\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 14:03:26.348569 containerd[2139]: time="2025-01-30T14:03:26.348514290Z" level=info msg="CreateContainer within sandbox \"678f5a75b3c20ef7b1cef51ce1d2fec0935f542ba981694a8d1fbe01363d272a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 14:03:26.397991 containerd[2139]: time="2025-01-30T14:03:26.397915398Z" level=info msg="CreateContainer within sandbox \"4289da83ba7cb887f141ed6b0c44886898eb0b26ca3dbfc3432028062f614127\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52\""
Jan 30 14:03:26.400978 containerd[2139]: time="2025-01-30T14:03:26.400901286Z" level=info msg="CreateContainer within sandbox \"ab7c63b19dbc1035bbc0e94801e82aca6bad635c5bc4d9b09a2350072d3cd1de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b\""
Jan 30 14:03:26.401678 containerd[2139]: time="2025-01-30T14:03:26.401618478Z" level=info msg="StartContainer for \"0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52\""
Jan 30 14:03:26.405512 containerd[2139]: time="2025-01-30T14:03:26.404163714Z" level=info msg="CreateContainer within sandbox \"678f5a75b3c20ef7b1cef51ce1d2fec0935f542ba981694a8d1fbe01363d272a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cc94194d5c3e1fdecb15e92920feb5768fe6b0e121097b6ee72f5209ecf3c018\""
Jan 30 14:03:26.405512 containerd[2139]: time="2025-01-30T14:03:26.404586678Z" level=info msg="StartContainer for \"6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b\""
Jan 30 14:03:26.412781 kubelet[3080]: I0130 14:03:26.412742 3080 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:26.413605 containerd[2139]: time="2025-01-30T14:03:26.413373510Z" level=info msg="StartContainer for \"cc94194d5c3e1fdecb15e92920feb5768fe6b0e121097b6ee72f5209ecf3c018\""
Jan 30 14:03:26.413940 kubelet[3080]: E0130 14:03:26.413877 3080 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.237:6443/api/v1/nodes\": dial tcp 172.31.23.237:6443: connect: connection refused" node="ip-172-31-23-237"
Jan 30 14:03:26.587346 containerd[2139]: time="2025-01-30T14:03:26.587182771Z" level=info msg="StartContainer for \"6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b\" returns successfully"
Jan 30 14:03:26.606886 containerd[2139]: time="2025-01-30T14:03:26.606831139Z" level=info msg="StartContainer for \"0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52\" returns successfully"
Jan 30 14:03:26.646450 containerd[2139]: time="2025-01-30T14:03:26.646204831Z" level=info msg="StartContainer for \"cc94194d5c3e1fdecb15e92920feb5768fe6b0e121097b6ee72f5209ecf3c018\" returns successfully"
Jan 30 14:03:28.020491 kubelet[3080]: I0130 14:03:28.018031 3080 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:30.931535 update_engine[2112]: I20250130 14:03:30.930689 2112 update_attempter.cc:509] Updating boot flags...
Jan 30 14:03:30.986850 kubelet[3080]: E0130 14:03:30.986786 3080 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-237\" not found" node="ip-172-31-23-237"
Jan 30 14:03:31.042546 kubelet[3080]: E0130 14:03:31.036939 3080 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-237.181f7d59933cfb9f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-237,UID:ip-172-31-23-237,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-237,},FirstTimestamp:2025-01-30 14:03:24.864109471 +0000 UTC m=+1.407531032,LastTimestamp:2025-01-30 14:03:24.864109471 +0000 UTC m=+1.407531032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-237,}"
Jan 30 14:03:31.069773 kubelet[3080]: I0130 14:03:31.069730 3080 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-237"
Jan 30 14:03:31.107690 kubelet[3080]: E0130 14:03:31.105697 3080 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-237.181f7d599529c16f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-237,UID:ip-172-31-23-237,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-23-237,},FirstTimestamp:2025-01-30 14:03:24.896403823 +0000 UTC m=+1.439825432,LastTimestamp:2025-01-30 14:03:24.896403823 +0000 UTC m=+1.439825432,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-237,}"
Jan 30 14:03:31.161630 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3363)
Jan 30 14:03:31.573590 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3365)
Jan 30 14:03:31.865912 kubelet[3080]: I0130 14:03:31.865748 3080 apiserver.go:52] "Watching apiserver"
Jan 30 14:03:31.899435 kubelet[3080]: I0130 14:03:31.899371 3080 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:03:33.406209 systemd[1]: Reloading requested from client PID 3532 ('systemctl') (unit session-7.scope)...
Jan 30 14:03:33.406589 systemd[1]: Reloading...
Jan 30 14:03:33.574583 zram_generator::config[3578]: No configuration found.
Jan 30 14:03:33.806892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 14:03:33.984408 systemd[1]: Reloading finished in 577 ms.
Jan 30 14:03:34.045063 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:03:34.046084 kubelet[3080]: I0130 14:03:34.045964 3080 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:03:34.062753 systemd[1]: kubelet.service: Deactivated successfully.
Jan 30 14:03:34.064429 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
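The reflector warnings during this startup all reduce to one failing request: a LIST of the node's own object, filtered by name, which succeeds once the API server begins answering and is followed by the "Successfully registered node" and "Watching apiserver" entries above. A hedged client-go sketch of the same call, assuming an admin kubeconfig path for brevity (the kubelet itself authenticates with its rotated client certificate, not a kubeconfig like this):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same shape as the reflector's request in the log:
	// GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-237&limit=500
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ip-172-31-23-237",
		Limit:         500,
	})
	if err != nil {
		// While the API server is still coming up this fails with
		// "connect: connection refused", exactly as logged.
		fmt.Println("list nodes:", err)
		return
	}
	fmt.Printf("listed %d node(s)\n", len(nodes.Items))
}
```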
Jan 30 14:03:34.074260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 14:03:34.377427 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 14:03:34.395216 (kubelet)[3642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 14:03:34.503571 kubelet[3642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:03:34.503571 kubelet[3642]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 14:03:34.503571 kubelet[3642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 14:03:34.504205 kubelet[3642]: I0130 14:03:34.503641 3642 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 14:03:34.513373 kubelet[3642]: I0130 14:03:34.513319 3642 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 30 14:03:34.513373 kubelet[3642]: I0130 14:03:34.513362 3642 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 14:03:34.514011 kubelet[3642]: I0130 14:03:34.513854 3642 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 30 14:03:34.516859 kubelet[3642]: I0130 14:03:34.516805 3642 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 30 14:03:34.520593 kubelet[3642]: I0130 14:03:34.520522 3642 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 14:03:34.539581 kubelet[3642]: I0130 14:03:34.538844 3642 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 14:03:34.539985 kubelet[3642]: I0130 14:03:34.539913 3642 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 14:03:34.540514 kubelet[3642]: I0130 14:03:34.539977 3642 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-237","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 30 14:03:34.540514 kubelet[3642]: I0130 14:03:34.540284 3642 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 14:03:34.540514 kubelet[3642]: I0130 14:03:34.540308 3642 container_manager_linux.go:301] "Creating device plugin manager"
Jan 30 14:03:34.540514 kubelet[3642]: I0130 14:03:34.540377 3642 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:03:34.540830 kubelet[3642]: I0130 14:03:34.540590 3642 kubelet.go:400] "Attempting to sync node with API server"
Jan 30 14:03:34.544401 kubelet[3642]: I0130 14:03:34.540619 3642 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 14:03:34.544401 kubelet[3642]: I0130 14:03:34.541997 3642 kubelet.go:312] "Adding apiserver pod source"
Jan 30 14:03:34.544401 kubelet[3642]: I0130 14:03:34.542031 3642 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 14:03:34.545063 kubelet[3642]: I0130 14:03:34.545012 3642 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 14:03:34.545529 kubelet[3642]: I0130 14:03:34.545314 3642 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 14:03:34.548519 kubelet[3642]: I0130 14:03:34.548029 3642 server.go:1264] "Started kubelet"
Jan 30 14:03:34.558934 kubelet[3642]: I0130 14:03:34.556337 3642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 14:03:34.562395 kubelet[3642]: I0130 14:03:34.561988 3642 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 14:03:34.565237 kubelet[3642]: I0130 14:03:34.565144 3642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 14:03:34.565632 kubelet[3642]: I0130 14:03:34.565598 3642 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 14:03:34.581530 kubelet[3642]: I0130 14:03:34.581458 3642 server.go:455] "Adding debug handlers to kubelet server"
Jan 30 14:03:34.584240 kubelet[3642]: I0130 14:03:34.584209 3642 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 30 14:03:34.586556 kubelet[3642]: I0130 14:03:34.585430 3642 factory.go:221] Registration of the systemd container factory successfully
Jan 30 14:03:34.586556 kubelet[3642]: I0130 14:03:34.585651 3642 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 14:03:34.591407 kubelet[3642]: I0130 14:03:34.590296 3642 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 30 14:03:34.595944 kubelet[3642]: I0130 14:03:34.595911 3642 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 14:03:34.623962 kubelet[3642]: I0130 14:03:34.623604 3642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 14:03:34.626302 kubelet[3642]: I0130 14:03:34.625836 3642 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 14:03:34.626302 kubelet[3642]: I0130 14:03:34.625910 3642 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 14:03:34.626302 kubelet[3642]: I0130 14:03:34.625942 3642 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 30 14:03:34.626302 kubelet[3642]: E0130 14:03:34.626010 3642 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 14:03:34.643571 kubelet[3642]: I0130 14:03:34.643393 3642 factory.go:221] Registration of the containerd container factory successfully
Jan 30 14:03:34.654614 kubelet[3642]: E0130 14:03:34.653452 3642 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 14:03:34.699957 kubelet[3642]: I0130 14:03:34.699911 3642 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-237"
Jan 30 14:03:34.723269 kubelet[3642]: I0130 14:03:34.723210 3642 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-237"
Jan 30 14:03:34.723429 kubelet[3642]: I0130 14:03:34.723353 3642 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-237"
Jan 30 14:03:34.727126 kubelet[3642]: E0130 14:03:34.726503 3642 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 14:03:34.786576 kubelet[3642]: I0130 14:03:34.786459 3642 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 14:03:34.786576 kubelet[3642]: I0130 14:03:34.786510 3642 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 14:03:34.786576 kubelet[3642]: I0130 14:03:34.786545 3642 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 14:03:34.786799 kubelet[3642]: I0130 14:03:34.786772 3642 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 30 14:03:34.786883 kubelet[3642]: I0130 14:03:34.786791 3642 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 30 14:03:34.786937 kubelet[3642]: I0130 14:03:34.786892 3642 policy_none.go:49] "None policy: Start"
Jan 30 14:03:34.788105 kubelet[3642]: I0130 14:03:34.788074 3642 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 14:03:34.788397 kubelet[3642]: I0130 14:03:34.788376 3642 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 14:03:34.789424 kubelet[3642]: I0130 14:03:34.788822 3642 state_mem.go:75] "Updated machine memory state"
Jan 30 14:03:34.791285 kubelet[3642]: I0130 14:03:34.791244 3642 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 14:03:34.793165 kubelet[3642]: I0130 14:03:34.791537 3642 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 14:03:34.793165 kubelet[3642]: I0130 14:03:34.791902 3642 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 14:03:34.928930 kubelet[3642]: I0130 14:03:34.926773 3642 topology_manager.go:215] "Topology Admit Handler" podUID="60cc2abae5eaa270ba74098dcd75084c" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-237"
Jan 30 14:03:34.928930 kubelet[3642]: I0130 14:03:34.926931 3642 topology_manager.go:215] "Topology Admit Handler" podUID="98154fd1a3cd6f77c4f231b6f169707d" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:34.928930 kubelet[3642]: I0130 14:03:34.926999 3642 topology_manager.go:215] "Topology Admit Handler" podUID="2231bd501e21f64be3da5e61a0722348" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:34.999448 kubelet[3642]: I0130 14:03:34.999394 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:34.999605 kubelet[3642]: I0130 14:03:34.999464 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:34.999605 kubelet[3642]: I0130 14:03:34.999528 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:34.999605 kubelet[3642]: I0130 14:03:34.999573 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:34.999786 kubelet[3642]: I0130 14:03:34.999612 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:34.999786 kubelet[3642]: I0130 14:03:34.999652 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60cc2abae5eaa270ba74098dcd75084c-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-237\" (UID: \"60cc2abae5eaa270ba74098dcd75084c\") " pod="kube-system/kube-scheduler-ip-172-31-23-237"
Jan 30 14:03:34.999786 kubelet[3642]: I0130 14:03:34.999686 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-ca-certs\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:34.999786 kubelet[3642]: I0130 14:03:34.999720 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98154fd1a3cd6f77c4f231b6f169707d-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-237\" (UID: \"98154fd1a3cd6f77c4f231b6f169707d\") " pod="kube-system/kube-apiserver-ip-172-31-23-237"
Jan 30 14:03:34.999786 kubelet[3642]: I0130 14:03:34.999758 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2231bd501e21f64be3da5e61a0722348-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-237\" (UID: \"2231bd501e21f64be3da5e61a0722348\") " pod="kube-system/kube-controller-manager-ip-172-31-23-237"
Jan 30 14:03:35.564068 kubelet[3642]: I0130 14:03:35.563971 3642 apiserver.go:52] "Watching apiserver"
Jan 30 14:03:35.595571 kubelet[3642]: I0130 14:03:35.593758 3642 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 30 14:03:35.630756 kubelet[3642]: I0130 14:03:35.630623 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-237" podStartSLOduration=1.6306052119999999 podStartE2EDuration="1.630605212s" podCreationTimestamp="2025-01-30 14:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:35.630511612 +0000 UTC m=+1.226988967" watchObservedRunningTime="2025-01-30 14:03:35.630605212 +0000 UTC m=+1.227082579"
Jan 30 14:03:35.649881 kubelet[3642]: I0130 14:03:35.648018 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-237" podStartSLOduration=1.647995192 podStartE2EDuration="1.647995192s" podCreationTimestamp="2025-01-30 14:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:35.645359392 +0000 UTC m=+1.241836735" watchObservedRunningTime="2025-01-30 14:03:35.647995192 +0000 UTC m=+1.244472547"
Jan 30 14:03:35.662890 kubelet[3642]: I0130 14:03:35.662563 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-237" podStartSLOduration=1.662430796 podStartE2EDuration="1.662430796s" podCreationTimestamp="2025-01-30 14:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:35.662140084 +0000 UTC m=+1.258617427" watchObservedRunningTime="2025-01-30 14:03:35.662430796 +0000 UTC m=+1.258908151"
Jan 30 14:03:41.459187 sudo[2486]: pam_unix(sudo:session): session closed for user root
Jan 30 14:03:41.483823 sshd[2482]: pam_unix(sshd:session): session closed for user core
Jan 30 14:03:41.489115 systemd[1]: sshd@6-172.31.23.237:22-139.178.89.65:35566.service: Deactivated successfully.
Jan 30 14:03:41.496918 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:03:41.498715 systemd-logind[2109]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:03:41.502235 systemd-logind[2109]: Removed session 7.
Jan 30 14:03:48.631774 kubelet[3642]: I0130 14:03:48.631434 3642 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 30 14:03:48.632428 containerd[2139]: time="2025-01-30T14:03:48.632016005Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 30 14:03:48.634790 kubelet[3642]: I0130 14:03:48.633543 3642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 30 14:03:49.219703 kubelet[3642]: I0130 14:03:49.218580 3642 topology_manager.go:215] "Topology Admit Handler" podUID="0ff80817-7230-4c74-b639-11e1d1f1f39c" podNamespace="kube-system" podName="kube-proxy-b2wj8"
Jan 30 14:03:49.293506 kubelet[3642]: I0130 14:03:49.291965 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ff80817-7230-4c74-b639-11e1d1f1f39c-lib-modules\") pod \"kube-proxy-b2wj8\" (UID: \"0ff80817-7230-4c74-b639-11e1d1f1f39c\") " pod="kube-system/kube-proxy-b2wj8"
Jan 30 14:03:49.293971 kubelet[3642]: I0130 14:03:49.293765 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ff80817-7230-4c74-b639-11e1d1f1f39c-kube-proxy\") pod \"kube-proxy-b2wj8\" (UID: \"0ff80817-7230-4c74-b639-11e1d1f1f39c\") " pod="kube-system/kube-proxy-b2wj8"
Jan 30 14:03:49.293971 kubelet[3642]: I0130 14:03:49.293826 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ff80817-7230-4c74-b639-11e1d1f1f39c-xtables-lock\") pod \"kube-proxy-b2wj8\" (UID: \"0ff80817-7230-4c74-b639-11e1d1f1f39c\") " pod="kube-system/kube-proxy-b2wj8"
Jan 30 14:03:49.293971 kubelet[3642]: I0130 14:03:49.293863 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dk9z\" (UniqueName: \"kubernetes.io/projected/0ff80817-7230-4c74-b639-11e1d1f1f39c-kube-api-access-7dk9z\") pod \"kube-proxy-b2wj8\" (UID: \"0ff80817-7230-4c74-b639-11e1d1f1f39c\") " pod="kube-system/kube-proxy-b2wj8"
Jan 30 14:03:49.394197 kubelet[3642]: I0130 14:03:49.391705 3642 topology_manager.go:215] "Topology Admit Handler" podUID="155a9f43-c273-47cd-b826-512cfaed5399" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-vjlxn"
Jan 30 14:03:49.497608 kubelet[3642]: I0130 14:03:49.497454 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87tz7\" (UniqueName: \"kubernetes.io/projected/155a9f43-c273-47cd-b826-512cfaed5399-kube-api-access-87tz7\") pod \"tigera-operator-7bc55997bb-vjlxn\" (UID: \"155a9f43-c273-47cd-b826-512cfaed5399\") " pod="tigera-operator/tigera-operator-7bc55997bb-vjlxn"
Jan 30 14:03:49.497911 kubelet[3642]: I0130 14:03:49.497882 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/155a9f43-c273-47cd-b826-512cfaed5399-var-lib-calico\") pod \"tigera-operator-7bc55997bb-vjlxn\" (UID: \"155a9f43-c273-47cd-b826-512cfaed5399\") " pod="tigera-operator/tigera-operator-7bc55997bb-vjlxn"
Jan 30 14:03:49.544929 containerd[2139]: time="2025-01-30T14:03:49.544877525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2wj8,Uid:0ff80817-7230-4c74-b639-11e1d1f1f39c,Namespace:kube-system,Attempt:0,}"
Jan 30 14:03:49.587464 containerd[2139]: time="2025-01-30T14:03:49.587150033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:49.589237 containerd[2139]: time="2025-01-30T14:03:49.589098725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:49.589237 containerd[2139]: time="2025-01-30T14:03:49.589171421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:49.589727 containerd[2139]: time="2025-01-30T14:03:49.589518509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:49.676099 containerd[2139]: time="2025-01-30T14:03:49.676004226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2wj8,Uid:0ff80817-7230-4c74-b639-11e1d1f1f39c,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc84856591933210b8d2fb9c3966581eb08387c57d4727b276d9514e13473fbc\""
Jan 30 14:03:49.682206 containerd[2139]: time="2025-01-30T14:03:49.681952638Z" level=info msg="CreateContainer within sandbox \"fc84856591933210b8d2fb9c3966581eb08387c57d4727b276d9514e13473fbc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 30 14:03:49.700930 containerd[2139]: time="2025-01-30T14:03:49.700757022Z" level=info msg="CreateContainer within sandbox \"fc84856591933210b8d2fb9c3966581eb08387c57d4727b276d9514e13473fbc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4da711cd2a93e4ac177ffc4c972a8751d3528cd7caac649661dfacf59a5654f4\""
Jan 30 14:03:49.703062 containerd[2139]: time="2025-01-30T14:03:49.701575770Z" level=info msg="StartContainer for \"4da711cd2a93e4ac177ffc4c972a8751d3528cd7caac649661dfacf59a5654f4\""
Jan 30 14:03:49.711163 containerd[2139]: time="2025-01-30T14:03:49.710633586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vjlxn,Uid:155a9f43-c273-47cd-b826-512cfaed5399,Namespace:tigera-operator,Attempt:0,}"
Jan 30 14:03:49.788606 containerd[2139]: time="2025-01-30T14:03:49.787720890Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:49.788606 containerd[2139]: time="2025-01-30T14:03:49.787844550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:49.788606 containerd[2139]: time="2025-01-30T14:03:49.787883058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:49.788606 containerd[2139]: time="2025-01-30T14:03:49.788049390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:49.831871 containerd[2139]: time="2025-01-30T14:03:49.831805855Z" level=info msg="StartContainer for \"4da711cd2a93e4ac177ffc4c972a8751d3528cd7caac649661dfacf59a5654f4\" returns successfully"
Jan 30 14:03:49.927107 containerd[2139]: time="2025-01-30T14:03:49.927023899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-vjlxn,Uid:155a9f43-c273-47cd-b826-512cfaed5399,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8316a491434f5b97efac1f5a24642b772906fc7e8101fbe28ebf30228168c785\""
Jan 30 14:03:49.935571 containerd[2139]: time="2025-01-30T14:03:49.935511499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 30 14:03:51.389074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785675383.mount: Deactivated successfully.
Jan 30 14:03:52.006180 containerd[2139]: time="2025-01-30T14:03:52.006099677Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:03:52.008021 containerd[2139]: time="2025-01-30T14:03:52.007953293Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19124160"
Jan 30 14:03:52.010400 containerd[2139]: time="2025-01-30T14:03:52.010327229Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:03:52.015291 containerd[2139]: time="2025-01-30T14:03:52.015194069Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 14:03:52.017115 containerd[2139]: time="2025-01-30T14:03:52.016925658Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.080064603s"
Jan 30 14:03:52.017115 containerd[2139]: time="2025-01-30T14:03:52.016981266Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 30 14:03:52.023351 containerd[2139]: time="2025-01-30T14:03:52.023271822Z" level=info msg="CreateContainer within sandbox \"8316a491434f5b97efac1f5a24642b772906fc7e8101fbe28ebf30228168c785\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 30 14:03:52.050855 containerd[2139]: time="2025-01-30T14:03:52.050787462Z" level=info msg="CreateContainer within sandbox \"8316a491434f5b97efac1f5a24642b772906fc7e8101fbe28ebf30228168c785\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062\""
Jan 30 14:03:52.051141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984654920.mount: Deactivated successfully.
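The RunPodSandbox / CreateContainer / StartContainer messages above are containerd's side of the CRI gRPC calls that the kubelet drives for each pod. A minimal sketch of that same three-step flow against a CRI socket, using the published k8s.io/cri-api types; the socket path, image tag, and error handling here are illustrative assumptions, not values taken from this host:

// CRI flow sketch: RunPodSandbox -> CreateContainer -> StartContainer,
// mirroring the kube-proxy sandbox/container sequence logged above.
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed default containerd CRI socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Pod metadata copied from the log entries above.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-b2wj8",
			Uid:       "0ff80817-7230-4c74-b639-11e1d1f1f39c",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Illustrative image reference; the log does not record the tag.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}

The sandbox id ("fc848565...") and container id ("4da711cd...") in the entries above are exactly the PodSandboxId and ContainerId values returned by the first two calls and threaded into the later ones.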
Jan 30 14:03:52.053739 containerd[2139]: time="2025-01-30T14:03:52.053638134Z" level=info msg="StartContainer for \"2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062\""
Jan 30 14:03:52.154904 containerd[2139]: time="2025-01-30T14:03:52.154719966Z" level=info msg="StartContainer for \"2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062\" returns successfully"
Jan 30 14:03:52.773678 kubelet[3642]: I0130 14:03:52.773577 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b2wj8" podStartSLOduration=3.773552793 podStartE2EDuration="3.773552793s" podCreationTimestamp="2025-01-30 14:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:03:50.764415463 +0000 UTC m=+16.360892818" watchObservedRunningTime="2025-01-30 14:03:52.773552793 +0000 UTC m=+18.370030172"
Jan 30 14:03:57.463556 kubelet[3642]: I0130 14:03:57.463434 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-vjlxn" podStartSLOduration=6.378179542 podStartE2EDuration="8.463350817s" podCreationTimestamp="2025-01-30 14:03:49 +0000 UTC" firstStartedPulling="2025-01-30 14:03:49.933795571 +0000 UTC m=+15.530272914" lastFinishedPulling="2025-01-30 14:03:52.018966846 +0000 UTC m=+17.615444189" observedRunningTime="2025-01-30 14:03:52.774646437 +0000 UTC m=+18.371123792" watchObservedRunningTime="2025-01-30 14:03:57.463350817 +0000 UTC m=+23.059828184"
Jan 30 14:03:57.467657 kubelet[3642]: I0130 14:03:57.463700 3642 topology_manager.go:215] "Topology Admit Handler" podUID="1410af31-37f6-4030-9ad6-13ffaad35386" podNamespace="calico-system" podName="calico-typha-69d5fcd5c-wjv4j"
Jan 30 14:03:57.559596 kubelet[3642]: I0130 14:03:57.559489 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5br9s\" (UniqueName: \"kubernetes.io/projected/1410af31-37f6-4030-9ad6-13ffaad35386-kube-api-access-5br9s\") pod \"calico-typha-69d5fcd5c-wjv4j\" (UID: \"1410af31-37f6-4030-9ad6-13ffaad35386\") " pod="calico-system/calico-typha-69d5fcd5c-wjv4j"
Jan 30 14:03:57.559759 kubelet[3642]: I0130 14:03:57.559628 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1410af31-37f6-4030-9ad6-13ffaad35386-tigera-ca-bundle\") pod \"calico-typha-69d5fcd5c-wjv4j\" (UID: \"1410af31-37f6-4030-9ad6-13ffaad35386\") " pod="calico-system/calico-typha-69d5fcd5c-wjv4j"
Jan 30 14:03:57.559759 kubelet[3642]: I0130 14:03:57.559710 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1410af31-37f6-4030-9ad6-13ffaad35386-typha-certs\") pod \"calico-typha-69d5fcd5c-wjv4j\" (UID: \"1410af31-37f6-4030-9ad6-13ffaad35386\") " pod="calico-system/calico-typha-69d5fcd5c-wjv4j"
Jan 30 14:03:57.695507 kubelet[3642]: I0130 14:03:57.693418 3642 topology_manager.go:215] "Topology Admit Handler" podUID="f605331c-7978-43f6-8347-6fd2777be317" podNamespace="calico-system" podName="calico-node-bwzrs"
Jan 30 14:03:57.762176 kubelet[3642]: I0130 14:03:57.760837 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-var-lib-calico\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762176 kubelet[3642]: I0130 14:03:57.760909 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-lib-modules\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762176 kubelet[3642]: I0130 14:03:57.760949 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-cni-net-dir\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762176 kubelet[3642]: I0130 14:03:57.760990 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-policysync\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762176 kubelet[3642]: I0130 14:03:57.761069 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-xtables-lock\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762642 kubelet[3642]: I0130 14:03:57.761116 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f605331c-7978-43f6-8347-6fd2777be317-tigera-ca-bundle\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762642 kubelet[3642]: I0130 14:03:57.761154 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwsfm\" (UniqueName: \"kubernetes.io/projected/f605331c-7978-43f6-8347-6fd2777be317-kube-api-access-jwsfm\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762642 kubelet[3642]: I0130 14:03:57.761190 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-cni-bin-dir\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762642 kubelet[3642]: I0130 14:03:57.761229 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f605331c-7978-43f6-8347-6fd2777be317-node-certs\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762642 kubelet[3642]: I0130 14:03:57.761265 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-cni-log-dir\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762915 kubelet[3642]: I0130 14:03:57.761308 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-var-run-calico\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.762915 kubelet[3642]: I0130 14:03:57.761343 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f605331c-7978-43f6-8347-6fd2777be317-flexvol-driver-host\") pod \"calico-node-bwzrs\" (UID: \"f605331c-7978-43f6-8347-6fd2777be317\") " pod="calico-system/calico-node-bwzrs"
Jan 30 14:03:57.789366 containerd[2139]: time="2025-01-30T14:03:57.788794238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d5fcd5c-wjv4j,Uid:1410af31-37f6-4030-9ad6-13ffaad35386,Namespace:calico-system,Attempt:0,}"
Jan 30 14:03:57.855368 containerd[2139]: time="2025-01-30T14:03:57.854836875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:57.855368 containerd[2139]: time="2025-01-30T14:03:57.854940315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:57.855368 containerd[2139]: time="2025-01-30T14:03:57.854999751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:57.858200 containerd[2139]: time="2025-01-30T14:03:57.855194967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:57.901069 kubelet[3642]: E0130 14:03:57.900113 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.906882 kubelet[3642]: W0130 14:03:57.906535 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.915891 kubelet[3642]: E0130 14:03:57.907107 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:57.922842 kubelet[3642]: E0130 14:03:57.921792 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.922842 kubelet[3642]: W0130 14:03:57.921834 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.922842 kubelet[3642]: E0130 14:03:57.921871 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
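The unmarshal failures that start here repeat for the rest of this window: the kubelet probes its FlexVolume plugin directory, finds a nodeagent~uds driver directory whose uds executable is not yet in place ("executable file not found in $PATH"), gets empty output back from the driver call, and then fails to decode that empty output as JSON. The error string is exactly what Go's encoding/json returns for empty input; a minimal reproduction (the DriverStatus shape below is an illustrative stand-in, not the kubelet's actual type):

// Reproduces the "unexpected end of JSON input" error spammed in the
// kubelet log: unmarshalling a driver's empty stdout fails this way.
package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative stand-in for the JSON status a FlexVolume driver is
// expected to print on stdout.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	var st DriverStatus
	err := json.Unmarshal([]byte(""), &st) // empty output from the failed driver call
	fmt.Println(err)                       // prints: unexpected end of JSON input
}

The errors are transient here: calico-node's flexvol-driver-host host-path volume (mounted above) exists so that Calico can install the uds driver, after which the probe succeeds.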
Jan 30 14:03:57.955651 kubelet[3642]: I0130 14:03:57.954243 3642 topology_manager.go:215] "Topology Admit Handler" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" podNamespace="calico-system" podName="csi-node-driver-9nwm6"
Jan 30 14:03:57.961516 kubelet[3642]: E0130 14:03:57.960566 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c"
Jan 30 14:03:57.968736 kubelet[3642]: E0130 14:03:57.966934 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.968736 kubelet[3642]: W0130 14:03:57.966974 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.968736 kubelet[3642]: E0130 14:03:57.967009 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:57.969378 kubelet[3642]: E0130 14:03:57.969022 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.969378 kubelet[3642]: W0130 14:03:57.969064 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.969378 kubelet[3642]: E0130 14:03:57.969100 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:57.972788 kubelet[3642]: E0130 14:03:57.972324 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.972788 kubelet[3642]: W0130 14:03:57.972362 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.972788 kubelet[3642]: E0130 14:03:57.972444 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:57.976793 kubelet[3642]: E0130 14:03:57.976366 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:57.976793 kubelet[3642]: W0130 14:03:57.976403 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:57.976793 kubelet[3642]: E0130 14:03:57.976438 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 14:03:57.978707 kubelet[3642]: E0130 14:03:57.977781 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:57.978707 kubelet[3642]: W0130 14:03:57.977821 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:57.978707 kubelet[3642]: E0130 14:03:57.977856 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:57.980540 kubelet[3642]: E0130 14:03:57.980101 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:57.982258 kubelet[3642]: W0130 14:03:57.981915 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:57.982258 kubelet[3642]: E0130 14:03:57.981970 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:57.986020 kubelet[3642]: E0130 14:03:57.985847 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:57.986623 kubelet[3642]: W0130 14:03:57.986413 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:57.986623 kubelet[3642]: E0130 14:03:57.986556 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.019402 containerd[2139]: time="2025-01-30T14:03:58.018824027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bwzrs,Uid:f605331c-7978-43f6-8347-6fd2777be317,Namespace:calico-system,Attempt:0,}" Jan 30 14:03:58.042525 kubelet[3642]: E0130 14:03:58.041891 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.042525 kubelet[3642]: W0130 14:03:58.041930 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.042525 kubelet[3642]: E0130 14:03:58.041965 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.045315 kubelet[3642]: E0130 14:03:58.042662 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.045315 kubelet[3642]: W0130 14:03:58.042686 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.045315 kubelet[3642]: E0130 14:03:58.042719 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.046810 kubelet[3642]: E0130 14:03:58.045644 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.046810 kubelet[3642]: W0130 14:03:58.045680 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.046810 kubelet[3642]: E0130 14:03:58.045717 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.046810 kubelet[3642]: E0130 14:03:58.046141 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.046810 kubelet[3642]: W0130 14:03:58.046226 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.046810 kubelet[3642]: E0130 14:03:58.046281 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.049546 kubelet[3642]: E0130 14:03:58.048076 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.049546 kubelet[3642]: W0130 14:03:58.048116 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.049546 kubelet[3642]: E0130 14:03:58.048151 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.051178 kubelet[3642]: E0130 14:03:58.049794 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.051178 kubelet[3642]: W0130 14:03:58.049822 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.051178 kubelet[3642]: E0130 14:03:58.049855 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.054733 kubelet[3642]: E0130 14:03:58.054684 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.054733 kubelet[3642]: W0130 14:03:58.054723 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.055830 kubelet[3642]: E0130 14:03:58.054758 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.057060 kubelet[3642]: E0130 14:03:58.056416 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.057060 kubelet[3642]: W0130 14:03:58.056455 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.057060 kubelet[3642]: E0130 14:03:58.056532 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.058540 kubelet[3642]: E0130 14:03:58.057634 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.058540 kubelet[3642]: W0130 14:03:58.057676 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.058540 kubelet[3642]: E0130 14:03:58.057711 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.061551 kubelet[3642]: E0130 14:03:58.060844 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.061551 kubelet[3642]: W0130 14:03:58.060886 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.061551 kubelet[3642]: E0130 14:03:58.060922 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.067010 kubelet[3642]: E0130 14:03:58.064053 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.067010 kubelet[3642]: W0130 14:03:58.064093 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.067010 kubelet[3642]: E0130 14:03:58.064128 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.067010 kubelet[3642]: E0130 14:03:58.064507 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.067010 kubelet[3642]: W0130 14:03:58.064528 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.067010 kubelet[3642]: E0130 14:03:58.064551 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.068891 kubelet[3642]: E0130 14:03:58.067142 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.068891 kubelet[3642]: W0130 14:03:58.067170 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.068891 kubelet[3642]: E0130 14:03:58.067204 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.072515 kubelet[3642]: E0130 14:03:58.072445 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.072515 kubelet[3642]: W0130 14:03:58.072500 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.073638 kubelet[3642]: E0130 14:03:58.072538 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.076168 kubelet[3642]: E0130 14:03:58.075611 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.076168 kubelet[3642]: W0130 14:03:58.075649 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.076168 kubelet[3642]: E0130 14:03:58.075683 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.083430 kubelet[3642]: E0130 14:03:58.080194 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.083430 kubelet[3642]: W0130 14:03:58.080243 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.083430 kubelet[3642]: E0130 14:03:58.080279 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.083430 kubelet[3642]: E0130 14:03:58.082943 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.083430 kubelet[3642]: W0130 14:03:58.082973 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.083430 kubelet[3642]: E0130 14:03:58.083007 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.085782 kubelet[3642]: E0130 14:03:58.085730 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.085782 kubelet[3642]: W0130 14:03:58.085770 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.085987 kubelet[3642]: E0130 14:03:58.085804 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.091601 kubelet[3642]: E0130 14:03:58.089595 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.091601 kubelet[3642]: W0130 14:03:58.089633 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.091601 kubelet[3642]: E0130 14:03:58.089664 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.091858 kubelet[3642]: E0130 14:03:58.091800 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.091858 kubelet[3642]: W0130 14:03:58.091828 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.091976 kubelet[3642]: E0130 14:03:58.091862 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.095864 kubelet[3642]: E0130 14:03:58.095813 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.095864 kubelet[3642]: W0130 14:03:58.095852 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.096901 kubelet[3642]: E0130 14:03:58.095888 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.096901 kubelet[3642]: I0130 14:03:58.095950 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6a54a6e6-ac55-4daa-ba3e-bb307511354c-varrun\") pod \"csi-node-driver-9nwm6\" (UID: \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\") " pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:03:58.100431 kubelet[3642]: E0130 14:03:58.098174 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.100431 kubelet[3642]: W0130 14:03:58.098217 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.100431 kubelet[3642]: E0130 14:03:58.098268 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.100431 kubelet[3642]: I0130 14:03:58.099438 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ltv9\" (UniqueName: \"kubernetes.io/projected/6a54a6e6-ac55-4daa-ba3e-bb307511354c-kube-api-access-7ltv9\") pod \"csi-node-driver-9nwm6\" (UID: \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\") " pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:03:58.103601 kubelet[3642]: E0130 14:03:58.103562 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.105512 kubelet[3642]: W0130 14:03:58.104150 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.106136 kubelet[3642]: E0130 14:03:58.105747 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.107687 kubelet[3642]: E0130 14:03:58.107648 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.109659 kubelet[3642]: W0130 14:03:58.108223 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.109659 kubelet[3642]: E0130 14:03:58.108274 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.111267 kubelet[3642]: E0130 14:03:58.110589 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.112354 kubelet[3642]: W0130 14:03:58.111531 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.113511 kubelet[3642]: E0130 14:03:58.112563 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.115780 kubelet[3642]: I0130 14:03:58.114138 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6a54a6e6-ac55-4daa-ba3e-bb307511354c-registration-dir\") pod \"csi-node-driver-9nwm6\" (UID: \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\") " pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:03:58.117796 kubelet[3642]: E0130 14:03:58.117619 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.120611 kubelet[3642]: W0130 14:03:58.117661 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.120611 kubelet[3642]: E0130 14:03:58.118580 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.121592 kubelet[3642]: E0130 14:03:58.121559 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.123528 kubelet[3642]: W0130 14:03:58.121976 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.123528 kubelet[3642]: E0130 14:03:58.122070 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.126511 kubelet[3642]: E0130 14:03:58.125001 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.126897 kubelet[3642]: W0130 14:03:58.126725 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.128451 kubelet[3642]: E0130 14:03:58.127562 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.128451 kubelet[3642]: I0130 14:03:58.127637 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6a54a6e6-ac55-4daa-ba3e-bb307511354c-kubelet-dir\") pod \"csi-node-driver-9nwm6\" (UID: \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\") " pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:03:58.132740 kubelet[3642]: E0130 14:03:58.132510 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.132740 kubelet[3642]: W0130 14:03:58.132564 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.132740 kubelet[3642]: E0130 14:03:58.132618 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 30 14:03:58.137382 kubelet[3642]: E0130 14:03:58.136530 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.137382 kubelet[3642]: W0130 14:03:58.136578 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.137382 kubelet[3642]: E0130 14:03:58.136697 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.141911 containerd[2139]: time="2025-01-30T14:03:58.138484212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:03:58.141911 containerd[2139]: time="2025-01-30T14:03:58.141294480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:03:58.142129 kubelet[3642]: E0130 14:03:58.140959 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.142129 kubelet[3642]: W0130 14:03:58.140987 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.142129 kubelet[3642]: E0130 14:03:58.141082 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.142129 kubelet[3642]: I0130 14:03:58.141138 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6a54a6e6-ac55-4daa-ba3e-bb307511354c-socket-dir\") pod \"csi-node-driver-9nwm6\" (UID: \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\") " pod="calico-system/csi-node-driver-9nwm6"
Jan 30 14:03:58.142377 containerd[2139]: time="2025-01-30T14:03:58.141919500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:58.145114 containerd[2139]: time="2025-01-30T14:03:58.143270556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:03:58.147551 kubelet[3642]: E0130 14:03:58.146631 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.149517 kubelet[3642]: W0130 14:03:58.147717 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.149517 kubelet[3642]: E0130 14:03:58.147786 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.153033 kubelet[3642]: E0130 14:03:58.152346 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.153033 kubelet[3642]: W0130 14:03:58.152385 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.153033 kubelet[3642]: E0130 14:03:58.152447 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.156962 kubelet[3642]: E0130 14:03:58.155655 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.156962 kubelet[3642]: W0130 14:03:58.155693 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.156962 kubelet[3642]: E0130 14:03:58.155726 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.160631 kubelet[3642]: E0130 14:03:58.160589 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.162205 kubelet[3642]: W0130 14:03:58.160783 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.162205 kubelet[3642]: E0130 14:03:58.160825 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.237722 containerd[2139]: time="2025-01-30T14:03:58.236622372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-69d5fcd5c-wjv4j,Uid:1410af31-37f6-4030-9ad6-13ffaad35386,Namespace:calico-system,Attempt:0,} returns sandbox id \"48b92a680d79c7fcd3e3cd0bc1b7d810d5ef3d598ddd737ce0e1fae4e0da0c5d\""
Jan 30 14:03:58.240928 containerd[2139]: time="2025-01-30T14:03:58.240587400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 14:03:58.241984 kubelet[3642]: E0130 14:03:58.241947 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.242157 kubelet[3642]: W0130 14:03:58.242132 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.242312 kubelet[3642]: E0130 14:03:58.242271 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
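The PullImage request just logged goes to containerd's CRI ImageService rather than the RuntimeService used for sandboxes and containers. A sketch of the same pull issued directly, under the same illustrative socket-path assumption as the earlier sketch:

// Sketch: asking the CRI image service to pull the typha image that the
// kubelet requests in the log above.
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.29.1"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("image ref:", resp.ImageRef) // digest-qualified reference on success
}

The "PullImage ... returns image reference" entry later in the log is containerd reporting exactly this response, with the image id that the reference resolved to.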
Jan 30 14:03:58.243682 kubelet[3642]: E0130 14:03:58.243527 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.243682 kubelet[3642]: W0130 14:03:58.243558 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.243682 kubelet[3642]: E0130 14:03:58.243629 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.246447 kubelet[3642]: E0130 14:03:58.245307 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.246447 kubelet[3642]: W0130 14:03:58.245345 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.246447 kubelet[3642]: E0130 14:03:58.245395 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.247782 kubelet[3642]: E0130 14:03:58.247375 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.247782 kubelet[3642]: W0130 14:03:58.247403 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.249343 kubelet[3642]: E0130 14:03:58.248570 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.249343 kubelet[3642]: W0130 14:03:58.248605 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.249343 kubelet[3642]: E0130 14:03:58.249088 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 14:03:58.249343 kubelet[3642]: E0130 14:03:58.249102 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.249343 kubelet[3642]: E0130 14:03:58.249121 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 14:03:58.249343 kubelet[3642]: W0130 14:03:58.249107 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 14:03:58.249343 kubelet[3642]: E0130 14:03:58.249172 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 30 14:03:58.250753 kubelet[3642]: E0130 14:03:58.250368 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.250753 kubelet[3642]: W0130 14:03:58.250400 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.250753 kubelet[3642]: E0130 14:03:58.250441 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.251489 kubelet[3642]: E0130 14:03:58.251290 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.251489 kubelet[3642]: W0130 14:03:58.251321 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.251623 kubelet[3642]: E0130 14:03:58.251509 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.252259 kubelet[3642]: E0130 14:03:58.251911 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.252259 kubelet[3642]: W0130 14:03:58.251935 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.252259 kubelet[3642]: E0130 14:03:58.252184 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.253077 kubelet[3642]: E0130 14:03:58.252834 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.253077 kubelet[3642]: W0130 14:03:58.252861 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.253077 kubelet[3642]: E0130 14:03:58.253026 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.253886 kubelet[3642]: E0130 14:03:58.253552 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.253886 kubelet[3642]: W0130 14:03:58.253577 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.253886 kubelet[3642]: E0130 14:03:58.253694 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.255296 kubelet[3642]: E0130 14:03:58.255081 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.255296 kubelet[3642]: W0130 14:03:58.255114 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.255869 kubelet[3642]: E0130 14:03:58.255554 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.256633 kubelet[3642]: E0130 14:03:58.256332 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.256633 kubelet[3642]: W0130 14:03:58.256370 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.257661 kubelet[3642]: E0130 14:03:58.257349 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.257661 kubelet[3642]: W0130 14:03:58.257380 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.258984 kubelet[3642]: E0130 14:03:58.258619 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.258984 kubelet[3642]: W0130 14:03:58.258652 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.260805 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.260858 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.260891 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.261019 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.263345 kubelet[3642]: W0130 14:03:58.261037 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.261388 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.263345 kubelet[3642]: W0130 14:03:58.261408 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.261740 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.263345 kubelet[3642]: E0130 14:03:58.261779 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.264883 kubelet[3642]: E0130 14:03:58.264569 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.264883 kubelet[3642]: W0130 14:03:58.264736 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.265338 kubelet[3642]: E0130 14:03:58.264833 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.268649 kubelet[3642]: E0130 14:03:58.268295 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.268649 kubelet[3642]: W0130 14:03:58.268331 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.269332 kubelet[3642]: E0130 14:03:58.268455 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.270526 kubelet[3642]: E0130 14:03:58.270160 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.270526 kubelet[3642]: W0130 14:03:58.270338 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.272154 kubelet[3642]: E0130 14:03:58.271590 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.277962 kubelet[3642]: E0130 14:03:58.277434 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.277962 kubelet[3642]: W0130 14:03:58.277950 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.279653 kubelet[3642]: E0130 14:03:58.278410 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.282694 kubelet[3642]: E0130 14:03:58.282606 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.282694 kubelet[3642]: W0130 14:03:58.282643 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.284447 kubelet[3642]: E0130 14:03:58.283977 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.285578 kubelet[3642]: E0130 14:03:58.284938 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.286484 kubelet[3642]: W0130 14:03:58.286126 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.287226 kubelet[3642]: E0130 14:03:58.287041 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.287226 kubelet[3642]: W0130 14:03:58.287068 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.287927 kubelet[3642]: E0130 14:03:58.287900 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.288226 kubelet[3642]: W0130 14:03:58.288055 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.288226 kubelet[3642]: E0130 14:03:58.288093 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.288226 kubelet[3642]: E0130 14:03:58.288151 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.288226 kubelet[3642]: E0130 14:03:58.288180 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:03:58.297308 kubelet[3642]: E0130 14:03:58.297249 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:03:58.297308 kubelet[3642]: W0130 14:03:58.297289 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:03:58.298364 kubelet[3642]: E0130 14:03:58.297325 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:03:58.300947 containerd[2139]: time="2025-01-30T14:03:58.300874033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bwzrs,Uid:f605331c-7978-43f6-8347-6fd2777be317,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\"" Jan 30 14:03:59.632535 kubelet[3642]: E0130 14:03:59.628630 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:03:59.669666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163570969.mount: Deactivated successfully. Jan 30 14:04:00.495995 containerd[2139]: time="2025-01-30T14:04:00.495932524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:00.497463 containerd[2139]: time="2025-01-30T14:04:00.497407156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 30 14:04:00.498691 containerd[2139]: time="2025-01-30T14:04:00.498618652Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:00.508457 containerd[2139]: time="2025-01-30T14:04:00.507822868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:00.509756 containerd[2139]: time="2025-01-30T14:04:00.509683420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.26903362s" Jan 30 14:04:00.509877 containerd[2139]: time="2025-01-30T14:04:00.509750536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 30 14:04:00.513846 containerd[2139]: time="2025-01-30T14:04:00.513782872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 14:04:00.537038 containerd[2139]: time="2025-01-30T14:04:00.536682880Z" level=info msg="CreateContainer within sandbox \"48b92a680d79c7fcd3e3cd0bc1b7d810d5ef3d598ddd737ce0e1fae4e0da0c5d\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 14:04:00.568494 containerd[2139]: time="2025-01-30T14:04:00.568403116Z" level=info msg="CreateContainer within sandbox \"48b92a680d79c7fcd3e3cd0bc1b7d810d5ef3d598ddd737ce0e1fae4e0da0c5d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e1df3f3f07d9299c997cbbf30a3f47114fc4bdc4d6ff7e702b1565cec853162c\"" Jan 30 14:04:00.572932 containerd[2139]: time="2025-01-30T14:04:00.569307052Z" level=info msg="StartContainer for \"e1df3f3f07d9299c997cbbf30a3f47114fc4bdc4d6ff7e702b1565cec853162c\"" Jan 30 14:04:00.704786 containerd[2139]: time="2025-01-30T14:04:00.704721137Z" level=info msg="StartContainer for \"e1df3f3f07d9299c997cbbf30a3f47114fc4bdc4d6ff7e702b1565cec853162c\" returns successfully" Jan 30 14:04:00.812869 kubelet[3642]: E0130 14:04:00.812817 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.812869 kubelet[3642]: W0130 14:04:00.812855 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.813564 kubelet[3642]: E0130 14:04:00.812889 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.813564 kubelet[3642]: E0130 14:04:00.813337 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.813564 kubelet[3642]: W0130 14:04:00.813356 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.813564 kubelet[3642]: E0130 14:04:00.813385 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.813751 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.816441 kubelet[3642]: W0130 14:04:00.813769 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.813791 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.814110 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.816441 kubelet[3642]: W0130 14:04:00.814129 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.814150 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.814567 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.816441 kubelet[3642]: W0130 14:04:00.814588 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.814610 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.816441 kubelet[3642]: E0130 14:04:00.814926 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.819050 kubelet[3642]: W0130 14:04:00.814940 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.814960 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.815222 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.819050 kubelet[3642]: W0130 14:04:00.815238 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.815256 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.815590 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.819050 kubelet[3642]: W0130 14:04:00.815607 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.815628 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.819050 kubelet[3642]: E0130 14:04:00.815958 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.819050 kubelet[3642]: W0130 14:04:00.815976 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.816018 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.816392 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.821594 kubelet[3642]: W0130 14:04:00.816413 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.816436 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.816826 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.821594 kubelet[3642]: W0130 14:04:00.816844 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.816863 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.817144 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.821594 kubelet[3642]: W0130 14:04:00.817162 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.821594 kubelet[3642]: E0130 14:04:00.817183 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.817565 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.824538 kubelet[3642]: W0130 14:04:00.817585 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.817606 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.817908 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.824538 kubelet[3642]: W0130 14:04:00.817924 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.817948 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.818233 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.824538 kubelet[3642]: W0130 14:04:00.818249 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.824538 kubelet[3642]: E0130 14:04:00.818268 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.834501 kubelet[3642]: I0130 14:04:00.832962 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-69d5fcd5c-wjv4j" podStartSLOduration=1.558962725 podStartE2EDuration="3.832938941s" podCreationTimestamp="2025-01-30 14:03:57 +0000 UTC" firstStartedPulling="2025-01-30 14:03:58.23935056 +0000 UTC m=+23.835827915" lastFinishedPulling="2025-01-30 14:04:00.5133268 +0000 UTC m=+26.109804131" observedRunningTime="2025-01-30 14:04:00.832747577 +0000 UTC m=+26.429224932" watchObservedRunningTime="2025-01-30 14:04:00.832938941 +0000 UTC m=+26.429416308" Jan 30 14:04:00.877209 kubelet[3642]: E0130 14:04:00.877153 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.877463 kubelet[3642]: W0130 14:04:00.877414 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.878656 kubelet[3642]: E0130 14:04:00.878608 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.881054 kubelet[3642]: E0130 14:04:00.880827 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.881054 kubelet[3642]: W0130 14:04:00.880885 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.881657 kubelet[3642]: E0130 14:04:00.881384 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.883726 kubelet[3642]: E0130 14:04:00.883429 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.883726 kubelet[3642]: W0130 14:04:00.883466 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.883726 kubelet[3642]: E0130 14:04:00.883558 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.885251 kubelet[3642]: E0130 14:04:00.884814 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.885251 kubelet[3642]: W0130 14:04:00.884912 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.885636 kubelet[3642]: E0130 14:04:00.885409 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.888341 kubelet[3642]: E0130 14:04:00.887071 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.888341 kubelet[3642]: W0130 14:04:00.887105 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.888583 kubelet[3642]: E0130 14:04:00.888552 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.889648 kubelet[3642]: E0130 14:04:00.888748 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.889648 kubelet[3642]: W0130 14:04:00.888771 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.891089 kubelet[3642]: E0130 14:04:00.889887 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.891615 kubelet[3642]: E0130 14:04:00.891370 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.891615 kubelet[3642]: W0130 14:04:00.891403 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.892199 kubelet[3642]: E0130 14:04:00.891814 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.893606 kubelet[3642]: E0130 14:04:00.892979 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.893606 kubelet[3642]: W0130 14:04:00.893010 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.894972 kubelet[3642]: E0130 14:04:00.894754 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.894972 kubelet[3642]: E0130 14:04:00.894832 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.894972 kubelet[3642]: W0130 14:04:00.894853 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.896309 kubelet[3642]: E0130 14:04:00.895439 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.897069 kubelet[3642]: E0130 14:04:00.896765 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.897069 kubelet[3642]: W0130 14:04:00.896796 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.897541 kubelet[3642]: E0130 14:04:00.897261 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.898435 kubelet[3642]: E0130 14:04:00.898404 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.898794 kubelet[3642]: W0130 14:04:00.898518 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.899350 kubelet[3642]: E0130 14:04:00.899324 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.899596 kubelet[3642]: W0130 14:04:00.899524 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.899775 kubelet[3642]: E0130 14:04:00.899556 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.899775 kubelet[3642]: E0130 14:04:00.899701 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.900606 kubelet[3642]: E0130 14:04:00.900521 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.900606 kubelet[3642]: W0130 14:04:00.900555 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.900972 kubelet[3642]: E0130 14:04:00.900790 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:00.901636 kubelet[3642]: E0130 14:04:00.901541 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.901636 kubelet[3642]: W0130 14:04:00.901602 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.902030 kubelet[3642]: E0130 14:04:00.901852 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.902809 kubelet[3642]: E0130 14:04:00.902601 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.902809 kubelet[3642]: W0130 14:04:00.902632 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.902809 kubelet[3642]: E0130 14:04:00.902677 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.904085 kubelet[3642]: E0130 14:04:00.903932 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.904085 kubelet[3642]: W0130 14:04:00.903964 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.905335 kubelet[3642]: E0130 14:04:00.904746 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.906258 kubelet[3642]: E0130 14:04:00.906180 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.907387 kubelet[3642]: W0130 14:04:00.906525 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.907387 kubelet[3642]: E0130 14:04:00.906571 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:00.907949 kubelet[3642]: E0130 14:04:00.907917 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:00.908069 kubelet[3642]: W0130 14:04:00.908044 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:00.908202 kubelet[3642]: E0130 14:04:00.908176 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.626994 kubelet[3642]: E0130 14:04:01.626929 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:01.814591 kubelet[3642]: I0130 14:04:01.810014 3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:01.827129 kubelet[3642]: E0130 14:04:01.827073 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.827268 kubelet[3642]: W0130 14:04:01.827227 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.827324 kubelet[3642]: E0130 14:04:01.827265 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.828457 kubelet[3642]: E0130 14:04:01.828176 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.828457 kubelet[3642]: W0130 14:04:01.828245 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.828457 kubelet[3642]: E0130 14:04:01.828278 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.829673 kubelet[3642]: E0130 14:04:01.829058 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.829673 kubelet[3642]: W0130 14:04:01.829093 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.829673 kubelet[3642]: E0130 14:04:01.829156 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.829909 kubelet[3642]: E0130 14:04:01.829721 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.829909 kubelet[3642]: W0130 14:04:01.829741 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.829909 kubelet[3642]: E0130 14:04:01.829797 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.831426 kubelet[3642]: E0130 14:04:01.830925 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.831426 kubelet[3642]: W0130 14:04:01.831000 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.831426 kubelet[3642]: E0130 14:04:01.831033 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.831713 kubelet[3642]: E0130 14:04:01.831583 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.831713 kubelet[3642]: W0130 14:04:01.831661 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.831818 kubelet[3642]: E0130 14:04:01.831719 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.833098 kubelet[3642]: E0130 14:04:01.832294 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.833098 kubelet[3642]: W0130 14:04:01.832327 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.833098 kubelet[3642]: E0130 14:04:01.832354 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.833098 kubelet[3642]: E0130 14:04:01.832745 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.833098 kubelet[3642]: W0130 14:04:01.832765 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.833098 kubelet[3642]: E0130 14:04:01.832789 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.833324 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.836083 kubelet[3642]: W0130 14:04:01.833534 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.833567 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.834737 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.836083 kubelet[3642]: W0130 14:04:01.834765 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.834798 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.835654 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.836083 kubelet[3642]: W0130 14:04:01.835686 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.836083 kubelet[3642]: E0130 14:04:01.835717 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.837202 kubelet[3642]: E0130 14:04:01.837108 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.837202 kubelet[3642]: W0130 14:04:01.837141 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.837556 kubelet[3642]: E0130 14:04:01.837287 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.838146 kubelet[3642]: E0130 14:04:01.838001 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.838146 kubelet[3642]: W0130 14:04:01.838029 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.838146 kubelet[3642]: E0130 14:04:01.838059 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.839420 kubelet[3642]: E0130 14:04:01.838939 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.839420 kubelet[3642]: W0130 14:04:01.839011 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.839420 kubelet[3642]: E0130 14:04:01.839042 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.841087 kubelet[3642]: E0130 14:04:01.840775 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.841087 kubelet[3642]: W0130 14:04:01.840825 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.841087 kubelet[3642]: E0130 14:04:01.840860 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.901624 kubelet[3642]: E0130 14:04:01.901490 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.902693 kubelet[3642]: W0130 14:04:01.901768 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.902693 kubelet[3642]: E0130 14:04:01.901809 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.903513 kubelet[3642]: E0130 14:04:01.903453 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.904177 kubelet[3642]: W0130 14:04:01.904101 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.905023 kubelet[3642]: E0130 14:04:01.904573 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.905572 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.907013 kubelet[3642]: W0130 14:04:01.905612 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.905648 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.906086 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.907013 kubelet[3642]: W0130 14:04:01.906110 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.906135 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.906496 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.907013 kubelet[3642]: W0130 14:04:01.906519 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.906542 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.907013 kubelet[3642]: E0130 14:04:01.906884 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.908809 kubelet[3642]: W0130 14:04:01.906903 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.908809 kubelet[3642]: E0130 14:04:01.906926 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.908809 kubelet[3642]: E0130 14:04:01.907601 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.908809 kubelet[3642]: W0130 14:04:01.907629 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.908809 kubelet[3642]: E0130 14:04:01.907668 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.908809 kubelet[3642]: E0130 14:04:01.908265 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.908809 kubelet[3642]: W0130 14:04:01.908288 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.908809 kubelet[3642]: E0130 14:04:01.908328 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.909207 kubelet[3642]: E0130 14:04:01.909038 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.909207 kubelet[3642]: W0130 14:04:01.909056 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.909207 kubelet[3642]: E0130 14:04:01.909089 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.910228 kubelet[3642]: E0130 14:04:01.909559 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.910228 kubelet[3642]: W0130 14:04:01.909579 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.910228 kubelet[3642]: E0130 14:04:01.909603 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.910228 kubelet[3642]: E0130 14:04:01.909994 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.910228 kubelet[3642]: W0130 14:04:01.910013 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.910228 kubelet[3642]: E0130 14:04:01.910090 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.910634 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.916175 kubelet[3642]: W0130 14:04:01.910654 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.910691 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.911442 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.916175 kubelet[3642]: W0130 14:04:01.911467 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.911532 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.912193 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.916175 kubelet[3642]: W0130 14:04:01.912216 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.912242 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.916175 kubelet[3642]: E0130 14:04:01.912731 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.916828 kubelet[3642]: W0130 14:04:01.912754 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.916828 kubelet[3642]: E0130 14:04:01.912833 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.916828 kubelet[3642]: E0130 14:04:01.913085 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.916828 kubelet[3642]: W0130 14:04:01.913102 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.916828 kubelet[3642]: E0130 14:04:01.913123 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.919602 kubelet[3642]: E0130 14:04:01.918649 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.919602 kubelet[3642]: W0130 14:04:01.918685 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.919602 kubelet[3642]: E0130 14:04:01.918967 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 14:04:01.920347 kubelet[3642]: E0130 14:04:01.919909 3642 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 14:04:01.920347 kubelet[3642]: W0130 14:04:01.919935 3642 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 14:04:01.920580 kubelet[3642]: E0130 14:04:01.920514 3642 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 14:04:01.992182 containerd[2139]: time="2025-01-30T14:04:01.992096443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:01.994220 containerd[2139]: time="2025-01-30T14:04:01.994058359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 30 14:04:01.996587 containerd[2139]: time="2025-01-30T14:04:01.996541303Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:02.001848 containerd[2139]: time="2025-01-30T14:04:02.001723131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:02.003728 containerd[2139]: time="2025-01-30T14:04:02.003502911Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.489627555s" Jan 30 14:04:02.003728 containerd[2139]: time="2025-01-30T14:04:02.003560499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 30 14:04:02.010923 containerd[2139]: time="2025-01-30T14:04:02.010855731Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 14:04:02.047932 containerd[2139]: time="2025-01-30T14:04:02.047853891Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405\"" Jan 30 14:04:02.048614 containerd[2139]: time="2025-01-30T14:04:02.048564663Z" level=info msg="StartContainer for \"7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405\"" Jan 30 14:04:02.173349 containerd[2139]: time="2025-01-30T14:04:02.172556584Z" level=info msg="StartContainer for \"7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405\" returns successfully" Jan 30 14:04:02.484980 containerd[2139]: time="2025-01-30T14:04:02.484619610Z" level=info msg="shim disconnected" id=7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405 namespace=k8s.io Jan 30 14:04:02.484980 containerd[2139]: time="2025-01-30T14:04:02.484725090Z" level=warning msg="cleaning up after shim disconnected" id=7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405 namespace=k8s.io Jan 30 14:04:02.484980 containerd[2139]: time="2025-01-30T14:04:02.484747086Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:04:02.524019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a2bfcb0c1e90dc69541280837e0c8bbabc7985f63d6ae4586a7b10385501405-rootfs.mount: Deactivated successfully. 
Jan 30 14:04:02.816869 containerd[2139]: time="2025-01-30T14:04:02.816784831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 14:04:03.627587 kubelet[3642]: E0130 14:04:03.626997 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:05.626884 kubelet[3642]: E0130 14:04:05.626751 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:06.718749 containerd[2139]: time="2025-01-30T14:04:06.718685111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:06.720259 containerd[2139]: time="2025-01-30T14:04:06.720173615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 30 14:04:06.721176 containerd[2139]: time="2025-01-30T14:04:06.721122731Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:06.729681 containerd[2139]: time="2025-01-30T14:04:06.729620567Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:06.734852 containerd[2139]: time="2025-01-30T14:04:06.734788619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.917916368s" Jan 30 14:04:06.735065 containerd[2139]: time="2025-01-30T14:04:06.735034307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 30 14:04:06.742092 containerd[2139]: time="2025-01-30T14:04:06.741566579Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:04:06.764536 containerd[2139]: time="2025-01-30T14:04:06.764286635Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77\"" Jan 30 14:04:06.767524 containerd[2139]: time="2025-01-30T14:04:06.765313691Z" level=info msg="StartContainer for \"2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77\"" Jan 30 14:04:06.872507 containerd[2139]: time="2025-01-30T14:04:06.872414315Z" level=info msg="StartContainer for \"2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77\" returns successfully" Jan 30 14:04:07.627352 
kubelet[3642]: E0130 14:04:07.627269 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:07.760511 containerd[2139]: time="2025-01-30T14:04:07.760413180Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:04:07.769644 kubelet[3642]: I0130 14:04:07.769328 3642 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 14:04:07.828493 kubelet[3642]: I0130 14:04:07.827415 3642 topology_manager.go:215] "Topology Admit Handler" podUID="42c1d2cd-78ec-4529-88c1-93534af9989c" podNamespace="kube-system" podName="coredns-7db6d8ff4d-n6wgq" Jan 30 14:04:07.829728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77-rootfs.mount: Deactivated successfully. Jan 30 14:04:07.846533 kubelet[3642]: I0130 14:04:07.843312 3642 topology_manager.go:215] "Topology Admit Handler" podUID="fbfb33fd-7135-4e07-94f5-b6878a9da2f6" podNamespace="calico-system" podName="calico-kube-controllers-5d545d88fb-6s48d" Jan 30 14:04:07.846533 kubelet[3642]: I0130 14:04:07.843697 3642 topology_manager.go:215] "Topology Admit Handler" podUID="0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-8tv4w" Jan 30 14:04:07.854031 kubelet[3642]: I0130 14:04:07.853644 3642 topology_manager.go:215] "Topology Admit Handler" podUID="ac1e96c2-de99-4017-b353-ed7792f81a20" podNamespace="calico-apiserver" podName="calico-apiserver-5979bcf7dd-9l7bs" Jan 30 14:04:07.900293 kubelet[3642]: I0130 14:04:07.899711 3642 topology_manager.go:215] "Topology Admit Handler" podUID="ba1c580d-2c05-4884-a289-a60900c388b5" podNamespace="calico-apiserver" podName="calico-apiserver-5979bcf7dd-sqhjj" Jan 30 14:04:07.954554 kubelet[3642]: I0130 14:04:07.954487 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fbfb33fd-7135-4e07-94f5-b6878a9da2f6-tigera-ca-bundle\") pod \"calico-kube-controllers-5d545d88fb-6s48d\" (UID: \"fbfb33fd-7135-4e07-94f5-b6878a9da2f6\") " pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" Jan 30 14:04:07.954736 kubelet[3642]: I0130 14:04:07.954662 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k48rk\" (UniqueName: \"kubernetes.io/projected/fbfb33fd-7135-4e07-94f5-b6878a9da2f6-kube-api-access-k48rk\") pod \"calico-kube-controllers-5d545d88fb-6s48d\" (UID: \"fbfb33fd-7135-4e07-94f5-b6878a9da2f6\") " pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" Jan 30 14:04:07.955662 kubelet[3642]: I0130 14:04:07.954773 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrc2m\" (UniqueName: \"kubernetes.io/projected/ac1e96c2-de99-4017-b353-ed7792f81a20-kube-api-access-zrc2m\") pod \"calico-apiserver-5979bcf7dd-9l7bs\" (UID: \"ac1e96c2-de99-4017-b353-ed7792f81a20\") " pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" Jan 30 14:04:07.955662 kubelet[3642]: 
I0130 14:04:07.954880 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl7fv\" (UniqueName: \"kubernetes.io/projected/0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e-kube-api-access-hl7fv\") pod \"coredns-7db6d8ff4d-8tv4w\" (UID: \"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e\") " pod="kube-system/coredns-7db6d8ff4d-8tv4w" Jan 30 14:04:07.955662 kubelet[3642]: I0130 14:04:07.954928 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac1e96c2-de99-4017-b353-ed7792f81a20-calico-apiserver-certs\") pod \"calico-apiserver-5979bcf7dd-9l7bs\" (UID: \"ac1e96c2-de99-4017-b353-ed7792f81a20\") " pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" Jan 30 14:04:07.955662 kubelet[3642]: I0130 14:04:07.954969 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9mf\" (UniqueName: \"kubernetes.io/projected/42c1d2cd-78ec-4529-88c1-93534af9989c-kube-api-access-js9mf\") pod \"coredns-7db6d8ff4d-n6wgq\" (UID: \"42c1d2cd-78ec-4529-88c1-93534af9989c\") " pod="kube-system/coredns-7db6d8ff4d-n6wgq" Jan 30 14:04:07.955662 kubelet[3642]: I0130 14:04:07.955026 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e-config-volume\") pod \"coredns-7db6d8ff4d-8tv4w\" (UID: \"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e\") " pod="kube-system/coredns-7db6d8ff4d-8tv4w" Jan 30 14:04:07.955973 kubelet[3642]: I0130 14:04:07.955082 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/42c1d2cd-78ec-4529-88c1-93534af9989c-config-volume\") pod \"coredns-7db6d8ff4d-n6wgq\" (UID: \"42c1d2cd-78ec-4529-88c1-93534af9989c\") " pod="kube-system/coredns-7db6d8ff4d-n6wgq" Jan 30 14:04:08.060781 kubelet[3642]: I0130 14:04:08.056833 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ba1c580d-2c05-4884-a289-a60900c388b5-calico-apiserver-certs\") pod \"calico-apiserver-5979bcf7dd-sqhjj\" (UID: \"ba1c580d-2c05-4884-a289-a60900c388b5\") " pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" Jan 30 14:04:08.060781 kubelet[3642]: I0130 14:04:08.056912 3642 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk8v6\" (UniqueName: \"kubernetes.io/projected/ba1c580d-2c05-4884-a289-a60900c388b5-kube-api-access-wk8v6\") pod \"calico-apiserver-5979bcf7dd-sqhjj\" (UID: \"ba1c580d-2c05-4884-a289-a60900c388b5\") " pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" Jan 30 14:04:08.158979 containerd[2139]: time="2025-01-30T14:04:08.158841406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8tv4w,Uid:0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:08.166776 containerd[2139]: time="2025-01-30T14:04:08.166705618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n6wgq,Uid:42c1d2cd-78ec-4529-88c1-93534af9989c,Namespace:kube-system,Attempt:0,}" Jan 30 14:04:08.188420 containerd[2139]: time="2025-01-30T14:04:08.188366254Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-5d545d88fb-6s48d,Uid:fbfb33fd-7135-4e07-94f5-b6878a9da2f6,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:08.222519 containerd[2139]: time="2025-01-30T14:04:08.222347302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-9l7bs,Uid:ac1e96c2-de99-4017-b353-ed7792f81a20,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:04:08.228241 containerd[2139]: time="2025-01-30T14:04:08.228190450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-sqhjj,Uid:ba1c580d-2c05-4884-a289-a60900c388b5,Namespace:calico-apiserver,Attempt:0,}" Jan 30 14:04:08.713903 containerd[2139]: time="2025-01-30T14:04:08.713800320Z" level=info msg="shim disconnected" id=2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77 namespace=k8s.io Jan 30 14:04:08.714355 containerd[2139]: time="2025-01-30T14:04:08.713904360Z" level=warning msg="cleaning up after shim disconnected" id=2e96dfc703fae1d07e9a8ecb466e36ca1132b791add39f74587eb194f210db77 namespace=k8s.io Jan 30 14:04:08.714355 containerd[2139]: time="2025-01-30T14:04:08.713948916Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:04:08.959454 containerd[2139]: time="2025-01-30T14:04:08.959405870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 14:04:09.032701 containerd[2139]: time="2025-01-30T14:04:09.032611762Z" level=error msg="Failed to destroy network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.035496 containerd[2139]: time="2025-01-30T14:04:09.033843778Z" level=error msg="encountered an error cleaning up failed sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.035496 containerd[2139]: time="2025-01-30T14:04:09.033946546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d545d88fb-6s48d,Uid:fbfb33fd-7135-4e07-94f5-b6878a9da2f6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.036256 kubelet[3642]: E0130 14:04:09.036048 3642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.036256 kubelet[3642]: E0130 14:04:09.036156 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" Jan 30 14:04:09.036256 kubelet[3642]: E0130 14:04:09.036190 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" Jan 30 14:04:09.037193 kubelet[3642]: E0130 14:04:09.036261 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d545d88fb-6s48d_calico-system(fbfb33fd-7135-4e07-94f5-b6878a9da2f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d545d88fb-6s48d_calico-system(fbfb33fd-7135-4e07-94f5-b6878a9da2f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" podUID="fbfb33fd-7135-4e07-94f5-b6878a9da2f6" Jan 30 14:04:09.044426 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b-shm.mount: Deactivated successfully. Jan 30 14:04:09.117301 containerd[2139]: time="2025-01-30T14:04:09.117000478Z" level=error msg="Failed to destroy network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.118554 containerd[2139]: time="2025-01-30T14:04:09.117637858Z" level=error msg="Failed to destroy network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.120207 containerd[2139]: time="2025-01-30T14:04:09.119171842Z" level=error msg="encountered an error cleaning up failed sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.120207 containerd[2139]: time="2025-01-30T14:04:09.119305150Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-9l7bs,Uid:ac1e96c2-de99-4017-b353-ed7792f81a20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.120457 kubelet[3642]: E0130 14:04:09.119652 3642 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.120457 kubelet[3642]: E0130 14:04:09.119760 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" Jan 30 14:04:09.120457 kubelet[3642]: E0130 14:04:09.119823 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" Jan 30 14:04:09.122598 kubelet[3642]: E0130 14:04:09.120903 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5979bcf7dd-9l7bs_calico-apiserver(ac1e96c2-de99-4017-b353-ed7792f81a20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5979bcf7dd-9l7bs_calico-apiserver(ac1e96c2-de99-4017-b353-ed7792f81a20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" podUID="ac1e96c2-de99-4017-b353-ed7792f81a20" Jan 30 14:04:09.125199 containerd[2139]: time="2025-01-30T14:04:09.124685026Z" level=error msg="encountered an error cleaning up failed sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.124977 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d-shm.mount: Deactivated successfully. Jan 30 14:04:09.125923 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25-shm.mount: Deactivated successfully. 
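The sandbox failures above and below all repeat one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and calico/node is still being pulled at this point. A sketch of that precondition check, assuming only the path quoted in the error message:

    // nodename_check.go - a sketch of the precondition the Calico CNI plugin
    // enforces; the path comes from the error text above, and the file is
    // written by the calico/node container after it starts.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            // This is the state the sandboxes above failed in.
            fmt.Fprintln(os.Stderr, "calico/node not ready:", err)
            os.Exit(1)
        }
        fmt.Println("calico node name:", strings.TrimSpace(string(data)))
    }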
Jan 30 14:04:09.129709 containerd[2139]: time="2025-01-30T14:04:09.125815318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8tv4w,Uid:0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.137295 containerd[2139]: time="2025-01-30T14:04:09.135229643Z" level=error msg="Failed to destroy network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.137295 containerd[2139]: time="2025-01-30T14:04:09.137052971Z" level=error msg="encountered an error cleaning up failed sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.137295 containerd[2139]: time="2025-01-30T14:04:09.137137631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n6wgq,Uid:42c1d2cd-78ec-4529-88c1-93534af9989c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.137593 kubelet[3642]: E0130 14:04:09.135236 3642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.137593 kubelet[3642]: E0130 14:04:09.135314 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8tv4w" Jan 30 14:04:09.137593 kubelet[3642]: E0130 14:04:09.135352 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-8tv4w" Jan 30 14:04:09.141156 kubelet[3642]: E0130 14:04:09.139666 3642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.141156 kubelet[3642]: E0130 14:04:09.139748 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n6wgq" Jan 30 14:04:09.141156 kubelet[3642]: E0130 14:04:09.139783 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-n6wgq" Jan 30 14:04:09.142066 kubelet[3642]: E0130 14:04:09.139844 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-n6wgq_kube-system(42c1d2cd-78ec-4529-88c1-93534af9989c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-n6wgq_kube-system(42c1d2cd-78ec-4529-88c1-93534af9989c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n6wgq" podUID="42c1d2cd-78ec-4529-88c1-93534af9989c" Jan 30 14:04:09.142066 kubelet[3642]: E0130 14:04:09.137563 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-8tv4w_kube-system(0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-8tv4w_kube-system(0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8tv4w" podUID="0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e" Jan 30 14:04:09.145564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a-shm.mount: Deactivated successfully. 
Jan 30 14:04:09.148015 containerd[2139]: time="2025-01-30T14:04:09.146249891Z" level=error msg="Failed to destroy network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.150086 containerd[2139]: time="2025-01-30T14:04:09.149992355Z" level=error msg="encountered an error cleaning up failed sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.150222 containerd[2139]: time="2025-01-30T14:04:09.150132575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-sqhjj,Uid:ba1c580d-2c05-4884-a289-a60900c388b5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.150628 kubelet[3642]: E0130 14:04:09.150581 3642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.150805 kubelet[3642]: E0130 14:04:09.150776 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" Jan 30 14:04:09.151044 kubelet[3642]: E0130 14:04:09.150902 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" Jan 30 14:04:09.151044 kubelet[3642]: E0130 14:04:09.150984 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5979bcf7dd-sqhjj_calico-apiserver(ba1c580d-2c05-4884-a289-a60900c388b5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5979bcf7dd-sqhjj_calico-apiserver(ba1c580d-2c05-4884-a289-a60900c388b5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" podUID="ba1c580d-2c05-4884-a289-a60900c388b5" Jan 30 14:04:09.632527 containerd[2139]: time="2025-01-30T14:04:09.632451157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nwm6,Uid:6a54a6e6-ac55-4daa-ba3e-bb307511354c,Namespace:calico-system,Attempt:0,}" Jan 30 14:04:09.728773 containerd[2139]: time="2025-01-30T14:04:09.728437789Z" level=error msg="Failed to destroy network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.729693 containerd[2139]: time="2025-01-30T14:04:09.729455653Z" level=error msg="encountered an error cleaning up failed sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.729693 containerd[2139]: time="2025-01-30T14:04:09.729597757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nwm6,Uid:6a54a6e6-ac55-4daa-ba3e-bb307511354c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.730548 kubelet[3642]: E0130 14:04:09.730281 3642 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:09.730548 kubelet[3642]: E0130 14:04:09.730367 3642 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:04:09.730548 kubelet[3642]: E0130 14:04:09.730400 3642 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9nwm6" Jan 30 14:04:09.732707 kubelet[3642]: E0130 14:04:09.730545 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9nwm6_calico-system(6a54a6e6-ac55-4daa-ba3e-bb307511354c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-9nwm6_calico-system(6a54a6e6-ac55-4daa-ba3e-bb307511354c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:09.823596 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07-shm.mount: Deactivated successfully. Jan 30 14:04:09.959515 kubelet[3642]: I0130 14:04:09.959294 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:09.963433 containerd[2139]: time="2025-01-30T14:04:09.962848431Z" level=info msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" Jan 30 14:04:09.963433 containerd[2139]: time="2025-01-30T14:04:09.963117795Z" level=info msg="Ensure that sandbox 92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25 in task-service has been cleanup successfully" Jan 30 14:04:09.967714 kubelet[3642]: I0130 14:04:09.966612 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:09.969692 containerd[2139]: time="2025-01-30T14:04:09.969585519Z" level=info msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" Jan 30 14:04:09.970789 containerd[2139]: time="2025-01-30T14:04:09.970607619Z" level=info msg="Ensure that sandbox 36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea in task-service has been cleanup successfully" Jan 30 14:04:09.977018 kubelet[3642]: I0130 14:04:09.976749 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:09.980262 containerd[2139]: time="2025-01-30T14:04:09.980197995Z" level=info msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" Jan 30 14:04:09.980702 containerd[2139]: time="2025-01-30T14:04:09.980534343Z" level=info msg="Ensure that sandbox 8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d in task-service has been cleanup successfully" Jan 30 14:04:09.988242 kubelet[3642]: I0130 14:04:09.988194 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:09.994212 containerd[2139]: time="2025-01-30T14:04:09.993057027Z" level=info msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" Jan 30 14:04:09.997965 kubelet[3642]: I0130 14:04:09.997817 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:10.002979 containerd[2139]: time="2025-01-30T14:04:10.002564279Z" level=info msg="Ensure that sandbox 5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b in task-service has been cleanup successfully" Jan 30 14:04:10.004001 containerd[2139]: time="2025-01-30T14:04:10.000898103Z" level=info msg="StopPodSandbox for 
\"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" Jan 30 14:04:10.004347 containerd[2139]: time="2025-01-30T14:04:10.004281827Z" level=info msg="Ensure that sandbox 9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07 in task-service has been cleanup successfully" Jan 30 14:04:10.019922 kubelet[3642]: I0130 14:04:10.019877 3642 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:10.022729 containerd[2139]: time="2025-01-30T14:04:10.022660403Z" level=info msg="StopPodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" Jan 30 14:04:10.023395 containerd[2139]: time="2025-01-30T14:04:10.022998635Z" level=info msg="Ensure that sandbox de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a in task-service has been cleanup successfully" Jan 30 14:04:10.142144 containerd[2139]: time="2025-01-30T14:04:10.141922872Z" level=error msg="StopPodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" failed" error="failed to destroy network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.143832 kubelet[3642]: E0130 14:04:10.142464 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:10.143832 kubelet[3642]: E0130 14:04:10.143571 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07"} Jan 30 14:04:10.143832 kubelet[3642]: E0130 14:04:10.143690 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ba1c580d-2c05-4884-a289-a60900c388b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.143832 kubelet[3642]: E0130 14:04:10.143755 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ba1c580d-2c05-4884-a289-a60900c388b5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" podUID="ba1c580d-2c05-4884-a289-a60900c388b5" Jan 30 14:04:10.156226 containerd[2139]: time="2025-01-30T14:04:10.155791764Z" level=error msg="StopPodSandbox for 
\"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" failed" error="failed to destroy network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.156387 kubelet[3642]: E0130 14:04:10.156120 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:10.157186 kubelet[3642]: E0130 14:04:10.156624 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a"} Jan 30 14:04:10.157186 kubelet[3642]: E0130 14:04:10.156722 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"42c1d2cd-78ec-4529-88c1-93534af9989c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.157186 kubelet[3642]: E0130 14:04:10.157101 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"42c1d2cd-78ec-4529-88c1-93534af9989c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-n6wgq" podUID="42c1d2cd-78ec-4529-88c1-93534af9989c" Jan 30 14:04:10.187060 containerd[2139]: time="2025-01-30T14:04:10.186725184Z" level=error msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" failed" error="failed to destroy network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.187875 kubelet[3642]: E0130 14:04:10.187611 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:10.187875 kubelet[3642]: E0130 14:04:10.187695 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea"} Jan 30 14:04:10.187875 kubelet[3642]: E0130 14:04:10.187753 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.187875 kubelet[3642]: E0130 14:04:10.187794 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a54a6e6-ac55-4daa-ba3e-bb307511354c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9nwm6" podUID="6a54a6e6-ac55-4daa-ba3e-bb307511354c" Jan 30 14:04:10.190668 containerd[2139]: time="2025-01-30T14:04:10.190561596Z" level=error msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" failed" error="failed to destroy network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.191678 kubelet[3642]: E0130 14:04:10.191618 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:10.192163 kubelet[3642]: E0130 14:04:10.191927 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25"} Jan 30 14:04:10.192163 kubelet[3642]: E0130 14:04:10.192088 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.192163 kubelet[3642]: E0130 14:04:10.192128 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-8tv4w" podUID="0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e" Jan 30 14:04:10.197947 containerd[2139]: time="2025-01-30T14:04:10.197861232Z" level=error msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" failed" error="failed to destroy network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.198556 kubelet[3642]: E0130 14:04:10.198248 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:10.198556 kubelet[3642]: E0130 14:04:10.198333 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b"} Jan 30 14:04:10.198556 kubelet[3642]: E0130 14:04:10.198395 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fbfb33fd-7135-4e07-94f5-b6878a9da2f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.198556 kubelet[3642]: E0130 14:04:10.198437 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fbfb33fd-7135-4e07-94f5-b6878a9da2f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" podUID="fbfb33fd-7135-4e07-94f5-b6878a9da2f6" Jan 30 14:04:10.204360 containerd[2139]: time="2025-01-30T14:04:10.204073308Z" level=error msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" failed" error="failed to destroy network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 14:04:10.205188 kubelet[3642]: E0130 14:04:10.205052 3642 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:10.205188 kubelet[3642]: E0130 14:04:10.205132 3642 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d"} Jan 30 14:04:10.205379 kubelet[3642]: E0130 14:04:10.205187 3642 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac1e96c2-de99-4017-b353-ed7792f81a20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 14:04:10.205379 kubelet[3642]: E0130 14:04:10.205227 3642 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac1e96c2-de99-4017-b353-ed7792f81a20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" podUID="ac1e96c2-de99-4017-b353-ed7792f81a20" Jan 30 14:04:13.706373 kubelet[3642]: I0130 14:04:13.706308 3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:15.336573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486056057.mount: Deactivated successfully. 
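kubelet's StopPodSandbox retries above are CRI calls made over gRPC to containerd. A sketch of that round trip, assuming the v1 CRI API and containerd's default socket path (neither is shown in these logs):

    // stop_sandbox.go - a sketch of the CRI StopPodSandbox round trip kubelet
    // retries above; assumes the v1 CRI API and containerd's default socket
    // path, both assumptions rather than log content.
    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // The sandbox IDs above were passed exactly like this; the delete fails
        // while the Calico plugin cannot stat /var/lib/calico/nodename, and
        // kubelet keeps retrying until calico/node is up.
        _, err = client.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{
            PodSandboxId: "9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07",
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, "StopPodSandbox failed:", err)
            os.Exit(1)
        }
    }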
Jan 30 14:04:15.406003 containerd[2139]: time="2025-01-30T14:04:15.405930522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:15.407553 containerd[2139]: time="2025-01-30T14:04:15.407450694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 30 14:04:15.410290 containerd[2139]: time="2025-01-30T14:04:15.410195526Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:15.418682 containerd[2139]: time="2025-01-30T14:04:15.417542166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:15.422018 containerd[2139]: time="2025-01-30T14:04:15.421952850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.462389852s" Jan 30 14:04:15.422371 containerd[2139]: time="2025-01-30T14:04:15.422023818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 30 14:04:15.452812 containerd[2139]: time="2025-01-30T14:04:15.452464014Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 14:04:15.483008 containerd[2139]: time="2025-01-30T14:04:15.482837262Z" level=info msg="CreateContainer within sandbox \"ea05faad0669ab8b845cbdb67eea49259e4c2440ed970e1a52e7f9e00f7ee429\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2f8c1c4b537cd233de07ad651ce958a2d2116e9ee32d1168fc0357d0e2f52acb\"" Jan 30 14:04:15.485619 containerd[2139]: time="2025-01-30T14:04:15.484183134Z" level=info msg="StartContainer for \"2f8c1c4b537cd233de07ad651ce958a2d2116e9ee32d1168fc0357d0e2f52acb\"" Jan 30 14:04:15.583200 containerd[2139]: time="2025-01-30T14:04:15.583028503Z" level=info msg="StartContainer for \"2f8c1c4b537cd233de07ad651ce958a2d2116e9ee32d1168fc0357d0e2f52acb\" returns successfully" Jan 30 14:04:15.720933 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 14:04:15.721336 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
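The two wireguard kernel lines above coincide with calico-node starting, which probes for WireGuard support (used by Calico's optional encryption) and thereby loads the module; that causal link is an inference, not something the log states. A sketch that checks for the module via the standard sysfs layout, which is itself an assumption rather than log content:

    // wg_module_check.go - a sketch that checks whether the wireguard module
    // reported by the kernel lines above is loaded, via /sys/module/<name>.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        if _, err := os.Stat("/sys/module/wireguard"); err != nil {
            fmt.Fprintln(os.Stderr, "wireguard module not loaded:", err)
            os.Exit(1)
        }
        // Loaded modules that declare a version expose it as a sysfs attribute.
        if version, err := os.ReadFile("/sys/module/wireguard/version"); err == nil {
            fmt.Printf("wireguard module loaded, version %s", version)
            return
        }
        fmt.Println("wireguard module loaded")
    }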
Jan 30 14:04:16.101488 kubelet[3642]: I0130 14:04:16.100318 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bwzrs" podStartSLOduration=1.9799693120000001 podStartE2EDuration="19.100294193s" podCreationTimestamp="2025-01-30 14:03:57 +0000 UTC" firstStartedPulling="2025-01-30 14:03:58.302696017 +0000 UTC m=+23.899173372" lastFinishedPulling="2025-01-30 14:04:15.42302091 +0000 UTC m=+41.019498253" observedRunningTime="2025-01-30 14:04:16.099559085 +0000 UTC m=+41.696036440" watchObservedRunningTime="2025-01-30 14:04:16.100294193 +0000 UTC m=+41.696771536" Jan 30 14:04:17.852515 kernel: bpftool[4966]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 14:04:18.160337 systemd-networkd[1682]: vxlan.calico: Link UP Jan 30 14:04:18.160351 systemd-networkd[1682]: vxlan.calico: Gained carrier Jan 30 14:04:18.160774 (udev-worker)[4782]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:04:18.208002 (udev-worker)[4783]: Network interface NamePolicy= disabled on kernel command line. Jan 30 14:04:19.981965 systemd[1]: Started sshd@7-172.31.23.237:22-139.178.89.65:50010.service - OpenSSH per-connection server daemon (139.178.89.65:50010). Jan 30 14:04:20.050751 systemd-networkd[1682]: vxlan.calico: Gained IPv6LL Jan 30 14:04:20.173111 sshd[5037]: Accepted publickey for core from 139.178.89.65 port 50010 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:20.176256 sshd[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:20.183966 systemd-logind[2109]: New session 8 of user core. Jan 30 14:04:20.190524 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 14:04:20.464768 sshd[5037]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:20.472322 systemd[1]: sshd@7-172.31.23.237:22-139.178.89.65:50010.service: Deactivated successfully. Jan 30 14:04:20.478218 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 14:04:20.478813 systemd-logind[2109]: Session 8 logged out. Waiting for processes to exit. Jan 30 14:04:20.482821 systemd-logind[2109]: Removed session 8. Jan 30 14:04:21.628039 containerd[2139]: time="2025-01-30T14:04:21.627928141Z" level=info msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.785 [INFO][5068] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.785 [INFO][5068] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" iface="eth0" netns="/var/run/netns/cni-8184b7cb-8103-584f-56b0-406bcdabc1fa" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.788 [INFO][5068] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" iface="eth0" netns="/var/run/netns/cni-8184b7cb-8103-584f-56b0-406bcdabc1fa" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.790 [INFO][5068] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" iface="eth0" netns="/var/run/netns/cni-8184b7cb-8103-584f-56b0-406bcdabc1fa" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.790 [INFO][5068] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.790 [INFO][5068] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.852 [INFO][5075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.853 [INFO][5075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.853 [INFO][5075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.871 [WARNING][5075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.871 [INFO][5075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.874 [INFO][5075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:21.885971 containerd[2139]: 2025-01-30 14:04:21.880 [INFO][5068] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:21.890681 containerd[2139]: time="2025-01-30T14:04:21.886780586Z" level=info msg="TearDown network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" successfully" Jan 30 14:04:21.890681 containerd[2139]: time="2025-01-30T14:04:21.886827386Z" level=info msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" returns successfully" Jan 30 14:04:21.890972 systemd[1]: run-netns-cni\x2d8184b7cb\x2d8103\x2d584f\x2d56b0\x2d406bcdabc1fa.mount: Deactivated successfully. Jan 30 14:04:21.900839 containerd[2139]: time="2025-01-30T14:04:21.900769562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-9l7bs,Uid:ac1e96c2-de99-4017-b353-ed7792f81a20,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:04:22.176315 systemd-networkd[1682]: cali7cd49c93158: Link UP Jan 30 14:04:22.176835 systemd-networkd[1682]: cali7cd49c93158: Gained carrier Jan 30 14:04:22.184547 (udev-worker)[5101]: Network interface NamePolicy= disabled on kernel command line. 
Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.021 [INFO][5082] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0 calico-apiserver-5979bcf7dd- calico-apiserver ac1e96c2-de99-4017-b353-ed7792f81a20 821 0 2025-01-30 14:03:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5979bcf7dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-237 calico-apiserver-5979bcf7dd-9l7bs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7cd49c93158 [] []}} ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.022 [INFO][5082] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.095 [INFO][5093] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" HandleID="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.113 [INFO][5093] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" HandleID="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003f6fa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-237", "pod":"calico-apiserver-5979bcf7dd-9l7bs", "timestamp":"2025-01-30 14:04:22.095251403 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.114 [INFO][5093] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.114 [INFO][5093] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.114 [INFO][5093] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.117 [INFO][5093] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.125 [INFO][5093] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.135 [INFO][5093] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.139 [INFO][5093] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.143 [INFO][5093] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.143 [INFO][5093] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.146 [INFO][5093] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.152 [INFO][5093] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.164 [INFO][5093] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.1/26] block=192.168.15.0/26 handle="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.164 [INFO][5093] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.1/26] handle="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" host="ip-172-31-23-237" Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.164 [INFO][5093] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
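The IPAM sequence above is Calico's two-level scheme, visible directly in the entries: the node holds an affinity to a /26 block (192.168.15.0/26, 64 addresses) carved out of the pool, and individual pod addresses are then claimed from that block under the host-wide IPAM lock, which is why this first workload receives 192.168.15.1/32. The block arithmetic, as a small sketch:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.15.0/26") // node-affine block from the log
        pod := netip.MustParseAddr("192.168.15.1")        // first address claimed

        fmt.Println(block.Contains(pod))      // true
        fmt.Println(1 << (32 - block.Bits())) // 64 addresses per /26 block
    }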
Jan 30 14:04:22.216794 containerd[2139]: 2025-01-30 14:04:22.164 [INFO][5093] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.1/26] IPv6=[] ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" HandleID="k8s-pod-network.991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.168 [INFO][5082] cni-plugin/k8s.go 386: Populated endpoint ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac1e96c2-de99-4017-b353-ed7792f81a20", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"calico-apiserver-5979bcf7dd-9l7bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd49c93158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.168 [INFO][5082] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.1/32] ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.168 [INFO][5082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cd49c93158 ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.177 [INFO][5082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.179 [INFO][5082] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac1e96c2-de99-4017-b353-ed7792f81a20", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac", Pod:"calico-apiserver-5979bcf7dd-9l7bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd49c93158", MAC:"ca:13:a9:60:78:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:22.218897 containerd[2139]: 2025-01-30 14:04:22.204 [INFO][5082] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-9l7bs" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:22.258561 containerd[2139]: time="2025-01-30T14:04:22.257779872Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:22.258962 containerd[2139]: time="2025-01-30T14:04:22.258843696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:22.259162 containerd[2139]: time="2025-01-30T14:04:22.258939936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:22.259656 containerd[2139]: time="2025-01-30T14:04:22.259568964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:22.355984 containerd[2139]: time="2025-01-30T14:04:22.355934316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-9l7bs,Uid:ac1e96c2-de99-4017-b353-ed7792f81a20,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac\"" Jan 30 14:04:22.368875 containerd[2139]: time="2025-01-30T14:04:22.368766828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:04:22.628493 containerd[2139]: time="2025-01-30T14:04:22.628381214Z" level=info msg="StopPodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.727 [INFO][5169] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.727 [INFO][5169] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" iface="eth0" netns="/var/run/netns/cni-4f7e075c-fb39-fe6b-6945-ff569c98218b" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.728 [INFO][5169] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" iface="eth0" netns="/var/run/netns/cni-4f7e075c-fb39-fe6b-6945-ff569c98218b" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.730 [INFO][5169] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" iface="eth0" netns="/var/run/netns/cni-4f7e075c-fb39-fe6b-6945-ff569c98218b" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.731 [INFO][5169] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.731 [INFO][5169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.770 [INFO][5175] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.771 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.771 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.785 [WARNING][5175] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.785 [INFO][5175] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.787 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:22.793040 containerd[2139]: 2025-01-30 14:04:22.790 [INFO][5169] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:22.794133 containerd[2139]: time="2025-01-30T14:04:22.793336694Z" level=info msg="TearDown network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" successfully" Jan 30 14:04:22.794133 containerd[2139]: time="2025-01-30T14:04:22.793408106Z" level=info msg="StopPodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" returns successfully" Jan 30 14:04:22.794869 containerd[2139]: time="2025-01-30T14:04:22.794522246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-sqhjj,Uid:ba1c580d-2c05-4884-a289-a60900c388b5,Namespace:calico-apiserver,Attempt:1,}" Jan 30 14:04:22.905773 systemd[1]: run-netns-cni\x2d4f7e075c\x2dfb39\x2dfe6b\x2d6945\x2dff569c98218b.mount: Deactivated successfully. 
Jan 30 14:04:23.041460 systemd-networkd[1682]: cali8796e0b155d: Link UP Jan 30 14:04:23.044916 systemd-networkd[1682]: cali8796e0b155d: Gained carrier Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.883 [INFO][5182] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0 calico-apiserver-5979bcf7dd- calico-apiserver ba1c580d-2c05-4884-a289-a60900c388b5 830 0 2025-01-30 14:03:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5979bcf7dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-237 calico-apiserver-5979bcf7dd-sqhjj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8796e0b155d [] []}} ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.884 [INFO][5182] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.958 [INFO][5192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" HandleID="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.976 [INFO][5192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" HandleID="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-237", "pod":"calico-apiserver-5979bcf7dd-sqhjj", "timestamp":"2025-01-30 14:04:22.958751175 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.977 [INFO][5192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.977 [INFO][5192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.977 [INFO][5192] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.982 [INFO][5192] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:22.993 [INFO][5192] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.000 [INFO][5192] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.003 [INFO][5192] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.007 [INFO][5192] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.007 [INFO][5192] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.009 [INFO][5192] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.018 [INFO][5192] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.027 [INFO][5192] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.2/26] block=192.168.15.0/26 handle="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.027 [INFO][5192] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.2/26] handle="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" host="ip-172-31-23-237" Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.028 [INFO][5192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
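On the host side each of these endpoints is an ordinary kernel interface: vxlan.calico carries the overlay, and every workload gets its own caliXXXX veth (cali7cd49c93158 and cali8796e0b155d above), which systemd-networkd then tracks as it gains carrier and an IPv6 link-local address. A sketch that enumerates them with the vishvananda/netlink package (an assumption that it is available to build against, though it is the library Calico's Linux dataplane itself uses):

    package main

    import (
        "fmt"
        "strings"

        "github.com/vishvananda/netlink"
    )

    func main() {
        links, err := netlink.LinkList()
        if err != nil {
            panic(err)
        }
        for _, l := range links {
            name := l.Attrs().Name
            if strings.HasPrefix(name, "cali") || name == "vxlan.calico" {
                fmt.Println(name, l.Type(), l.Attrs().OperState)
            }
        }
    }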
Jan 30 14:04:23.080999 containerd[2139]: 2025-01-30 14:04:23.028 [INFO][5192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.2/26] IPv6=[] ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" HandleID="k8s-pod-network.8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.032 [INFO][5182] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba1c580d-2c05-4884-a289-a60900c388b5", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"calico-apiserver-5979bcf7dd-sqhjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796e0b155d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.033 [INFO][5182] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.2/32] ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.033 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8796e0b155d ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.046 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.047 [INFO][5182] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba1c580d-2c05-4884-a289-a60900c388b5", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e", Pod:"calico-apiserver-5979bcf7dd-sqhjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796e0b155d", MAC:"d6:8b:48:95:25:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:23.084947 containerd[2139]: 2025-01-30 14:04:23.070 [INFO][5182] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e" Namespace="calico-apiserver" Pod="calico-apiserver-5979bcf7dd-sqhjj" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:23.124946 containerd[2139]: time="2025-01-30T14:04:23.124766976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:23.125325 containerd[2139]: time="2025-01-30T14:04:23.124901940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:23.125325 containerd[2139]: time="2025-01-30T14:04:23.124930020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:23.125655 containerd[2139]: time="2025-01-30T14:04:23.125251104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:23.186747 systemd-networkd[1682]: cali7cd49c93158: Gained IPv6LL Jan 30 14:04:23.237839 containerd[2139]: time="2025-01-30T14:04:23.237743497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5979bcf7dd-sqhjj,Uid:ba1c580d-2c05-4884-a289-a60900c388b5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e\"" Jan 30 14:04:23.630056 containerd[2139]: time="2025-01-30T14:04:23.628269939Z" level=info msg="StopPodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.793 [INFO][5269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.795 [INFO][5269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" iface="eth0" netns="/var/run/netns/cni-e308eb87-886b-0fe7-2ca1-c5570fbbc00a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.797 [INFO][5269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" iface="eth0" netns="/var/run/netns/cni-e308eb87-886b-0fe7-2ca1-c5570fbbc00a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.798 [INFO][5269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" iface="eth0" netns="/var/run/netns/cni-e308eb87-886b-0fe7-2ca1-c5570fbbc00a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.798 [INFO][5269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.801 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.937 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.938 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.938 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.951 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.951 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.953 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:23.960265 containerd[2139]: 2025-01-30 14:04:23.956 [INFO][5269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:23.965639 containerd[2139]: time="2025-01-30T14:04:23.960670408Z" level=info msg="TearDown network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" successfully" Jan 30 14:04:23.965639 containerd[2139]: time="2025-01-30T14:04:23.960714808Z" level=info msg="StopPodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" returns successfully" Jan 30 14:04:23.965639 containerd[2139]: time="2025-01-30T14:04:23.963882472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n6wgq,Uid:42c1d2cd-78ec-4529-88c1-93534af9989c,Namespace:kube-system,Attempt:1,}" Jan 30 14:04:23.968986 systemd[1]: run-netns-cni\x2de308eb87\x2d886b\x2d0fe7\x2d2ca1\x2dc5570fbbc00a.mount: Deactivated successfully. Jan 30 14:04:24.210653 systemd-networkd[1682]: cali8796e0b155d: Gained IPv6LL Jan 30 14:04:24.311509 systemd-networkd[1682]: calic68bcb08305: Link UP Jan 30 14:04:24.313091 systemd-networkd[1682]: calic68bcb08305: Gained carrier Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.099 [INFO][5286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0 coredns-7db6d8ff4d- kube-system 42c1d2cd-78ec-4529-88c1-93534af9989c 843 0 2025-01-30 14:03:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-237 coredns-7db6d8ff4d-n6wgq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic68bcb08305 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.101 [INFO][5286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.201 [INFO][5299] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" 
HandleID="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.227 [INFO][5299] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" HandleID="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002793b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-237", "pod":"coredns-7db6d8ff4d-n6wgq", "timestamp":"2025-01-30 14:04:24.201973897 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.228 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.228 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.229 [INFO][5299] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.234 [INFO][5299] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.246 [INFO][5299] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.256 [INFO][5299] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.260 [INFO][5299] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.265 [INFO][5299] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.265 [INFO][5299] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.269 [INFO][5299] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.278 [INFO][5299] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.295 [INFO][5299] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.3/26] block=192.168.15.0/26 handle="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.296 [INFO][5299] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.3/26] 
handle="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" host="ip-172-31-23-237" Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.296 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:24.363551 containerd[2139]: 2025-01-30 14:04:24.296 [INFO][5299] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.3/26] IPv6=[] ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" HandleID="k8s-pod-network.96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.301 [INFO][5286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42c1d2cd-78ec-4529-88c1-93534af9989c", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"coredns-7db6d8ff4d-n6wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68bcb08305", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.302 [INFO][5286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.3/32] ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.303 [INFO][5286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic68bcb08305 ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.317 [INFO][5286] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.318 [INFO][5286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42c1d2cd-78ec-4529-88c1-93534af9989c", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d", Pod:"coredns-7db6d8ff4d-n6wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68bcb08305", MAC:"de:75:d0:ce:f1:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:24.364756 containerd[2139]: 2025-01-30 14:04:24.339 [INFO][5286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-n6wgq" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:24.423948 containerd[2139]: time="2025-01-30T14:04:24.422065802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:24.423948 containerd[2139]: time="2025-01-30T14:04:24.422174450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:24.423948 containerd[2139]: time="2025-01-30T14:04:24.422217602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:24.423948 containerd[2139]: time="2025-01-30T14:04:24.422443370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:24.567658 containerd[2139]: time="2025-01-30T14:04:24.567353487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-n6wgq,Uid:42c1d2cd-78ec-4529-88c1-93534af9989c,Namespace:kube-system,Attempt:1,} returns sandbox id \"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d\"" Jan 30 14:04:24.606984 containerd[2139]: time="2025-01-30T14:04:24.606790071Z" level=info msg="CreateContainer within sandbox \"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:04:24.631389 containerd[2139]: time="2025-01-30T14:04:24.631334248Z" level=info msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" Jan 30 14:04:24.635433 containerd[2139]: time="2025-01-30T14:04:24.635367856Z" level=info msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" Jan 30 14:04:24.642551 containerd[2139]: time="2025-01-30T14:04:24.642462616Z" level=info msg="CreateContainer within sandbox \"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5c3707d9a697216ff15c1cf5f9160e1afd1d49e6d639d2c460646eb9302a856e\"" Jan 30 14:04:24.648239 containerd[2139]: time="2025-01-30T14:04:24.644975188Z" level=info msg="StartContainer for \"5c3707d9a697216ff15c1cf5f9160e1afd1d49e6d639d2c460646eb9302a856e\"" Jan 30 14:04:24.895823 containerd[2139]: time="2025-01-30T14:04:24.895176365Z" level=info msg="StartContainer for \"5c3707d9a697216ff15c1cf5f9160e1afd1d49e6d639d2c460646eb9302a856e\" returns successfully" Jan 30 14:04:25.156107 kubelet[3642]: I0130 14:04:25.155517 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-n6wgq" podStartSLOduration=36.15546659 podStartE2EDuration="36.15546659s" podCreationTimestamp="2025-01-30 14:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:25.15202955 +0000 UTC m=+50.748506917" watchObservedRunningTime="2025-01-30 14:04:25.15546659 +0000 UTC m=+50.751944041" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:24.997 [INFO][5381] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:24.998 [INFO][5381] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" iface="eth0" netns="/var/run/netns/cni-8f4d09a6-4be2-2137-06ee-ca4539224e13" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.003 [INFO][5381] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" iface="eth0" netns="/var/run/netns/cni-8f4d09a6-4be2-2137-06ee-ca4539224e13" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.003 [INFO][5381] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" iface="eth0" netns="/var/run/netns/cni-8f4d09a6-4be2-2137-06ee-ca4539224e13" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.003 [INFO][5381] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.003 [INFO][5381] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.159 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.160 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.162 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.240 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.240 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.246 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:25.271242 containerd[2139]: 2025-01-30 14:04:25.261 [INFO][5381] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:25.275407 containerd[2139]: time="2025-01-30T14:04:25.271462707Z" level=info msg="TearDown network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" successfully" Jan 30 14:04:25.275407 containerd[2139]: time="2025-01-30T14:04:25.271608783Z" level=info msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" returns successfully" Jan 30 14:04:25.277030 containerd[2139]: time="2025-01-30T14:04:25.276385371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d545d88fb-6s48d,Uid:fbfb33fd-7135-4e07-94f5-b6878a9da2f6,Namespace:calico-system,Attempt:1,}" Jan 30 14:04:25.283009 systemd[1]: run-netns-cni\x2d8f4d09a6\x2d4be2\x2d2137\x2d06ee\x2dca4539224e13.mount: Deactivated successfully. Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.003 [INFO][5395] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.004 [INFO][5395] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" iface="eth0" netns="/var/run/netns/cni-f91b9131-32af-f3c8-2db3-549c4a23d819" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.006 [INFO][5395] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" iface="eth0" netns="/var/run/netns/cni-f91b9131-32af-f3c8-2db3-549c4a23d819" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.012 [INFO][5395] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" iface="eth0" netns="/var/run/netns/cni-f91b9131-32af-f3c8-2db3-549c4a23d819" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.013 [INFO][5395] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.014 [INFO][5395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.282 [INFO][5442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.282 [INFO][5442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.282 [INFO][5442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.308 [WARNING][5442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.308 [INFO][5442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.312 [INFO][5442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:25.378791 containerd[2139]: 2025-01-30 14:04:25.355 [INFO][5395] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:25.384275 containerd[2139]: time="2025-01-30T14:04:25.379208319Z" level=info msg="TearDown network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" successfully" Jan 30 14:04:25.384275 containerd[2139]: time="2025-01-30T14:04:25.379250967Z" level=info msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" returns successfully" Jan 30 14:04:25.386285 systemd[1]: run-netns-cni\x2df91b9131\x2d32af\x2df3c8\x2d2db3\x2d549c4a23d819.mount: Deactivated successfully. 
Jan 30 14:04:25.387964 containerd[2139]: time="2025-01-30T14:04:25.386658147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8tv4w,Uid:0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e,Namespace:kube-system,Attempt:1,}" Jan 30 14:04:25.500657 systemd[1]: Started sshd@8-172.31.23.237:22-139.178.89.65:57786.service - OpenSSH per-connection server daemon (139.178.89.65:57786). Jan 30 14:04:25.631585 containerd[2139]: time="2025-01-30T14:04:25.631513156Z" level=info msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" Jan 30 14:04:25.730019 sshd[5480]: Accepted publickey for core from 139.178.89.65 port 57786 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:25.735841 sshd[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:25.760847 systemd-logind[2109]: New session 9 of user core. Jan 30 14:04:25.768696 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 14:04:25.916938 systemd-networkd[1682]: calif4db88f85da: Link UP Jan 30 14:04:25.918447 systemd-networkd[1682]: calif4db88f85da: Gained carrier Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.479 [INFO][5456] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0 calico-kube-controllers-5d545d88fb- calico-system fbfb33fd-7135-4e07-94f5-b6878a9da2f6 858 0 2025-01-30 14:03:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d545d88fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-237 calico-kube-controllers-5d545d88fb-6s48d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif4db88f85da [] []}} ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.480 [INFO][5456] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.656 [INFO][5481] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" HandleID="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.691 [INFO][5481] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" HandleID="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c360), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-237", 
"pod":"calico-kube-controllers-5d545d88fb-6s48d", "timestamp":"2025-01-30 14:04:25.656325089 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.691 [INFO][5481] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.692 [INFO][5481] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.692 [INFO][5481] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.699 [INFO][5481] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.712 [INFO][5481] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.726 [INFO][5481] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.732 [INFO][5481] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.750 [INFO][5481] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.750 [INFO][5481] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.758 [INFO][5481] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9 Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.781 [INFO][5481] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.812 [INFO][5481] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.4/26] block=192.168.15.0/26 handle="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.812 [INFO][5481] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.4/26] handle="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" host="ip-172-31-23-237" Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.812 [INFO][5481] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:04:25.994843 containerd[2139]: 2025-01-30 14:04:25.815 [INFO][5481] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.4/26] IPv6=[] ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" HandleID="k8s-pod-network.69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.850 [INFO][5456] cni-plugin/k8s.go 386: Populated endpoint ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0", GenerateName:"calico-kube-controllers-5d545d88fb-", Namespace:"calico-system", SelfLink:"", UID:"fbfb33fd-7135-4e07-94f5-b6878a9da2f6", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d545d88fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"calico-kube-controllers-5d545d88fb-6s48d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4db88f85da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.851 [INFO][5456] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.4/32] ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.851 [INFO][5456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif4db88f85da ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.904 [INFO][5456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.905 [INFO][5456] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0", GenerateName:"calico-kube-controllers-5d545d88fb-", Namespace:"calico-system", SelfLink:"", UID:"fbfb33fd-7135-4e07-94f5-b6878a9da2f6", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d545d88fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9", Pod:"calico-kube-controllers-5d545d88fb-6s48d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4db88f85da", MAC:"02:ef:2a:f1:cc:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:25.998322 containerd[2139]: 2025-01-30 14:04:25.957 [INFO][5456] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9" Namespace="calico-system" Pod="calico-kube-controllers-5d545d88fb-6s48d" WorkloadEndpoint="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:26.066852 systemd-networkd[1682]: calic68bcb08305: Gained IPv6LL Jan 30 14:04:26.171657 systemd-networkd[1682]: calid70a33a9691: Link UP Jan 30 14:04:26.172047 systemd-networkd[1682]: calid70a33a9691: Gained carrier Jan 30 14:04:26.283535 containerd[2139]: time="2025-01-30T14:04:26.266915728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:26.283535 containerd[2139]: time="2025-01-30T14:04:26.267014476Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:26.283535 containerd[2139]: time="2025-01-30T14:04:26.267050320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:26.283535 containerd[2139]: time="2025-01-30T14:04:26.267231220Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.614 [INFO][5469] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0 coredns-7db6d8ff4d- kube-system 0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e 859 0 2025-01-30 14:03:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-237 coredns-7db6d8ff4d-8tv4w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid70a33a9691 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.614 [INFO][5469] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.872 [INFO][5497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" HandleID="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.993 [INFO][5497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" HandleID="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-237", "pod":"coredns-7db6d8ff4d-8tv4w", "timestamp":"2025-01-30 14:04:25.87270393 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.993 [INFO][5497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.993 [INFO][5497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:25.993 [INFO][5497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.011 [INFO][5497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.030 [INFO][5497] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.059 [INFO][5497] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.073 [INFO][5497] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.081 [INFO][5497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.081 [INFO][5497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.086 [INFO][5497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.104 [INFO][5497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.125 [INFO][5497] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.5/26] block=192.168.15.0/26 handle="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.125 [INFO][5497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.5/26] handle="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" host="ip-172-31-23-237" Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.125 [INFO][5497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:04:26.283535 containerd[2139]: 2025-01-30 14:04:26.125 [INFO][5497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.5/26] IPv6=[] ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" HandleID="k8s-pod-network.46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.148 [INFO][5469] cni-plugin/k8s.go 386: Populated endpoint ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"coredns-7db6d8ff4d-8tv4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid70a33a9691", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.148 [INFO][5469] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.5/32] ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.148 [INFO][5469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid70a33a9691 ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.177 [INFO][5469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" 
Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.198 [INFO][5469] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd", Pod:"coredns-7db6d8ff4d-8tv4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid70a33a9691", MAC:"c2:b3:c6:5d:43:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:26.285029 containerd[2139]: 2025-01-30 14:04:26.231 [INFO][5469] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-8tv4w" WorkloadEndpoint="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:26.298545 sshd[5480]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:26.312444 systemd[1]: sshd@8-172.31.23.237:22-139.178.89.65:57786.service: Deactivated successfully. Jan 30 14:04:26.324956 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 14:04:26.331440 systemd-logind[2109]: Session 9 logged out. Waiting for processes to exit. Jan 30 14:04:26.336658 systemd-logind[2109]: Removed session 9. Jan 30 14:04:26.443442 containerd[2139]: time="2025-01-30T14:04:26.438848068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:26.443442 containerd[2139]: time="2025-01-30T14:04:26.440257060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:26.443442 containerd[2139]: time="2025-01-30T14:04:26.440324104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:26.443442 containerd[2139]: time="2025-01-30T14:04:26.440530384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.981 [INFO][5508] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.984 [INFO][5508] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" iface="eth0" netns="/var/run/netns/cni-a5b1f2ff-e6c7-2385-32aa-cc906e853e0d" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.988 [INFO][5508] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" iface="eth0" netns="/var/run/netns/cni-a5b1f2ff-e6c7-2385-32aa-cc906e853e0d" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.991 [INFO][5508] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" iface="eth0" netns="/var/run/netns/cni-a5b1f2ff-e6c7-2385-32aa-cc906e853e0d" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.991 [INFO][5508] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:25.991 [INFO][5508] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.364 [INFO][5528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.366 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.366 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.410 [WARNING][5528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.410 [INFO][5528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.414 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:26.469316 containerd[2139]: 2025-01-30 14:04:26.426 [INFO][5508] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:26.473975 containerd[2139]: time="2025-01-30T14:04:26.470005853Z" level=info msg="TearDown network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" successfully" Jan 30 14:04:26.473975 containerd[2139]: time="2025-01-30T14:04:26.470052473Z" level=info msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" returns successfully" Jan 30 14:04:26.479257 containerd[2139]: time="2025-01-30T14:04:26.475301549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nwm6,Uid:6a54a6e6-ac55-4daa-ba3e-bb307511354c,Namespace:calico-system,Attempt:1,}" Jan 30 14:04:26.499597 systemd[1]: run-netns-cni\x2da5b1f2ff\x2de6c7\x2d2385\x2d32aa\x2dcc906e853e0d.mount: Deactivated successfully. Jan 30 14:04:26.586259 containerd[2139]: time="2025-01-30T14:04:26.586059341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d545d88fb-6s48d,Uid:fbfb33fd-7135-4e07-94f5-b6878a9da2f6,Namespace:calico-system,Attempt:1,} returns sandbox id \"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9\"" Jan 30 14:04:26.612957 containerd[2139]: time="2025-01-30T14:04:26.612437981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-8tv4w,Uid:0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e,Namespace:kube-system,Attempt:1,} returns sandbox id \"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd\"" Jan 30 14:04:26.621491 containerd[2139]: time="2025-01-30T14:04:26.621257141Z" level=info msg="CreateContainer within sandbox \"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 14:04:26.690227 containerd[2139]: time="2025-01-30T14:04:26.690149310Z" level=info msg="CreateContainer within sandbox \"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"72aae540e2290d208cfb7d247d5a01b5da6e1ad51d92e3bb9b13de04790cc651\"" Jan 30 14:04:26.694707 containerd[2139]: time="2025-01-30T14:04:26.693805362Z" level=info msg="StartContainer for \"72aae540e2290d208cfb7d247d5a01b5da6e1ad51d92e3bb9b13de04790cc651\"" Jan 30 14:04:27.077309 containerd[2139]: time="2025-01-30T14:04:27.076905412Z" level=info msg="StartContainer for \"72aae540e2290d208cfb7d247d5a01b5da6e1ad51d92e3bb9b13de04790cc651\" returns successfully" Jan 30 14:04:27.227777 systemd-networkd[1682]: cali7a13f9601c9: Link UP Jan 30 14:04:27.232887 systemd-networkd[1682]: cali7a13f9601c9: Gained carrier Jan 30 
14:04:27.278344 kubelet[3642]: I0130 14:04:27.277322 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-8tv4w" podStartSLOduration=38.277299185 podStartE2EDuration="38.277299185s" podCreationTimestamp="2025-01-30 14:03:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:04:27.262877957 +0000 UTC m=+52.859355312" watchObservedRunningTime="2025-01-30 14:04:27.277299185 +0000 UTC m=+52.873776528" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:26.775 [INFO][5635] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0 csi-node-driver- calico-system 6a54a6e6-ac55-4daa-ba3e-bb307511354c 877 0 2025-01-30 14:03:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-237 csi-node-driver-9nwm6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a13f9601c9 [] []}} ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:26.776 [INFO][5635] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:26.984 [INFO][5680] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" HandleID="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.049 [INFO][5680] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" HandleID="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cee60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-237", "pod":"csi-node-driver-9nwm6", "timestamp":"2025-01-30 14:04:26.984762511 +0000 UTC"}, Hostname:"ip-172-31-23-237", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.049 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.050 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.050 [INFO][5680] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-237' Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.055 [INFO][5680] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.094 [INFO][5680] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.114 [INFO][5680] ipam/ipam.go 489: Trying affinity for 192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.121 [INFO][5680] ipam/ipam.go 155: Attempting to load block cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.128 [INFO][5680] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.15.0/26 host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.128 [INFO][5680] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.15.0/26 handle="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.134 [INFO][5680] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958 Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.154 [INFO][5680] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.15.0/26 handle="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.192 [INFO][5680] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.15.6/26] block=192.168.15.0/26 handle="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.195 [INFO][5680] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.15.6/26] handle="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" host="ip-172-31-23-237" Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.195 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 14:04:27.300516 containerd[2139]: 2025-01-30 14:04:27.195 [INFO][5680] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.15.6/26] IPv6=[] ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" HandleID="k8s-pod-network.bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.202 [INFO][5635] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a54a6e6-ac55-4daa-ba3e-bb307511354c", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"", Pod:"csi-node-driver-9nwm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a13f9601c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.202 [INFO][5635] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.15.6/32] ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.202 [INFO][5635] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a13f9601c9 ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.238 [INFO][5635] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.246 [INFO][5635] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" 
Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a54a6e6-ac55-4daa-ba3e-bb307511354c", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958", Pod:"csi-node-driver-9nwm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a13f9601c9", MAC:"fe:04:54:c7:91:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:27.302778 containerd[2139]: 2025-01-30 14:04:27.288 [INFO][5635] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958" Namespace="calico-system" Pod="csi-node-driver-9nwm6" WorkloadEndpoint="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:27.376949 containerd[2139]: time="2025-01-30T14:04:27.374639741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:27.383425 containerd[2139]: time="2025-01-30T14:04:27.383357309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 30 14:04:27.387649 containerd[2139]: time="2025-01-30T14:04:27.387589601Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:27.396730 containerd[2139]: time="2025-01-30T14:04:27.396675989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:27.403736 containerd[2139]: time="2025-01-30T14:04:27.403678265Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 5.034849097s" Jan 30 14:04:27.406584 containerd[2139]: time="2025-01-30T14:04:27.406528409Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:04:27.411333 containerd[2139]: time="2025-01-30T14:04:27.411018809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 14:04:27.421284 containerd[2139]: time="2025-01-30T14:04:27.421153325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:04:27.423168 containerd[2139]: time="2025-01-30T14:04:27.422938529Z" level=info msg="CreateContainer within sandbox \"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:04:27.423168 containerd[2139]: time="2025-01-30T14:04:27.422057549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:04:27.423168 containerd[2139]: time="2025-01-30T14:04:27.422129537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:27.423168 containerd[2139]: time="2025-01-30T14:04:27.422386373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:04:27.476790 containerd[2139]: time="2025-01-30T14:04:27.476386758Z" level=info msg="CreateContainer within sandbox \"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bff06611e7fadb465fd74fb105ed8006ffca9f368fcdd55710750150e7ccaaa6\"" Jan 30 14:04:27.479375 containerd[2139]: time="2025-01-30T14:04:27.478506306Z" level=info msg="StartContainer for \"bff06611e7fadb465fd74fb105ed8006ffca9f368fcdd55710750150e7ccaaa6\"" Jan 30 14:04:27.604836 systemd-networkd[1682]: calif4db88f85da: Gained IPv6LL Jan 30 14:04:27.607347 systemd-networkd[1682]: calid70a33a9691: Gained IPv6LL Jan 30 14:04:27.658542 containerd[2139]: time="2025-01-30T14:04:27.657597943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9nwm6,Uid:6a54a6e6-ac55-4daa-ba3e-bb307511354c,Namespace:calico-system,Attempt:1,} returns sandbox id \"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958\"" Jan 30 14:04:27.762527 containerd[2139]: time="2025-01-30T14:04:27.762439459Z" level=info msg="StartContainer for \"bff06611e7fadb465fd74fb105ed8006ffca9f368fcdd55710750150e7ccaaa6\" returns successfully" Jan 30 14:04:27.786975 containerd[2139]: time="2025-01-30T14:04:27.786555463Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:27.789837 containerd[2139]: time="2025-01-30T14:04:27.789783523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 14:04:27.796562 containerd[2139]: time="2025-01-30T14:04:27.796502995Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 385.395254ms" Jan 30 14:04:27.797038 containerd[2139]: time="2025-01-30T14:04:27.796748119Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 30 14:04:27.801109 containerd[2139]: time="2025-01-30T14:04:27.798776767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 14:04:27.805889 containerd[2139]: time="2025-01-30T14:04:27.805827259Z" level=info msg="CreateContainer within sandbox \"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 14:04:27.833016 containerd[2139]: time="2025-01-30T14:04:27.832951015Z" level=info msg="CreateContainer within sandbox \"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fbb7455430f23cea8941e0303c1927985ccdb96ec68fc32ed37cdee968f41e45\"" Jan 30 14:04:27.834007 containerd[2139]: time="2025-01-30T14:04:27.833953363Z" level=info msg="StartContainer for \"fbb7455430f23cea8941e0303c1927985ccdb96ec68fc32ed37cdee968f41e45\"" Jan 30 14:04:28.004958 containerd[2139]: time="2025-01-30T14:04:28.003444376Z" level=info msg="StartContainer for \"fbb7455430f23cea8941e0303c1927985ccdb96ec68fc32ed37cdee968f41e45\" returns successfully" Jan 30 14:04:28.277888 kubelet[3642]: I0130 14:04:28.277003 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5979bcf7dd-9l7bs" podStartSLOduration=27.225790921 podStartE2EDuration="32.276977574s" podCreationTimestamp="2025-01-30 14:03:56 +0000 UTC" firstStartedPulling="2025-01-30 14:04:22.358256676 +0000 UTC m=+47.954734019" lastFinishedPulling="2025-01-30 14:04:27.409443329 +0000 UTC m=+53.005920672" observedRunningTime="2025-01-30 14:04:28.276583698 +0000 UTC m=+53.873061065" watchObservedRunningTime="2025-01-30 14:04:28.276977574 +0000 UTC m=+53.873454929" Jan 30 14:04:28.306002 kubelet[3642]: I0130 14:04:28.305528 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5979bcf7dd-sqhjj" podStartSLOduration=27.747710632 podStartE2EDuration="32.305504622s" podCreationTimestamp="2025-01-30 14:03:56 +0000 UTC" firstStartedPulling="2025-01-30 14:04:23.240624181 +0000 UTC m=+48.837101524" lastFinishedPulling="2025-01-30 14:04:27.798418183 +0000 UTC m=+53.394895514" observedRunningTime="2025-01-30 14:04:28.305493726 +0000 UTC m=+53.901971093" watchObservedRunningTime="2025-01-30 14:04:28.305504622 +0000 UTC m=+53.901981989" Jan 30 14:04:29.207946 systemd-networkd[1682]: cali7a13f9601c9: Gained IPv6LL Jan 30 14:04:29.281532 kubelet[3642]: I0130 14:04:29.279991 3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:31.009530 containerd[2139]: time="2025-01-30T14:04:31.009439483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:31.014491 containerd[2139]: time="2025-01-30T14:04:31.014116147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 30 14:04:31.016387 containerd[2139]: time="2025-01-30T14:04:31.016332583Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:31.024084 containerd[2139]: 
time="2025-01-30T14:04:31.024016579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:31.027068 containerd[2139]: time="2025-01-30T14:04:31.027011131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.228175828s" Jan 30 14:04:31.027212 containerd[2139]: time="2025-01-30T14:04:31.027072619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 30 14:04:31.032850 containerd[2139]: time="2025-01-30T14:04:31.030055531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 14:04:31.103502 containerd[2139]: time="2025-01-30T14:04:31.103405736Z" level=info msg="CreateContainer within sandbox \"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 14:04:31.135983 containerd[2139]: time="2025-01-30T14:04:31.135906728Z" level=info msg="CreateContainer within sandbox \"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e7a246b210c187d05dd2ededbf1d54f556b8bcc07a8749d88e3df248bed948f8\"" Jan 30 14:04:31.137816 containerd[2139]: time="2025-01-30T14:04:31.137747636Z" level=info msg="StartContainer for \"e7a246b210c187d05dd2ededbf1d54f556b8bcc07a8749d88e3df248bed948f8\"" Jan 30 14:04:31.307068 containerd[2139]: time="2025-01-30T14:04:31.306160365Z" level=info msg="StartContainer for \"e7a246b210c187d05dd2ededbf1d54f556b8bcc07a8749d88e3df248bed948f8\" returns successfully" Jan 30 14:04:31.341392 systemd[1]: Started sshd@9-172.31.23.237:22-139.178.89.65:43352.service - OpenSSH per-connection server daemon (139.178.89.65:43352). Jan 30 14:04:31.573146 sshd[5891]: Accepted publickey for core from 139.178.89.65 port 43352 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:31.579234 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:31.588735 systemd-logind[2109]: New session 10 of user core. Jan 30 14:04:31.593028 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 14:04:31.887265 sshd[5891]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:31.895137 systemd-logind[2109]: Session 10 logged out. Waiting for processes to exit. Jan 30 14:04:31.898327 systemd[1]: sshd@9-172.31.23.237:22-139.178.89.65:43352.service: Deactivated successfully. Jan 30 14:04:31.904387 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 14:04:31.913882 systemd-logind[2109]: Removed session 10. Jan 30 14:04:31.921068 systemd[1]: Started sshd@10-172.31.23.237:22-139.178.89.65:43358.service - OpenSSH per-connection server daemon (139.178.89.65:43358). 
Jan 30 14:04:32.020049 ntpd[2088]: Listen normally on 6 vxlan.calico 192.168.15.0:123 Jan 30 14:04:32.020179 ntpd[2088]: Listen normally on 7 vxlan.calico [fe80::6476:b5ff:feda:e849%4]:123 Jan 30 14:04:32.020264 ntpd[2088]: Listen normally on 8 cali7cd49c93158 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 30 14:04:32.020340 ntpd[2088]: Listen normally on 9 cali8796e0b155d [fe80::ecee:eeff:feee:eeee%8]:123 Jan 30 14:04:32.020405 ntpd[2088]: Listen normally on 10 calic68bcb08305 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 30 14:04:32.020490 ntpd[2088]: Listen normally on 11 calif4db88f85da [fe80::ecee:eeff:feee:eeee%10]:123 Jan 30 14:04:32.020566 ntpd[2088]: Listen normally on 12 calid70a33a9691 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 30 14:04:32.020633 ntpd[2088]: Listen normally on 13 cali7a13f9601c9 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 30 14:04:32.107528 sshd[5908]: Accepted publickey for core from 139.178.89.65 port 43358 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:32.109173 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:32.119898 systemd-logind[2109]: New session 11 of user core. Jan 30 14:04:32.128755 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 14:04:32.400624 kubelet[3642]: I0130 14:04:32.397026 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d545d88fb-6s48d" podStartSLOduration=29.962150828 podStartE2EDuration="34.39700321s" podCreationTimestamp="2025-01-30 14:03:58 +0000 UTC" firstStartedPulling="2025-01-30 14:04:26.593707997 +0000 UTC m=+52.190185340" lastFinishedPulling="2025-01-30 14:04:31.028560391 +0000 UTC m=+56.625037722" observedRunningTime="2025-01-30 14:04:32.39681949 +0000 UTC m=+57.993296917" watchObservedRunningTime="2025-01-30 14:04:32.39700321 +0000 UTC m=+57.993480565" Jan 30 14:04:32.765774 sshd[5908]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:32.783268 systemd[1]: sshd@10-172.31.23.237:22-139.178.89.65:43358.service: Deactivated successfully. Jan 30 14:04:32.804084 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 14:04:32.823597 systemd-logind[2109]: Session 11 logged out. Waiting for processes to exit. Jan 30 14:04:32.834072 systemd[1]: Started sshd@11-172.31.23.237:22-139.178.89.65:43372.service - OpenSSH per-connection server daemon (139.178.89.65:43372). Jan 30 14:04:32.841787 systemd-logind[2109]: Removed session 11.
Jan 30 14:04:32.957430 containerd[2139]: time="2025-01-30T14:04:32.957324925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.959848 containerd[2139]: time="2025-01-30T14:04:32.959734405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 30 14:04:32.963365 containerd[2139]: time="2025-01-30T14:04:32.963240025Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.971809 containerd[2139]: time="2025-01-30T14:04:32.971178721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:32.975192 containerd[2139]: time="2025-01-30T14:04:32.975120097Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.945001182s" Jan 30 14:04:32.975433 containerd[2139]: time="2025-01-30T14:04:32.975399109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 30 14:04:32.983426 containerd[2139]: time="2025-01-30T14:04:32.983218657Z" level=info msg="CreateContainer within sandbox \"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 14:04:33.029312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount85653852.mount: Deactivated successfully. Jan 30 14:04:33.043455 containerd[2139]: time="2025-01-30T14:04:33.043279209Z" level=info msg="CreateContainer within sandbox \"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"337a2e2232ae8679fc379104607b7b10e75b47facad9a5da4dc8528a4df8836d\"" Jan 30 14:04:33.050213 containerd[2139]: time="2025-01-30T14:04:33.048154605Z" level=info msg="StartContainer for \"337a2e2232ae8679fc379104607b7b10e75b47facad9a5da4dc8528a4df8836d\"" Jan 30 14:04:33.077162 sshd[5944]: Accepted publickey for core from 139.178.89.65 port 43372 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:33.083313 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:33.111482 systemd-logind[2109]: New session 12 of user core. Jan 30 14:04:33.135853 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:04:33.186874 systemd[1]: run-containerd-runc-k8s.io-337a2e2232ae8679fc379104607b7b10e75b47facad9a5da4dc8528a4df8836d-runc.ekyz0m.mount: Deactivated successfully. 
Jan 30 14:04:33.258962 containerd[2139]: time="2025-01-30T14:04:33.258892690Z" level=info msg="StartContainer for \"337a2e2232ae8679fc379104607b7b10e75b47facad9a5da4dc8528a4df8836d\" returns successfully" Jan 30 14:04:33.274550 containerd[2139]: time="2025-01-30T14:04:33.270299902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 14:04:33.525675 sshd[5944]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:33.532955 systemd-logind[2109]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:04:33.533342 systemd[1]: sshd@11-172.31.23.237:22-139.178.89.65:43372.service: Deactivated successfully. Jan 30 14:04:33.542384 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:04:33.546938 systemd-logind[2109]: Removed session 12. Jan 30 14:04:34.664213 containerd[2139]: time="2025-01-30T14:04:34.663092593Z" level=info msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" Jan 30 14:04:34.980584 containerd[2139]: time="2025-01-30T14:04:34.979064943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:34.982356 containerd[2139]: time="2025-01-30T14:04:34.982287675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 30 14:04:34.984797 containerd[2139]: time="2025-01-30T14:04:34.984740235Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:34.995909 containerd[2139]: time="2025-01-30T14:04:34.994438143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:04:34.996803 containerd[2139]: time="2025-01-30T14:04:34.995872215Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.725510261s" Jan 30 14:04:34.997046 containerd[2139]: time="2025-01-30T14:04:34.997013307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 30 14:04:35.002677 containerd[2139]: time="2025-01-30T14:04:35.002627891Z" level=info msg="CreateContainer within sandbox \"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.913 [WARNING][6012] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0", GenerateName:"calico-kube-controllers-5d545d88fb-", Namespace:"calico-system", SelfLink:"", UID:"fbfb33fd-7135-4e07-94f5-b6878a9da2f6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d545d88fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9", Pod:"calico-kube-controllers-5d545d88fb-6s48d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4db88f85da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.914 [INFO][6012] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.914 [INFO][6012] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" iface="eth0" netns="" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.914 [INFO][6012] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.914 [INFO][6012] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.985 [INFO][6018] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.985 [INFO][6018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:34.985 [INFO][6018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:35.003 [WARNING][6018] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:35.003 [INFO][6018] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:35.007 [INFO][6018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:35.013087 containerd[2139]: 2025-01-30 14:04:35.010 [INFO][6012] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.014202 containerd[2139]: time="2025-01-30T14:04:35.013141103Z" level=info msg="TearDown network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" successfully" Jan 30 14:04:35.014202 containerd[2139]: time="2025-01-30T14:04:35.013177655Z" level=info msg="StopPodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" returns successfully" Jan 30 14:04:35.025351 containerd[2139]: time="2025-01-30T14:04:35.023856071Z" level=info msg="RemovePodSandbox for \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" Jan 30 14:04:35.025351 containerd[2139]: time="2025-01-30T14:04:35.023921723Z" level=info msg="Forcibly stopping sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\"" Jan 30 14:04:35.035708 containerd[2139]: time="2025-01-30T14:04:35.035621267Z" level=info msg="CreateContainer within sandbox \"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8eeefec2c99c38ff16c6c42e6aa565ce23de9c59f027111fabdc09ff72b70f42\"" Jan 30 14:04:35.039697 containerd[2139]: time="2025-01-30T14:04:35.039116471Z" level=info msg="StartContainer for \"8eeefec2c99c38ff16c6c42e6aa565ce23de9c59f027111fabdc09ff72b70f42\"" Jan 30 14:04:35.236029 containerd[2139]: time="2025-01-30T14:04:35.235718544Z" level=info msg="StartContainer for \"8eeefec2c99c38ff16c6c42e6aa565ce23de9c59f027111fabdc09ff72b70f42\" returns successfully" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.156 [WARNING][6037] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0", GenerateName:"calico-kube-controllers-5d545d88fb-", Namespace:"calico-system", SelfLink:"", UID:"fbfb33fd-7135-4e07-94f5-b6878a9da2f6", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d545d88fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"69598c382a22be8815bb0289a2890777b57a4be5691d79833c8bd0fb2f74e8e9", Pod:"calico-kube-controllers-5d545d88fb-6s48d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.15.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif4db88f85da", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.157 [INFO][6037] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.157 [INFO][6037] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" iface="eth0" netns="" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.157 [INFO][6037] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.157 [INFO][6037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.225 [INFO][6066] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.225 [INFO][6066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.225 [INFO][6066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.247 [WARNING][6066] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.247 [INFO][6066] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" HandleID="k8s-pod-network.5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Workload="ip--172--31--23--237-k8s-calico--kube--controllers--5d545d88fb--6s48d-eth0" Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.250 [INFO][6066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:35.255510 containerd[2139]: 2025-01-30 14:04:35.252 [INFO][6037] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b" Jan 30 14:04:35.256528 containerd[2139]: time="2025-01-30T14:04:35.255610800Z" level=info msg="TearDown network for sandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" successfully" Jan 30 14:04:35.261166 containerd[2139]: time="2025-01-30T14:04:35.261080220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:35.261338 containerd[2139]: time="2025-01-30T14:04:35.261210900Z" level=info msg="RemovePodSandbox \"5d8cae6caf72c70078017f1309371c5a2cbb19c0b906e0193970f5fef63ca39b\" returns successfully" Jan 30 14:04:35.262249 containerd[2139]: time="2025-01-30T14:04:35.262183560Z" level=info msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.359 [WARNING][6096] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a54a6e6-ac55-4daa-ba3e-bb307511354c", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958", Pod:"csi-node-driver-9nwm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a13f9601c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.359 [INFO][6096] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.359 [INFO][6096] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" iface="eth0" netns="" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.359 [INFO][6096] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.360 [INFO][6096] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.478 [INFO][6119] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.479 [INFO][6119] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.479 [INFO][6119] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.497 [WARNING][6119] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.499 [INFO][6119] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.507 [INFO][6119] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:35.526243 containerd[2139]: 2025-01-30 14:04:35.517 [INFO][6096] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.528107 containerd[2139]: time="2025-01-30T14:04:35.526462910Z" level=info msg="TearDown network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" successfully" Jan 30 14:04:35.528107 containerd[2139]: time="2025-01-30T14:04:35.526563626Z" level=info msg="StopPodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" returns successfully" Jan 30 14:04:35.529766 containerd[2139]: time="2025-01-30T14:04:35.529561598Z" level=info msg="RemovePodSandbox for \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" Jan 30 14:04:35.529766 containerd[2139]: time="2025-01-30T14:04:35.529641590Z" level=info msg="Forcibly stopping sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\"" Jan 30 14:04:35.574061 kubelet[3642]: I0130 14:04:35.573891 3642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9nwm6" podStartSLOduration=31.244983942 podStartE2EDuration="38.573797906s" podCreationTimestamp="2025-01-30 14:03:57 +0000 UTC" firstStartedPulling="2025-01-30 14:04:27.670400215 +0000 UTC m=+53.266877546" lastFinishedPulling="2025-01-30 14:04:34.999214167 +0000 UTC m=+60.595691510" observedRunningTime="2025-01-30 14:04:35.391428253 +0000 UTC m=+60.987905620" watchObservedRunningTime="2025-01-30 14:04:35.573797906 +0000 UTC m=+61.170275249" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.645 [WARNING][6148] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6a54a6e6-ac55-4daa-ba3e-bb307511354c", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"bd8c44fb5ffa9d6aac6db30fef08dd5879ee7e5c17f431519e7f545205e68958", Pod:"csi-node-driver-9nwm6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.15.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a13f9601c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.645 [INFO][6148] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.645 [INFO][6148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" iface="eth0" netns="" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.645 [INFO][6148] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.645 [INFO][6148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.688 [INFO][6154] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.689 [INFO][6154] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.689 [INFO][6154] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.701 [WARNING][6154] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.701 [INFO][6154] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" HandleID="k8s-pod-network.36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Workload="ip--172--31--23--237-k8s-csi--node--driver--9nwm6-eth0" Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.704 [INFO][6154] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:35.710312 containerd[2139]: 2025-01-30 14:04:35.707 [INFO][6148] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea" Jan 30 14:04:35.710312 containerd[2139]: time="2025-01-30T14:04:35.710196675Z" level=info msg="TearDown network for sandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" successfully" Jan 30 14:04:35.715794 containerd[2139]: time="2025-01-30T14:04:35.715693935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:35.715917 containerd[2139]: time="2025-01-30T14:04:35.715872147Z" level=info msg="RemovePodSandbox \"36ecec98b0f14e2efff759c5eb4d2158fc0abdf32806c5849d0f34d30ceb0dea\" returns successfully" Jan 30 14:04:35.716653 containerd[2139]: time="2025-01-30T14:04:35.716592903Z" level=info msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" Jan 30 14:04:35.824107 kubelet[3642]: I0130 14:04:35.823707 3642 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 14:04:35.843560 kubelet[3642]: I0130 14:04:35.842409 3642 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 14:04:35.844206 kubelet[3642]: I0130 14:04:35.842528 3642 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.782 [WARNING][6172] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac1e96c2-de99-4017-b353-ed7792f81a20", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac", Pod:"calico-apiserver-5979bcf7dd-9l7bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd49c93158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.782 [INFO][6172] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.782 [INFO][6172] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" iface="eth0" netns="" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.782 [INFO][6172] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.782 [INFO][6172] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.829 [INFO][6178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.830 [INFO][6178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.830 [INFO][6178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.848 [WARNING][6178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.848 [INFO][6178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.854 [INFO][6178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:35.871560 containerd[2139]: 2025-01-30 14:04:35.860 [INFO][6172] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:35.871560 containerd[2139]: time="2025-01-30T14:04:35.870387531Z" level=info msg="TearDown network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" successfully" Jan 30 14:04:35.871560 containerd[2139]: time="2025-01-30T14:04:35.870425835Z" level=info msg="StopPodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" returns successfully" Jan 30 14:04:35.872875 containerd[2139]: time="2025-01-30T14:04:35.872818011Z" level=info msg="RemovePodSandbox for \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" Jan 30 14:04:35.873505 containerd[2139]: time="2025-01-30T14:04:35.873018939Z" level=info msg="Forcibly stopping sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\"" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:35.987 [WARNING][6197] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ac1e96c2-de99-4017-b353-ed7792f81a20", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"991ef15bac1a0444babf139d17e69cc6f21594ee1f47b85b2d172395c010ebac", Pod:"calico-apiserver-5979bcf7dd-9l7bs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7cd49c93158", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:35.988 [INFO][6197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:35.988 [INFO][6197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" iface="eth0" netns="" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:35.988 [INFO][6197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:35.988 [INFO][6197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.029 [INFO][6205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.029 [INFO][6205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.029 [INFO][6205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.041 [WARNING][6205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.041 [INFO][6205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" HandleID="k8s-pod-network.8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--9l7bs-eth0" Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.044 [INFO][6205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.049777 containerd[2139]: 2025-01-30 14:04:36.046 [INFO][6197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d" Jan 30 14:04:36.051220 containerd[2139]: time="2025-01-30T14:04:36.050230476Z" level=info msg="TearDown network for sandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" successfully" Jan 30 14:04:36.056734 containerd[2139]: time="2025-01-30T14:04:36.056652192Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:36.056963 containerd[2139]: time="2025-01-30T14:04:36.056754708Z" level=info msg="RemovePodSandbox \"8701aa07615bc41abe0d7cca06f0aa8f32146ba193d0e90c8d19dda1c704c51d\" returns successfully" Jan 30 14:04:36.058033 containerd[2139]: time="2025-01-30T14:04:36.057556608Z" level=info msg="StopPodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.119 [WARNING][6223] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba1c580d-2c05-4884-a289-a60900c388b5", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e", Pod:"calico-apiserver-5979bcf7dd-sqhjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796e0b155d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.120 [INFO][6223] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.120 [INFO][6223] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" iface="eth0" netns="" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.120 [INFO][6223] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.120 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.158 [INFO][6229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.158 [INFO][6229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.158 [INFO][6229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.173 [WARNING][6229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.173 [INFO][6229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.175 [INFO][6229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.180571 containerd[2139]: 2025-01-30 14:04:36.178 [INFO][6223] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.180571 containerd[2139]: time="2025-01-30T14:04:36.180514549Z" level=info msg="TearDown network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" successfully" Jan 30 14:04:36.180571 containerd[2139]: time="2025-01-30T14:04:36.180552781Z" level=info msg="StopPodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" returns successfully" Jan 30 14:04:36.183417 containerd[2139]: time="2025-01-30T14:04:36.182712169Z" level=info msg="RemovePodSandbox for \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" Jan 30 14:04:36.183417 containerd[2139]: time="2025-01-30T14:04:36.182787073Z" level=info msg="Forcibly stopping sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\"" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.250 [WARNING][6247] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0", GenerateName:"calico-apiserver-5979bcf7dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba1c580d-2c05-4884-a289-a60900c388b5", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5979bcf7dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"8ed4eca4fd49ed422c46ebc75ec827bfd8283938bb4949850cf3305a8445e20e", Pod:"calico-apiserver-5979bcf7dd-sqhjj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.15.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8796e0b155d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.250 [INFO][6247] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.250 [INFO][6247] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" iface="eth0" netns="" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.251 [INFO][6247] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.251 [INFO][6247] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.292 [INFO][6253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.293 [INFO][6253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.293 [INFO][6253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.307 [WARNING][6253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.307 [INFO][6253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" HandleID="k8s-pod-network.9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Workload="ip--172--31--23--237-k8s-calico--apiserver--5979bcf7dd--sqhjj-eth0" Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.311 [INFO][6253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.320460 containerd[2139]: 2025-01-30 14:04:36.315 [INFO][6247] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07" Jan 30 14:04:36.321336 containerd[2139]: time="2025-01-30T14:04:36.320521142Z" level=info msg="TearDown network for sandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" successfully" Jan 30 14:04:36.332982 containerd[2139]: time="2025-01-30T14:04:36.332874278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:36.332982 containerd[2139]: time="2025-01-30T14:04:36.332973710Z" level=info msg="RemovePodSandbox \"9559745276afc50b9d7db6f032fb9c6c9adfbf39135d6f28caa63f63f93f8c07\" returns successfully" Jan 30 14:04:36.334144 containerd[2139]: time="2025-01-30T14:04:36.333781754Z" level=info msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.425 [WARNING][6272] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd", Pod:"coredns-7db6d8ff4d-8tv4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid70a33a9691", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.426 [INFO][6272] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.426 [INFO][6272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" iface="eth0" netns="" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.426 [INFO][6272] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.426 [INFO][6272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.471 [INFO][6278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.472 [INFO][6278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.472 [INFO][6278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.485 [WARNING][6278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.485 [INFO][6278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.487 [INFO][6278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.492984 containerd[2139]: 2025-01-30 14:04:36.490 [INFO][6272] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.492984 containerd[2139]: time="2025-01-30T14:04:36.492936914Z" level=info msg="TearDown network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" successfully" Jan 30 14:04:36.493948 containerd[2139]: time="2025-01-30T14:04:36.492986918Z" level=info msg="StopPodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" returns successfully" Jan 30 14:04:36.494874 containerd[2139]: time="2025-01-30T14:04:36.494298662Z" level=info msg="RemovePodSandbox for \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" Jan 30 14:04:36.494874 containerd[2139]: time="2025-01-30T14:04:36.494354858Z" level=info msg="Forcibly stopping sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\"" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.559 [WARNING][6296] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ca3bfdf-d2a0-4c68-9823-3c99fa248e2e", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"46adf445e2f9ac749dfaf5ad602bdd2e5242a411e4e226c02bfd6cc81cde90cd", Pod:"coredns-7db6d8ff4d-8tv4w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid70a33a9691", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.560 [INFO][6296] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.560 [INFO][6296] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" iface="eth0" netns="" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.560 [INFO][6296] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.560 [INFO][6296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.600 [INFO][6302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.601 [INFO][6302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.601 [INFO][6302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.613 [WARNING][6302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.614 [INFO][6302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" HandleID="k8s-pod-network.92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--8tv4w-eth0" Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.616 [INFO][6302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.621394 containerd[2139]: 2025-01-30 14:04:36.618 [INFO][6296] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25" Jan 30 14:04:36.622210 containerd[2139]: time="2025-01-30T14:04:36.621442143Z" level=info msg="TearDown network for sandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" successfully" Jan 30 14:04:36.628146 containerd[2139]: time="2025-01-30T14:04:36.627996975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:36.628146 containerd[2139]: time="2025-01-30T14:04:36.628120167Z" level=info msg="RemovePodSandbox \"92f996946a9a26daed14673f166a60be24dd97c15c97b9b6e582938908b0ff25\" returns successfully" Jan 30 14:04:36.629137 containerd[2139]: time="2025-01-30T14:04:36.629089563Z" level=info msg="StopPodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.699 [WARNING][6321] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42c1d2cd-78ec-4529-88c1-93534af9989c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d", Pod:"coredns-7db6d8ff4d-n6wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68bcb08305", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.699 [INFO][6321] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.699 [INFO][6321] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" iface="eth0" netns="" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.699 [INFO][6321] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.699 [INFO][6321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.738 [INFO][6328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.739 [INFO][6328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.739 [INFO][6328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.751 [WARNING][6328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.751 [INFO][6328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.753 [INFO][6328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.761895 containerd[2139]: 2025-01-30 14:04:36.757 [INFO][6321] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.763775 containerd[2139]: time="2025-01-30T14:04:36.762055648Z" level=info msg="TearDown network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" successfully" Jan 30 14:04:36.763775 containerd[2139]: time="2025-01-30T14:04:36.762095536Z" level=info msg="StopPodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" returns successfully" Jan 30 14:04:36.763775 containerd[2139]: time="2025-01-30T14:04:36.762954808Z" level=info msg="RemovePodSandbox for \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" Jan 30 14:04:36.763775 containerd[2139]: time="2025-01-30T14:04:36.763003660Z" level=info msg="Forcibly stopping sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\"" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.834 [WARNING][6346] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"42c1d2cd-78ec-4529-88c1-93534af9989c", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 14, 3, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-237", ContainerID:"96e76aff55d658a5af65216447365f3a1491f4f6b45d0edcc6c3cf1f4c53fe5d", Pod:"coredns-7db6d8ff4d-n6wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.15.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic68bcb08305", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.835 [INFO][6346] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.835 [INFO][6346] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" iface="eth0" netns="" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.835 [INFO][6346] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.835 [INFO][6346] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.874 [INFO][6353] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.874 [INFO][6353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.874 [INFO][6353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.886 [WARNING][6353] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.886 [INFO][6353] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" HandleID="k8s-pod-network.de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Workload="ip--172--31--23--237-k8s-coredns--7db6d8ff4d--n6wgq-eth0" Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.889 [INFO][6353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 14:04:36.894574 containerd[2139]: 2025-01-30 14:04:36.891 [INFO][6346] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a" Jan 30 14:04:36.895372 containerd[2139]: time="2025-01-30T14:04:36.894656344Z" level=info msg="TearDown network for sandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" successfully" Jan 30 14:04:36.902382 containerd[2139]: time="2025-01-30T14:04:36.902283172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 14:04:36.902556 containerd[2139]: time="2025-01-30T14:04:36.902467492Z" level=info msg="RemovePodSandbox \"de88f99a227b917a8d15cd66115f3fd6d9cb063bcca70c895b225c779943765a\" returns successfully" Jan 30 14:04:38.572287 systemd[1]: Started sshd@12-172.31.23.237:22-139.178.89.65:43380.service - OpenSSH per-connection server daemon (139.178.89.65:43380). Jan 30 14:04:38.773727 sshd[6386]: Accepted publickey for core from 139.178.89.65 port 43380 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:38.777798 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:38.786786 systemd-logind[2109]: New session 13 of user core. Jan 30 14:04:38.794045 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 14:04:39.057710 sshd[6386]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:39.063583 systemd[1]: sshd@12-172.31.23.237:22-139.178.89.65:43380.service: Deactivated successfully. Jan 30 14:04:39.073298 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:04:39.077995 systemd-logind[2109]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:04:39.082455 systemd-logind[2109]: Removed session 13. Jan 30 14:04:44.091396 systemd[1]: Started sshd@13-172.31.23.237:22-139.178.89.65:37836.service - OpenSSH per-connection server daemon (139.178.89.65:37836). Jan 30 14:04:44.268116 sshd[6406]: Accepted publickey for core from 139.178.89.65 port 37836 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:44.270762 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:44.280830 systemd-logind[2109]: New session 14 of user core. Jan 30 14:04:44.288991 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 30 14:04:44.543367 sshd[6406]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:44.548883 systemd[1]: sshd@13-172.31.23.237:22-139.178.89.65:37836.service: Deactivated successfully. Jan 30 14:04:44.556293 systemd-logind[2109]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:04:44.557668 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:04:44.560331 systemd-logind[2109]: Removed session 14. Jan 30 14:04:49.572940 systemd[1]: Started sshd@14-172.31.23.237:22-139.178.89.65:37840.service - OpenSSH per-connection server daemon (139.178.89.65:37840). Jan 30 14:04:49.767170 sshd[6423]: Accepted publickey for core from 139.178.89.65 port 37840 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:49.769922 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:49.778205 systemd-logind[2109]: New session 15 of user core. Jan 30 14:04:49.785404 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:04:50.035879 sshd[6423]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:50.044307 systemd[1]: sshd@14-172.31.23.237:22-139.178.89.65:37840.service: Deactivated successfully. Jan 30 14:04:50.053279 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:04:50.054981 systemd-logind[2109]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:04:50.056953 systemd-logind[2109]: Removed session 15. Jan 30 14:04:55.068038 systemd[1]: Started sshd@15-172.31.23.237:22-139.178.89.65:60376.service - OpenSSH per-connection server daemon (139.178.89.65:60376). Jan 30 14:04:55.246915 sshd[6440]: Accepted publickey for core from 139.178.89.65 port 60376 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:55.249691 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:55.257810 systemd-logind[2109]: New session 16 of user core. Jan 30 14:04:55.267142 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:04:55.543538 sshd[6440]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:55.550754 systemd[1]: sshd@15-172.31.23.237:22-139.178.89.65:60376.service: Deactivated successfully. Jan 30 14:04:55.558275 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:04:55.560684 systemd-logind[2109]: Session 16 logged out. Waiting for processes to exit. Jan 30 14:04:55.564547 systemd-logind[2109]: Removed session 16. Jan 30 14:04:55.578932 systemd[1]: Started sshd@16-172.31.23.237:22-139.178.89.65:60390.service - OpenSSH per-connection server daemon (139.178.89.65:60390). Jan 30 14:04:55.763761 sshd[6453]: Accepted publickey for core from 139.178.89.65 port 60390 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:55.767062 sshd[6453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:55.776064 systemd-logind[2109]: New session 17 of user core. Jan 30 14:04:55.782071 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:04:56.315060 sshd[6453]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:56.321185 systemd[1]: sshd@16-172.31.23.237:22-139.178.89.65:60390.service: Deactivated successfully. Jan 30 14:04:56.329968 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:04:56.332007 systemd-logind[2109]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:04:56.334777 systemd-logind[2109]: Removed session 17. 
Jan 30 14:04:56.344982 systemd[1]: Started sshd@17-172.31.23.237:22-139.178.89.65:60398.service - OpenSSH per-connection server daemon (139.178.89.65:60398). Jan 30 14:04:56.530495 sshd[6465]: Accepted publickey for core from 139.178.89.65 port 60398 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:04:56.533342 sshd[6465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:04:56.543808 systemd-logind[2109]: New session 18 of user core. Jan 30 14:04:56.548954 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 14:04:58.919809 update_engine[2112]: I20250130 14:04:58.918740 2112 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 14:04:58.919809 update_engine[2112]: I20250130 14:04:58.918819 2112 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 14:04:58.919809 update_engine[2112]: I20250130 14:04:58.919167 2112 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 14:04:58.924505 update_engine[2112]: I20250130 14:04:58.924386 2112 omaha_request_params.cc:62] Current group set to lts Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.925784 2112 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.925838 2112 update_attempter.cc:643] Scheduling an action processor start. Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.925877 2112 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.925956 2112 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.926079 2112 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.926100 2112 omaha_request_action.cc:272] Request: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: Jan 30 14:04:58.926559 update_engine[2112]: I20250130 14:04:58.926117 2112 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:04:58.928871 locksmithd[2167]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 14:04:58.931559 update_engine[2112]: I20250130 14:04:58.930886 2112 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:04:58.931779 update_engine[2112]: I20250130 14:04:58.931722 2112 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:04:58.933935 update_engine[2112]: E20250130 14:04:58.933873 2112 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:04:58.934211 update_engine[2112]: I20250130 14:04:58.934176 2112 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 14:04:59.928579 sshd[6465]: pam_unix(sshd:session): session closed for user core Jan 30 14:04:59.938647 systemd[1]: sshd@17-172.31.23.237:22-139.178.89.65:60398.service: Deactivated successfully. Jan 30 14:04:59.943043 systemd-logind[2109]: Session 18 logged out. Waiting for processes to exit. 
Jan 30 14:04:59.979239 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:04:59.982111 systemd-logind[2109]: Removed session 18. Jan 30 14:04:59.989991 systemd[1]: Started sshd@18-172.31.23.237:22-139.178.89.65:60406.service - OpenSSH per-connection server daemon (139.178.89.65:60406). Jan 30 14:05:00.185181 sshd[6511]: Accepted publickey for core from 139.178.89.65 port 60406 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:00.188543 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:00.196276 systemd-logind[2109]: New session 19 of user core. Jan 30 14:05:00.205051 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:05:00.788367 sshd[6511]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:00.797021 systemd[1]: sshd@18-172.31.23.237:22-139.178.89.65:60406.service: Deactivated successfully. Jan 30 14:05:00.805857 systemd-logind[2109]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:05:00.812939 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:05:00.829446 systemd[1]: Started sshd@19-172.31.23.237:22-139.178.89.65:60414.service - OpenSSH per-connection server daemon (139.178.89.65:60414). Jan 30 14:05:00.830267 systemd-logind[2109]: Removed session 19. Jan 30 14:05:01.018866 sshd[6523]: Accepted publickey for core from 139.178.89.65 port 60414 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:01.022532 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:01.030986 systemd-logind[2109]: New session 20 of user core. Jan 30 14:05:01.040087 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:05:01.279091 sshd[6523]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:01.288395 systemd[1]: sshd@19-172.31.23.237:22-139.178.89.65:60414.service: Deactivated successfully. Jan 30 14:05:01.295729 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:05:01.298024 systemd-logind[2109]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:05:01.300333 systemd-logind[2109]: Removed session 20. Jan 30 14:05:06.312964 systemd[1]: Started sshd@20-172.31.23.237:22-139.178.89.65:47194.service - OpenSSH per-connection server daemon (139.178.89.65:47194). Jan 30 14:05:06.500225 sshd[6557]: Accepted publickey for core from 139.178.89.65 port 47194 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:06.503311 sshd[6557]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:06.510948 systemd-logind[2109]: New session 21 of user core. Jan 30 14:05:06.520120 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 14:05:06.762103 sshd[6557]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:06.768841 systemd[1]: sshd@20-172.31.23.237:22-139.178.89.65:47194.service: Deactivated successfully. Jan 30 14:05:06.776121 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 14:05:06.778097 systemd-logind[2109]: Session 21 logged out. Waiting for processes to exit. Jan 30 14:05:06.780264 systemd-logind[2109]: Removed session 21. 
Jan 30 14:05:08.917833 update_engine[2112]: I20250130 14:05:08.917734 2112 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:05:08.918425 update_engine[2112]: I20250130 14:05:08.918092 2112 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:05:08.918527 update_engine[2112]: I20250130 14:05:08.918418 2112 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:05:08.919204 update_engine[2112]: E20250130 14:05:08.918857 2112 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:05:08.919204 update_engine[2112]: I20250130 14:05:08.918953 2112 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 14:05:11.793002 systemd[1]: Started sshd@21-172.31.23.237:22-139.178.89.65:50238.service - OpenSSH per-connection server daemon (139.178.89.65:50238). Jan 30 14:05:11.970780 sshd[6595]: Accepted publickey for core from 139.178.89.65 port 50238 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:11.973651 sshd[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:11.981125 systemd-logind[2109]: New session 22 of user core. Jan 30 14:05:11.988993 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 14:05:12.226803 sshd[6595]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:12.232555 systemd[1]: sshd@21-172.31.23.237:22-139.178.89.65:50238.service: Deactivated successfully. Jan 30 14:05:12.240140 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 14:05:12.242730 systemd-logind[2109]: Session 22 logged out. Waiting for processes to exit. Jan 30 14:05:12.246564 systemd-logind[2109]: Removed session 22. Jan 30 14:05:17.257757 systemd[1]: Started sshd@22-172.31.23.237:22-139.178.89.65:50250.service - OpenSSH per-connection server daemon (139.178.89.65:50250). Jan 30 14:05:17.443725 sshd[6609]: Accepted publickey for core from 139.178.89.65 port 50250 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:17.446387 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:17.454341 systemd-logind[2109]: New session 23 of user core. Jan 30 14:05:17.462610 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 14:05:17.769119 sshd[6609]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:17.782755 systemd[1]: sshd@22-172.31.23.237:22-139.178.89.65:50250.service: Deactivated successfully. Jan 30 14:05:17.793544 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 14:05:17.798970 systemd-logind[2109]: Session 23 logged out. Waiting for processes to exit. Jan 30 14:05:17.801304 systemd-logind[2109]: Removed session 23. Jan 30 14:05:18.917530 update_engine[2112]: I20250130 14:05:18.917294 2112 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:05:18.918132 update_engine[2112]: I20250130 14:05:18.917860 2112 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:05:18.918267 update_engine[2112]: I20250130 14:05:18.918193 2112 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 30 14:05:18.918694 update_engine[2112]: E20250130 14:05:18.918639 2112 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:05:18.918761 update_engine[2112]: I20250130 14:05:18.918727 2112 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 14:05:22.800543 systemd[1]: Started sshd@23-172.31.23.237:22-139.178.89.65:44754.service - OpenSSH per-connection server daemon (139.178.89.65:44754). Jan 30 14:05:22.983108 sshd[6625]: Accepted publickey for core from 139.178.89.65 port 44754 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:22.985217 sshd[6625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:22.997002 systemd-logind[2109]: New session 24 of user core. Jan 30 14:05:23.003206 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 14:05:23.274797 sshd[6625]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:23.282457 systemd[1]: sshd@23-172.31.23.237:22-139.178.89.65:44754.service: Deactivated successfully. Jan 30 14:05:23.293792 systemd-logind[2109]: Session 24 logged out. Waiting for processes to exit. Jan 30 14:05:23.295356 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 14:05:23.300388 systemd-logind[2109]: Removed session 24. Jan 30 14:05:28.307970 systemd[1]: Started sshd@24-172.31.23.237:22-139.178.89.65:44768.service - OpenSSH per-connection server daemon (139.178.89.65:44768). Jan 30 14:05:28.489433 sshd[6638]: Accepted publickey for core from 139.178.89.65 port 44768 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:28.492302 sshd[6638]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:28.500650 systemd-logind[2109]: New session 25 of user core. Jan 30 14:05:28.509985 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 14:05:28.747966 sshd[6638]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:28.753119 systemd[1]: sshd@24-172.31.23.237:22-139.178.89.65:44768.service: Deactivated successfully. Jan 30 14:05:28.761358 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 14:05:28.763925 systemd-logind[2109]: Session 25 logged out. Waiting for processes to exit. Jan 30 14:05:28.765608 systemd-logind[2109]: Removed session 25. Jan 30 14:05:28.916530 update_engine[2112]: I20250130 14:05:28.916345 2112 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:05:28.917153 update_engine[2112]: I20250130 14:05:28.916724 2112 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:05:28.917153 update_engine[2112]: I20250130 14:05:28.917097 2112 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:05:28.917622 update_engine[2112]: E20250130 14:05:28.917565 2112 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:05:28.917725 update_engine[2112]: I20250130 14:05:28.917654 2112 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:05:28.917725 update_engine[2112]: I20250130 14:05:28.917675 2112 omaha_request_action.cc:617] Omaha request response: Jan 30 14:05:28.917833 update_engine[2112]: E20250130 14:05:28.917788 2112 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 14:05:28.917998 update_engine[2112]: I20250130 14:05:28.917829 2112 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. 
Aborting processing. Jan 30 14:05:28.917998 update_engine[2112]: I20250130 14:05:28.917846 2112 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:05:28.917998 update_engine[2112]: I20250130 14:05:28.917929 2112 update_attempter.cc:306] Processing Done. Jan 30 14:05:28.918170 update_engine[2112]: E20250130 14:05:28.917994 2112 update_attempter.cc:619] Update failed. Jan 30 14:05:28.918170 update_engine[2112]: I20250130 14:05:28.918015 2112 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 14:05:28.918170 update_engine[2112]: I20250130 14:05:28.918030 2112 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 14:05:28.918170 update_engine[2112]: I20250130 14:05:28.918045 2112 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 14:05:28.918409 update_engine[2112]: I20250130 14:05:28.918174 2112 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 14:05:28.918409 update_engine[2112]: I20250130 14:05:28.918217 2112 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 14:05:28.918409 update_engine[2112]: I20250130 14:05:28.918236 2112 omaha_request_action.cc:272] Request: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: Jan 30 14:05:28.918409 update_engine[2112]: I20250130 14:05:28.918253 2112 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 14:05:28.918990 update_engine[2112]: I20250130 14:05:28.918815 2112 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 14:05:28.919430 update_engine[2112]: I20250130 14:05:28.919159 2112 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 14:05:28.919545 locksmithd[2167]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 14:05:28.920082 update_engine[2112]: E20250130 14:05:28.919574 2112 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919652 2112 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919672 2112 omaha_request_action.cc:617] Omaha request response: Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919689 2112 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919706 2112 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919721 2112 update_attempter.cc:306] Processing Done. Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919739 2112 update_attempter.cc:310] Error event sent. 
Jan 30 14:05:28.920082 update_engine[2112]: I20250130 14:05:28.919759 2112 update_check_scheduler.cc:74] Next update check in 42m51s Jan 30 14:05:28.920585 locksmithd[2167]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 14:05:33.779507 systemd[1]: Started sshd@25-172.31.23.237:22-139.178.89.65:41250.service - OpenSSH per-connection server daemon (139.178.89.65:41250). Jan 30 14:05:33.956635 sshd[6652]: Accepted publickey for core from 139.178.89.65 port 41250 ssh2: RSA SHA256:gRn6z0KbdU+P7yMIlOZipkUtLq/1gbxnw9j88KTcRNE Jan 30 14:05:33.958979 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:05:33.966190 systemd-logind[2109]: New session 26 of user core. Jan 30 14:05:33.974075 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 14:05:34.207260 sshd[6652]: pam_unix(sshd:session): session closed for user core Jan 30 14:05:34.213360 systemd-logind[2109]: Session 26 logged out. Waiting for processes to exit. Jan 30 14:05:34.213776 systemd[1]: sshd@25-172.31.23.237:22-139.178.89.65:41250.service: Deactivated successfully. Jan 30 14:05:34.222167 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 14:05:34.224190 systemd-logind[2109]: Removed session 26. Jan 30 14:05:47.355305 containerd[2139]: time="2025-01-30T14:05:47.355167490Z" level=info msg="shim disconnected" id=0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52 namespace=k8s.io Jan 30 14:05:47.358245 containerd[2139]: time="2025-01-30T14:05:47.357521338Z" level=warning msg="cleaning up after shim disconnected" id=0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52 namespace=k8s.io Jan 30 14:05:47.358245 containerd[2139]: time="2025-01-30T14:05:47.357569002Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:05:47.359925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52-rootfs.mount: Deactivated successfully. Jan 30 14:05:47.606513 kubelet[3642]: I0130 14:05:47.606330 3642 scope.go:117] "RemoveContainer" containerID="0fa76f116b26397a98d89c66d448483456235a479cf59b5044c7402ba2581f52" Jan 30 14:05:47.612548 containerd[2139]: time="2025-01-30T14:05:47.612453720Z" level=info msg="CreateContainer within sandbox \"4289da83ba7cb887f141ed6b0c44886898eb0b26ca3dbfc3432028062f614127\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 30 14:05:47.651335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount36402592.mount: Deactivated successfully. 
Jan 30 14:05:47.652096 containerd[2139]: time="2025-01-30T14:05:47.652038768Z" level=info msg="CreateContainer within sandbox \"4289da83ba7cb887f141ed6b0c44886898eb0b26ca3dbfc3432028062f614127\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"0eaee7decd63640791fc879b61f0c0df45b8b2f6fb6777d1778ac192dd405739\"" Jan 30 14:05:47.654938 containerd[2139]: time="2025-01-30T14:05:47.654846084Z" level=info msg="StartContainer for \"0eaee7decd63640791fc879b61f0c0df45b8b2f6fb6777d1778ac192dd405739\"" Jan 30 14:05:47.780128 containerd[2139]: time="2025-01-30T14:05:47.779971489Z" level=info msg="StartContainer for \"0eaee7decd63640791fc879b61f0c0df45b8b2f6fb6777d1778ac192dd405739\" returns successfully" Jan 30 14:05:47.848510 kubelet[3642]: E0130 14:05:47.848149 3642 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-237)" Jan 30 14:05:49.134827 containerd[2139]: time="2025-01-30T14:05:49.134728079Z" level=info msg="shim disconnected" id=2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062 namespace=k8s.io Jan 30 14:05:49.134827 containerd[2139]: time="2025-01-30T14:05:49.134814659Z" level=warning msg="cleaning up after shim disconnected" id=2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062 namespace=k8s.io Jan 30 14:05:49.135431 containerd[2139]: time="2025-01-30T14:05:49.134838155Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:05:49.139389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062-rootfs.mount: Deactivated successfully. Jan 30 14:05:49.167228 containerd[2139]: time="2025-01-30T14:05:49.165269075Z" level=warning msg="cleanup warnings time=\"2025-01-30T14:05:49Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 14:05:49.620463 kubelet[3642]: I0130 14:05:49.620413 3642 scope.go:117] "RemoveContainer" containerID="2a35a4a978482bc9359223f1172b6523f7f897bbee4aa6f817bb3da06f97c062" Jan 30 14:05:49.630688 containerd[2139]: time="2025-01-30T14:05:49.630623906Z" level=info msg="CreateContainer within sandbox \"8316a491434f5b97efac1f5a24642b772906fc7e8101fbe28ebf30228168c785\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 30 14:05:49.654765 containerd[2139]: time="2025-01-30T14:05:49.654695474Z" level=info msg="CreateContainer within sandbox \"8316a491434f5b97efac1f5a24642b772906fc7e8101fbe28ebf30228168c785\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e1c61f10d0c2f44858f2ebe92840c2031fa1400758f498524b9bf2d33c94a0ae\"" Jan 30 14:05:49.656967 containerd[2139]: time="2025-01-30T14:05:49.656861582Z" level=info msg="StartContainer for \"e1c61f10d0c2f44858f2ebe92840c2031fa1400758f498524b9bf2d33c94a0ae\"" Jan 30 14:05:49.745657 systemd[1]: run-containerd-runc-k8s.io-e1c61f10d0c2f44858f2ebe92840c2031fa1400758f498524b9bf2d33c94a0ae-runc.BuzoFv.mount: Deactivated successfully. 
Jan 30 14:05:49.816234 containerd[2139]: time="2025-01-30T14:05:49.816165507Z" level=info msg="StartContainer for \"e1c61f10d0c2f44858f2ebe92840c2031fa1400758f498524b9bf2d33c94a0ae\" returns successfully" Jan 30 14:05:52.611066 containerd[2139]: time="2025-01-30T14:05:52.610968533Z" level=info msg="shim disconnected" id=6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b namespace=k8s.io Jan 30 14:05:52.611066 containerd[2139]: time="2025-01-30T14:05:52.611047877Z" level=warning msg="cleaning up after shim disconnected" id=6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b namespace=k8s.io Jan 30 14:05:52.612043 containerd[2139]: time="2025-01-30T14:05:52.611072333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:05:52.612946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b-rootfs.mount: Deactivated successfully. Jan 30 14:05:53.643559 kubelet[3642]: I0130 14:05:53.642891 3642 scope.go:117] "RemoveContainer" containerID="6b19f33363b9a5d336f55d528a808d8c5ebca808d754fa741e33f3f85809da3b" Jan 30 14:05:53.648160 containerd[2139]: time="2025-01-30T14:05:53.647905338Z" level=info msg="CreateContainer within sandbox \"ab7c63b19dbc1035bbc0e94801e82aca6bad635c5bc4d9b09a2350072d3cd1de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 30 14:05:53.676811 containerd[2139]: time="2025-01-30T14:05:53.676275990Z" level=info msg="CreateContainer within sandbox \"ab7c63b19dbc1035bbc0e94801e82aca6bad635c5bc4d9b09a2350072d3cd1de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8b4bd792aba98b73dafe9c49bbaf4258ae2e7d3c103c90d8bb93824859b40821\"" Jan 30 14:05:53.677906 containerd[2139]: time="2025-01-30T14:05:53.677848770Z" level=info msg="StartContainer for \"8b4bd792aba98b73dafe9c49bbaf4258ae2e7d3c103c90d8bb93824859b40821\"" Jan 30 14:05:53.804698 containerd[2139]: time="2025-01-30T14:05:53.804582450Z" level=info msg="StartContainer for \"8b4bd792aba98b73dafe9c49bbaf4258ae2e7d3c103c90d8bb93824859b40821\" returns successfully" Jan 30 14:05:57.849530 kubelet[3642]: E0130 14:05:57.849296 3642 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.237:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-237?timeout=10s\": context deadline exceeded" Jan 30 14:06:07.850849 kubelet[3642]: E0130 14:06:07.850769 3642 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-23-237)"