Dec 13 01:53:30.194731 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Dec 13 01:53:30.194777 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:53:30.194804 kernel: KASLR disabled due to lack of seed Dec 13 01:53:30.194821 kernel: efi: EFI v2.7 by EDK II Dec 13 01:53:30.194837 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Dec 13 01:53:30.194854 kernel: ACPI: Early table checksum verification disabled Dec 13 01:53:30.194871 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Dec 13 01:53:30.194887 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Dec 13 01:53:30.194903 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Dec 13 01:53:30.194919 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Dec 13 01:53:30.194941 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Dec 13 01:53:30.194957 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Dec 13 01:53:30.194973 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Dec 13 01:53:30.194989 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Dec 13 01:53:30.195008 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Dec 13 01:53:30.195029 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Dec 13 01:53:30.195047 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Dec 13 01:53:30.195063 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Dec 13 01:53:30.195080 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Dec 13 01:53:30.195096 kernel: printk: bootconsole [uart0] enabled Dec 13 01:53:30.195113 kernel: NUMA: Failed to initialise from firmware Dec 13 01:53:30.195131 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:53:30.195148 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Dec 13 01:53:30.195166 kernel: Zone ranges: Dec 13 01:53:30.195183 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Dec 13 01:53:30.195228 kernel: DMA32 empty Dec 13 01:53:30.195258 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Dec 13 01:53:30.195275 kernel: Movable zone start for each node Dec 13 01:53:30.195292 kernel: Early memory node ranges Dec 13 01:53:30.195309 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Dec 13 01:53:30.195326 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Dec 13 01:53:30.195342 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Dec 13 01:53:30.195360 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Dec 13 01:53:30.195376 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Dec 13 01:53:30.195392 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Dec 13 01:53:30.195409 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Dec 13 01:53:30.195425 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Dec 13 01:53:30.195442 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Dec 13 01:53:30.195486 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Dec 13 01:53:30.195504 kernel: psci: probing for conduit method from ACPI. Dec 13 01:53:30.195529 kernel: psci: PSCIv1.0 detected in firmware. Dec 13 01:53:30.195547 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:53:30.195565 kernel: psci: Trusted OS migration not required Dec 13 01:53:30.195586 kernel: psci: SMC Calling Convention v1.1 Dec 13 01:53:30.195604 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:53:30.195622 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:53:30.195639 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:53:30.195657 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:53:30.195674 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:53:30.195692 kernel: CPU features: detected: Spectre-v2 Dec 13 01:53:30.195710 kernel: CPU features: detected: Spectre-v3a Dec 13 01:53:30.195727 kernel: CPU features: detected: Spectre-BHB Dec 13 01:53:30.195745 kernel: CPU features: detected: ARM erratum 1742098 Dec 13 01:53:30.195763 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Dec 13 01:53:30.195785 kernel: alternatives: applying boot alternatives Dec 13 01:53:30.195806 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:53:30.195825 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:53:30.195844 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:53:30.195862 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:53:30.195880 kernel: Fallback order for Node 0: 0 Dec 13 01:53:30.195897 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Dec 13 01:53:30.195914 kernel: Policy zone: Normal Dec 13 01:53:30.195931 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:53:30.195948 kernel: software IO TLB: area num 2. Dec 13 01:53:30.195965 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Dec 13 01:53:30.195988 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved) Dec 13 01:53:30.196006 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:53:30.196023 kernel: trace event string verifier disabled Dec 13 01:53:30.196040 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:53:30.196058 kernel: rcu: RCU event tracing is enabled. Dec 13 01:53:30.196077 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:53:30.196094 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:53:30.196112 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:53:30.196129 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
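The command line logged above mixes standard kernel parameters with Flatcar-specific ones (flatcar.first_boot, flatcar.oem.id, verity.usrhash). As a rough illustration of how such a string decomposes, the Python sketch below splits it into bare flags and key=value pairs; the parameter names are taken from this log, but the parser itself is illustrative, not Flatcar's actual implementation (which lives in dracut and Ignition).

    # Illustrative sketch: split a kernel command line of the shape logged
    # above into bare flags and key=value parameters. Repeated keys such as
    # console= keep only their last value in this simplified version.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else True  # bare flag -> True
        return params

    with open("/proc/cmdline") as f:
        params = parse_cmdline(f.read())

    print(params.get("flatcar.oem.id"))        # "ec2" on this boot
    print(params.get("nvme_core.io_timeout"))  # "4294967295"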
Dec 13 01:53:30.196147 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:53:30.196164 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:53:30.196185 kernel: GICv3: 96 SPIs implemented Dec 13 01:53:30.198469 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:53:30.198500 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:53:30.198518 kernel: GICv3: GICv3 features: 16 PPIs Dec 13 01:53:30.198536 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Dec 13 01:53:30.198553 kernel: ITS [mem 0x10080000-0x1009ffff] Dec 13 01:53:30.198570 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Dec 13 01:53:30.198589 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Dec 13 01:53:30.198606 kernel: GICv3: using LPI property table @0x00000004000d0000 Dec 13 01:53:30.198623 kernel: ITS: Using hypervisor restricted LPI range [128] Dec 13 01:53:30.198641 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Dec 13 01:53:30.198658 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:53:30.198685 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Dec 13 01:53:30.198703 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Dec 13 01:53:30.198720 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Dec 13 01:53:30.198738 kernel: Console: colour dummy device 80x25 Dec 13 01:53:30.198756 kernel: printk: console [tty1] enabled Dec 13 01:53:30.198774 kernel: ACPI: Core revision 20230628 Dec 13 01:53:30.198792 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Dec 13 01:53:30.198810 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:53:30.198827 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:53:30.198845 kernel: landlock: Up and running. Dec 13 01:53:30.198868 kernel: SELinux: Initializing. Dec 13 01:53:30.198885 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:53:30.198903 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:53:30.198921 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:53:30.198939 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:53:30.198956 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:53:30.198975 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:53:30.198992 kernel: Platform MSI: ITS@0x10080000 domain created Dec 13 01:53:30.199014 kernel: PCI/MSI: ITS@0x10080000 domain created Dec 13 01:53:30.199032 kernel: Remapping and enabling EFI services. Dec 13 01:53:30.199049 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:53:30.199066 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:53:30.199084 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Dec 13 01:53:30.199102 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Dec 13 01:53:30.199119 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Dec 13 01:53:30.199137 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:53:30.199154 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:53:30.199172 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:53:30.199193 kernel: CPU features: detected: 32-bit EL1 Support Dec 13 01:53:30.199232 kernel: CPU features: detected: CRC32 instructions Dec 13 01:53:30.199269 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:53:30.199294 kernel: alternatives: applying system-wide alternatives Dec 13 01:53:30.199314 kernel: devtmpfs: initialized Dec 13 01:53:30.199333 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:53:30.199352 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:53:30.199371 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:53:30.199390 kernel: SMBIOS 3.0.0 present. Dec 13 01:53:30.199414 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Dec 13 01:53:30.199432 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:53:30.199471 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:53:30.199493 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:53:30.199512 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:53:30.199531 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:53:30.199549 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1 Dec 13 01:53:30.199573 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:53:30.199592 kernel: cpuidle: using governor menu Dec 13 01:53:30.199611 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Dec 13 01:53:30.199629 kernel: ASID allocator initialised with 65536 entries Dec 13 01:53:30.199647 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:53:30.199666 kernel: Serial: AMBA PL011 UART driver Dec 13 01:53:30.199684 kernel: Modules: 17520 pages in range for non-PLT usage Dec 13 01:53:30.199702 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:53:30.199721 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:53:30.199743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:53:30.199763 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:53:30.199781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:53:30.199800 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:53:30.199818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:53:30.199837 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:53:30.199855 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:53:30.199874 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:53:30.199892 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:53:30.199915 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:53:30.199934 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:53:30.199952 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:53:30.199970 kernel: ACPI: Interpreter enabled Dec 13 01:53:30.199989 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:53:30.200007 kernel: ACPI: MCFG table detected, 1 entries Dec 13 01:53:30.200026 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Dec 13 01:53:30.202467 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 13 01:53:30.202751 kernel: acpi 
PNP0A08:00: _OSC: platform does not support [LTR] Dec 13 01:53:30.202982 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 13 01:53:30.207260 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Dec 13 01:53:30.207571 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Dec 13 01:53:30.207600 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Dec 13 01:53:30.207620 kernel: acpiphp: Slot [1] registered Dec 13 01:53:30.207639 kernel: acpiphp: Slot [2] registered Dec 13 01:53:30.207657 kernel: acpiphp: Slot [3] registered Dec 13 01:53:30.207685 kernel: acpiphp: Slot [4] registered Dec 13 01:53:30.207704 kernel: acpiphp: Slot [5] registered Dec 13 01:53:30.207723 kernel: acpiphp: Slot [6] registered Dec 13 01:53:30.207741 kernel: acpiphp: Slot [7] registered Dec 13 01:53:30.207760 kernel: acpiphp: Slot [8] registered Dec 13 01:53:30.207778 kernel: acpiphp: Slot [9] registered Dec 13 01:53:30.207796 kernel: acpiphp: Slot [10] registered Dec 13 01:53:30.207814 kernel: acpiphp: Slot [11] registered Dec 13 01:53:30.207833 kernel: acpiphp: Slot [12] registered Dec 13 01:53:30.207851 kernel: acpiphp: Slot [13] registered Dec 13 01:53:30.207874 kernel: acpiphp: Slot [14] registered Dec 13 01:53:30.207892 kernel: acpiphp: Slot [15] registered Dec 13 01:53:30.207910 kernel: acpiphp: Slot [16] registered Dec 13 01:53:30.207929 kernel: acpiphp: Slot [17] registered Dec 13 01:53:30.207947 kernel: acpiphp: Slot [18] registered Dec 13 01:53:30.207965 kernel: acpiphp: Slot [19] registered Dec 13 01:53:30.207983 kernel: acpiphp: Slot [20] registered Dec 13 01:53:30.208001 kernel: acpiphp: Slot [21] registered Dec 13 01:53:30.208020 kernel: acpiphp: Slot [22] registered Dec 13 01:53:30.208042 kernel: acpiphp: Slot [23] registered Dec 13 01:53:30.208061 kernel: acpiphp: Slot [24] registered Dec 13 01:53:30.208079 kernel: acpiphp: Slot [25] registered Dec 13 01:53:30.208097 kernel: acpiphp: Slot [26] registered Dec 13 01:53:30.208115 kernel: acpiphp: Slot [27] registered Dec 13 01:53:30.208134 kernel: acpiphp: Slot [28] registered Dec 13 01:53:30.208152 kernel: acpiphp: Slot [29] registered Dec 13 01:53:30.208170 kernel: acpiphp: Slot [30] registered Dec 13 01:53:30.208188 kernel: acpiphp: Slot [31] registered Dec 13 01:53:30.208263 kernel: PCI host bridge to bus 0000:00 Dec 13 01:53:30.208484 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Dec 13 01:53:30.208699 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Dec 13 01:53:30.208900 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Dec 13 01:53:30.209096 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Dec 13 01:53:30.211719 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Dec 13 01:53:30.212069 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Dec 13 01:53:30.212434 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Dec 13 01:53:30.212724 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Dec 13 01:53:30.212969 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Dec 13 01:53:30.213236 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:53:30.213504 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Dec 13 01:53:30.213727 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Dec 13 01:53:30.213980 kernel: pci 0000:00:05.0: reg 0x18: [mem 
0x80000000-0x800fffff pref] Dec 13 01:53:30.214270 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Dec 13 01:53:30.214492 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Dec 13 01:53:30.214705 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Dec 13 01:53:30.214913 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Dec 13 01:53:30.215126 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Dec 13 01:53:30.215434 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Dec 13 01:53:30.215665 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Dec 13 01:53:30.215861 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Dec 13 01:53:30.216039 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Dec 13 01:53:30.216249 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Dec 13 01:53:30.216277 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Dec 13 01:53:30.216297 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Dec 13 01:53:30.216316 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Dec 13 01:53:30.216335 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Dec 13 01:53:30.216353 kernel: iommu: Default domain type: Translated Dec 13 01:53:30.216379 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:53:30.216398 kernel: efivars: Registered efivars operations Dec 13 01:53:30.216416 kernel: vgaarb: loaded Dec 13 01:53:30.216434 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:53:30.216453 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:53:30.216471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:53:30.216490 kernel: pnp: PnP ACPI init Dec 13 01:53:30.216728 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Dec 13 01:53:30.216763 kernel: pnp: PnP ACPI: found 1 devices Dec 13 01:53:30.216783 kernel: NET: Registered PF_INET protocol family Dec 13 01:53:30.216801 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:53:30.216820 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:53:30.216839 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:53:30.216858 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:53:30.216877 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:53:30.216896 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:53:30.216914 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:53:30.216937 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:53:30.216956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:53:30.216975 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:53:30.216994 kernel: kvm [1]: HYP mode not available Dec 13 01:53:30.217012 kernel: Initialise system trusted keyrings Dec 13 01:53:30.217031 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:53:30.217049 kernel: Key type asymmetric registered Dec 13 01:53:30.217067 kernel: Asymmetric key parser 'x509' registered Dec 13 01:53:30.217086 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:53:30.217108 kernel: io scheduler mq-deadline registered Dec 13 
01:53:30.217127 kernel: io scheduler kyber registered Dec 13 01:53:30.217146 kernel: io scheduler bfq registered Dec 13 01:53:30.217410 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Dec 13 01:53:30.217442 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Dec 13 01:53:30.217462 kernel: ACPI: button: Power Button [PWRB] Dec 13 01:53:30.217482 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Dec 13 01:53:30.217501 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 01:53:30.217528 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:53:30.217548 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Dec 13 01:53:30.217878 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Dec 13 01:53:30.217907 kernel: printk: console [ttyS0] disabled Dec 13 01:53:30.217927 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Dec 13 01:53:30.217946 kernel: printk: console [ttyS0] enabled Dec 13 01:53:30.217965 kernel: printk: bootconsole [uart0] disabled Dec 13 01:53:30.217984 kernel: thunder_xcv, ver 1.0 Dec 13 01:53:30.218003 kernel: thunder_bgx, ver 1.0 Dec 13 01:53:30.218021 kernel: nicpf, ver 1.0 Dec 13 01:53:30.218047 kernel: nicvf, ver 1.0 Dec 13 01:53:30.218354 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:53:30.218549 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:53:29 UTC (1734054809) Dec 13 01:53:30.218575 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:53:30.218594 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Dec 13 01:53:30.218613 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:53:30.218632 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:53:30.218656 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:53:30.218675 kernel: Segment Routing with IPv6 Dec 13 01:53:30.218693 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:53:30.218711 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:53:30.218729 kernel: Key type dns_resolver registered Dec 13 01:53:30.218747 kernel: registered taskstats version 1 Dec 13 01:53:30.218766 kernel: Loading compiled-in X.509 certificates Dec 13 01:53:30.218784 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:53:30.218802 kernel: Key type .fscrypt registered Dec 13 01:53:30.218820 kernel: Key type fscrypt-provisioning registered Dec 13 01:53:30.218843 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:53:30.218861 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:53:30.218880 kernel: ima: No architecture policies found Dec 13 01:53:30.218898 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:53:30.218917 kernel: clk: Disabling unused clocks Dec 13 01:53:30.218935 kernel: Freeing unused kernel memory: 39360K Dec 13 01:53:30.218954 kernel: Run /init as init process Dec 13 01:53:30.218972 kernel: with arguments: Dec 13 01:53:30.218990 kernel: /init Dec 13 01:53:30.219011 kernel: with environment: Dec 13 01:53:30.219029 kernel: HOME=/ Dec 13 01:53:30.219048 kernel: TERM=linux Dec 13 01:53:30.219065 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:53:30.219088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:53:30.219111 systemd[1]: Detected virtualization amazon. Dec 13 01:53:30.219131 systemd[1]: Detected architecture arm64. Dec 13 01:53:30.219155 systemd[1]: Running in initrd. Dec 13 01:53:30.219175 systemd[1]: No hostname configured, using default hostname. Dec 13 01:53:30.219208 systemd[1]: Hostname set to <localhost>. Dec 13 01:53:30.219235 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:53:30.219256 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:53:30.219276 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:30.219297 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:30.219319 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:53:30.219345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:53:30.219366 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:53:30.219387 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:53:30.219410 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:53:30.219431 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:53:30.219470 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:30.219494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:30.219520 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:53:30.219541 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:53:30.219561 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:53:30.219581 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:53:30.219601 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:53:30.219621 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:53:30.219641 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:53:30.219661 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:53:30.219682 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:30.219707 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:30.219727 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:30.219747 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:53:30.219767 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:53:30.219787 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:53:30.219808 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:53:30.219828 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:53:30.219848 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:53:30.219872 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:53:30.219893 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:30.219913 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:53:30.219933 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:30.219990 systemd-journald[251]: Collecting audit messages is disabled. Dec 13 01:53:30.220039 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:53:30.220062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:53:30.220082 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:53:30.220102 systemd-journald[251]: Journal started Dec 13 01:53:30.220144 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2454ba6371bc3be1ef2f05ead30722) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:53:30.186795 systemd-modules-load[252]: Inserted module 'overlay' Dec 13 01:53:30.231470 kernel: Bridge firewalling registered Dec 13 01:53:30.231558 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:53:30.227690 systemd-modules-load[252]: Inserted module 'br_netfilter' Dec 13 01:53:30.234237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:30.240116 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:30.244774 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:30.257567 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:53:30.268518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:53:30.275513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:53:30.288174 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:53:30.314703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:30.315415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:30.324288 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:30.330825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:53:30.351946 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:30.358839 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Dec 13 01:53:30.381522 dracut-cmdline[287]: dracut-dracut-053 Dec 13 01:53:30.388134 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:53:30.436010 systemd-resolved[289]: Positive Trust Anchors: Dec 13 01:53:30.436044 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:30.436108 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:53:30.550233 kernel: SCSI subsystem initialized Dec 13 01:53:30.558242 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:53:30.572284 kernel: iscsi: registered transport (tcp) Dec 13 01:53:30.595238 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:53:30.595308 kernel: QLogic iSCSI HBA Driver Dec 13 01:53:30.674256 kernel: random: crng init done Dec 13 01:53:30.674481 systemd-resolved[289]: Defaulting to hostname 'linux'. Dec 13 01:53:30.678377 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:53:30.683024 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:30.712275 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:53:30.720500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:53:30.760002 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:53:30.760082 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:53:30.762306 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:53:30.849240 kernel: raid6: neonx8 gen() 6606 MB/s Dec 13 01:53:30.850290 kernel: raid6: neonx4 gen() 6427 MB/s Dec 13 01:53:30.867268 kernel: raid6: neonx2 gen() 5378 MB/s Dec 13 01:53:30.884257 kernel: raid6: neonx1 gen() 3902 MB/s Dec 13 01:53:30.901263 kernel: raid6: int64x8 gen() 3750 MB/s Dec 13 01:53:30.918252 kernel: raid6: int64x4 gen() 3667 MB/s Dec 13 01:53:30.935258 kernel: raid6: int64x2 gen() 3581 MB/s Dec 13 01:53:30.953028 kernel: raid6: int64x1 gen() 2761 MB/s Dec 13 01:53:30.953096 kernel: raid6: using algorithm neonx8 gen() 6606 MB/s Dec 13 01:53:30.971032 kernel: raid6: .... 
xor() 4887 MB/s, rmw enabled Dec 13 01:53:30.971108 kernel: raid6: using neon recovery algorithm Dec 13 01:53:30.979654 kernel: xor: measuring software checksum speed Dec 13 01:53:30.979733 kernel: 8regs : 11031 MB/sec Dec 13 01:53:30.980759 kernel: 32regs : 11892 MB/sec Dec 13 01:53:30.981953 kernel: arm64_neon : 9566 MB/sec Dec 13 01:53:30.982005 kernel: xor: using function: 32regs (11892 MB/sec) Dec 13 01:53:31.070280 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:53:31.093267 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:53:31.104553 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:53:31.149289 systemd-udevd[471]: Using default interface naming scheme 'v255'. Dec 13 01:53:31.159706 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:31.171566 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:53:31.212872 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Dec 13 01:53:31.281007 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:53:31.290516 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:53:31.432099 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:31.451689 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:53:31.492182 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:53:31.501544 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:53:31.504456 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:31.510016 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:53:31.534058 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:53:31.580297 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:53:31.664315 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Dec 13 01:53:31.664392 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Dec 13 01:53:31.705953 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Dec 13 01:53:31.705996 kernel: nvme nvme0: pci function 0000:00:04.0 Dec 13 01:53:31.706467 kernel: ena 0000:00:05.0: ENA device version: 0.10 Dec 13 01:53:31.706766 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Dec 13 01:53:31.707106 kernel: nvme nvme0: 2/0/0 default/read/poll queues Dec 13 01:53:31.707394 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:91:e0:10:a9:41 Dec 13 01:53:31.668342 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:53:31.668601 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:31.671501 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:53:31.676359 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:53:31.676659 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:31.678975 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:31.691712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:31.733254 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Dec 13 01:53:31.733333 kernel: GPT:9289727 != 16777215 Dec 13 01:53:31.733361 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 01:53:31.734406 kernel: GPT:9289727 != 16777215 Dec 13 01:53:31.734496 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 01:53:31.736331 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:53:31.740906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:31.748724 (udev-worker)[519]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:31.755631 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:53:31.828253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:31.865706 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (519) Dec 13 01:53:31.876243 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (526) Dec 13 01:53:31.938033 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Dec 13 01:53:31.976976 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Dec 13 01:53:32.020111 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Dec 13 01:53:32.022877 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Dec 13 01:53:32.042701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:53:32.048581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:53:32.071244 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:53:32.072463 disk-uuid[660]: Primary Header is updated. Dec 13 01:53:32.072463 disk-uuid[660]: Secondary Entries is updated. Dec 13 01:53:32.072463 disk-uuid[660]: Secondary Header is updated. Dec 13 01:53:32.098238 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:53:32.105239 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:53:33.115225 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Dec 13 01:53:33.116287 disk-uuid[661]: The operation has completed successfully. Dec 13 01:53:33.310481 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:53:33.312404 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:53:33.349153 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:53:33.357767 sh[1004]: Success Dec 13 01:53:33.382259 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:53:33.504653 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:53:33.519408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:53:33.526345 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
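The GPT warnings above ("Alternate GPT header not at the end of the disk", 9289727 != 16777215) are the usual symptom of a disk image written to a larger EBS volume: the backup GPT header still sits where the image ended rather than at the end of the device. The disk-uuid.service run that follows rewrites both headers ("Primary Header is updated" / "Secondary Header is updated"). Done by hand, the repair the kernel hints at would look roughly like this sketch, which simply shells out to sgdisk; the device path is the one from this log.

    # Hedged sketch of the manual fix the kernel message suggests:
    # relocate the backup GPT structures to the true end of the disk.
    # On Flatcar, disk-uuid.service takes care of this on first boot.
    import subprocess

    def relocate_backup_gpt(device: str = "/dev/nvme0n1") -> None:
        # sgdisk -e moves the backup header/table to the end of the disk
        subprocess.run(["sgdisk", "-e", device], check=True)

    relocate_backup_gpt()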
Dec 13 01:53:33.574408 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:53:33.574473 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:53:33.576215 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:53:33.577453 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:53:33.578501 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:53:33.608217 kernel: BTRFS info (device dm-0): enabling ssd optimizations Dec 13 01:53:33.611356 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:53:33.615313 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:53:33.623533 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:53:33.634694 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:53:33.667941 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:53:33.668030 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:53:33.668060 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:53:33.688280 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:53:33.705169 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:53:33.708166 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:53:33.719900 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:53:33.731595 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:53:33.811229 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:53:33.822556 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:53:33.894085 systemd-networkd[1206]: lo: Link UP Dec 13 01:53:33.894106 systemd-networkd[1206]: lo: Gained carrier Dec 13 01:53:33.902933 systemd-networkd[1206]: Enumeration completed Dec 13 01:53:33.903101 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:53:33.906836 systemd[1]: Reached target network.target - Network. Dec 13 01:53:33.911779 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:33.911786 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:33.935068 systemd-networkd[1206]: eth0: Link UP Dec 13 01:53:33.935087 systemd-networkd[1206]: eth0: Gained carrier Dec 13 01:53:33.935107 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
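The dm-0 device mounted at /sysusr/usr above is the dm-verity mapping that verity-setup.service opened from the verity.usr= partition and verity.usrhash= root hash on the kernel command line. Conceptually the step reduces to a single veritysetup call, sketched below with the values from this log; it is illustrative only, since Flatcar stores the hash tree on the USR partition itself and the real invocation therefore also carries offset options omitted here.

    # Conceptual sketch of verity-setup.service. Partition UUID and root
    # hash are copied from the kernel command line above; the hash-offset
    # options Flatcar actually needs are deliberately omitted.
    import subprocess

    USR = "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132"
    ROOT_HASH = ("9494f75a68cfbdce95d0d2f9b58d6d75"
                 "bc38ee5b4e31dfc2a6da695ffafefba6")

    # veritysetup open <data_dev> <name> <hash_dev> <root_hash>
    subprocess.run(["veritysetup", "open", USR, "usr", USR, ROOT_HASH],
                   check=True)  # result: /dev/mapper/usr, mounted read-only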
Dec 13 01:53:33.956076 ignition[1131]: Ignition 2.19.0 Dec 13 01:53:33.956105 ignition[1131]: Stage: fetch-offline Dec 13 01:53:33.957311 ignition[1131]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:33.959296 systemd-networkd[1206]: eth0: DHCPv4 address 172.31.22.156/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:53:33.957620 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:33.961673 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:53:33.958564 ignition[1131]: Ignition finished successfully Dec 13 01:53:33.975017 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:53:34.010860 ignition[1218]: Ignition 2.19.0 Dec 13 01:53:34.010888 ignition[1218]: Stage: fetch Dec 13 01:53:34.012667 ignition[1218]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:34.012693 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:34.013027 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.023116 ignition[1218]: PUT result: OK Dec 13 01:53:34.029327 ignition[1218]: parsed url from cmdline: "" Dec 13 01:53:34.029344 ignition[1218]: no config URL provided Dec 13 01:53:34.029362 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:53:34.029389 ignition[1218]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:53:34.029423 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.033483 ignition[1218]: PUT result: OK Dec 13 01:53:34.033605 ignition[1218]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Dec 13 01:53:34.040741 ignition[1218]: GET result: OK Dec 13 01:53:34.041137 ignition[1218]: parsing config with SHA512: 4c62bec35714059761883d95ef9b52fda54cba878e5035d471454aa46e2541dc8adcb3156b4da5a5c24d75cc488cca58ccb54df7686b03130e1e3a512d0b9050 Dec 13 01:53:34.049998 unknown[1218]: fetched base config from "system" Dec 13 01:53:34.050739 ignition[1218]: fetch: fetch complete Dec 13 01:53:34.050024 unknown[1218]: fetched base config from "system" Dec 13 01:53:34.050750 ignition[1218]: fetch: fetch passed Dec 13 01:53:34.050037 unknown[1218]: fetched user config from "aws" Dec 13 01:53:34.050832 ignition[1218]: Ignition finished successfully Dec 13 01:53:34.056474 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:53:34.086634 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:53:34.110300 ignition[1224]: Ignition 2.19.0 Dec 13 01:53:34.110329 ignition[1224]: Stage: kargs Dec 13 01:53:34.110948 ignition[1224]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:34.110999 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:34.111145 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.117463 ignition[1224]: PUT result: OK Dec 13 01:53:34.125364 ignition[1224]: kargs: kargs passed Dec 13 01:53:34.125533 ignition[1224]: Ignition finished successfully Dec 13 01:53:34.129805 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:53:34.137518 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 01:53:34.175126 ignition[1231]: Ignition 2.19.0 Dec 13 01:53:34.175152 ignition[1231]: Stage: disks Dec 13 01:53:34.176845 ignition[1231]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:34.176873 ignition[1231]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:34.177038 ignition[1231]: PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.180483 ignition[1231]: PUT result: OK Dec 13 01:53:34.189488 ignition[1231]: disks: disks passed Dec 13 01:53:34.189641 ignition[1231]: Ignition finished successfully Dec 13 01:53:34.193871 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:53:34.198021 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:53:34.200391 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:53:34.204259 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:53:34.206126 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:53:34.208286 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:53:34.222497 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:53:34.277696 systemd-fsck[1239]: ROOT: clean, 14/553520 files, 52654/553472 blocks Dec 13 01:53:34.282697 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:53:34.295569 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:53:34.371235 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:53:34.372018 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:53:34.375988 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:53:34.396384 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:53:34.402156 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:53:34.404425 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 13 01:53:34.404509 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:53:34.404558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:53:34.427234 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1258) Dec 13 01:53:34.432241 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:53:34.432299 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:53:34.432327 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:53:34.439392 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:53:34.452240 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:53:34.461580 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:53:34.468082 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:53:34.550109 initrd-setup-root[1282]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:53:34.559089 initrd-setup-root[1289]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:53:34.568122 initrd-setup-root[1296]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:53:34.576771 initrd-setup-root[1303]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:53:34.749743 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:53:34.759399 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:53:34.771559 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:53:34.787670 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:53:34.790187 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:53:34.827638 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:53:34.836006 ignition[1371]: INFO : Ignition 2.19.0 Dec 13 01:53:34.836006 ignition[1371]: INFO : Stage: mount Dec 13 01:53:34.839180 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:34.839180 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:34.843340 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.846089 ignition[1371]: INFO : PUT result: OK Dec 13 01:53:34.851008 ignition[1371]: INFO : mount: mount passed Dec 13 01:53:34.854677 ignition[1371]: INFO : Ignition finished successfully Dec 13 01:53:34.853259 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:53:34.869421 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:53:34.893588 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:53:34.918222 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1382) Dec 13 01:53:34.922306 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:53:34.922358 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:53:34.922386 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 01:53:34.929241 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 01:53:34.932704 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:53:34.968112 ignition[1399]: INFO : Ignition 2.19.0 Dec 13 01:53:34.968112 ignition[1399]: INFO : Stage: files Dec 13 01:53:34.972152 ignition[1399]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:34.972152 ignition[1399]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:34.972152 ignition[1399]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:34.972152 ignition[1399]: INFO : PUT result: OK Dec 13 01:53:34.982110 ignition[1399]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:53:34.985005 ignition[1399]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:53:34.985005 ignition[1399]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:53:34.992348 ignition[1399]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:53:34.995228 ignition[1399]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:53:34.998040 unknown[1399]: wrote ssh authorized keys file for user: core Dec 13 01:53:35.000893 ignition[1399]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:53:35.003340 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:53:35.003340 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:53:35.003340 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:53:35.003340 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:53:35.021544 systemd-networkd[1206]: eth0: Gained IPv6LL Dec 13 01:53:35.115934 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:53:35.236750 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:53:35.241332 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:35.267155 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:53:35.722387 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:53:36.129700 ignition[1399]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:36.129700 ignition[1399]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:36.136248 ignition[1399]: INFO : files: files passed Dec 13 01:53:36.136248 ignition[1399]: INFO : Ignition finished successfully Dec 13 01:53:36.174164 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:53:36.182524 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:53:36.189496 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Dec 13 01:53:36.201740 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:53:36.202066 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:53:36.226594 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:36.226594 initrd-setup-root-after-ignition[1427]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:36.234145 initrd-setup-root-after-ignition[1431]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:36.238887 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:36.241917 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:53:36.255540 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:53:36.300097 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:53:36.302284 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:53:36.305364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:53:36.305793 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:53:36.306464 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:53:36.326610 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:53:36.353751 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:36.364583 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:53:36.396097 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:36.398768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:36.403023 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:53:36.406489 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:53:36.406724 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:53:36.414716 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:53:36.416966 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:53:36.421786 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:53:36.423885 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:53:36.426147 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:53:36.428730 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:53:36.438841 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:53:36.443184 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:53:36.447448 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:53:36.450672 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:53:36.454543 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:53:36.454785 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:53:36.458071 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:36.465921 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 01:53:36.468448 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:53:36.470323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:36.473169 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:53:36.473594 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:53:36.483565 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:53:36.484102 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:36.488222 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:53:36.488440 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:53:36.507448 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:53:36.516697 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:53:36.525177 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:53:36.527334 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:36.539177 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:53:36.541837 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:53:36.546404 ignition[1451]: INFO : Ignition 2.19.0 Dec 13 01:53:36.553588 ignition[1451]: INFO : Stage: umount Dec 13 01:53:36.553588 ignition[1451]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:53:36.553588 ignition[1451]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 01:53:36.553588 ignition[1451]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 01:53:36.553588 ignition[1451]: INFO : PUT result: OK Dec 13 01:53:36.574965 ignition[1451]: INFO : umount: umount passed Dec 13 01:53:36.574965 ignition[1451]: INFO : Ignition finished successfully Dec 13 01:53:36.562474 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:53:36.562695 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:53:36.574587 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:53:36.574776 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:53:36.587557 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:53:36.589297 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:53:36.589410 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:53:36.594830 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:53:36.594923 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:53:36.596857 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:53:36.596937 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:53:36.599841 systemd[1]: Stopped target network.target - Network. Dec 13 01:53:36.612607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:53:36.614311 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:53:36.624129 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:53:36.625901 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:53:36.630390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Dec 13 01:53:36.632660 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:53:36.634311 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:53:36.636120 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:53:36.636215 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:53:36.638065 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:53:36.638134 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:53:36.639997 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:53:36.640080 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:53:36.641926 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:53:36.641999 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:53:36.645811 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:53:36.649370 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:53:36.651582 systemd-networkd[1206]: eth0: DHCPv6 lease lost Dec 13 01:53:36.662473 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:53:36.662808 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:53:36.668132 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:53:36.668471 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:53:36.676772 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:53:36.676884 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:36.715728 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:53:36.719423 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:53:36.719560 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:53:36.731090 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:53:36.731192 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:36.735600 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:53:36.735703 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:36.739638 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:53:36.739733 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:36.740055 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:53:36.773660 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:53:36.775356 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:53:36.783575 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:53:36.784050 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:36.799116 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:53:36.799253 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:36.801482 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:53:36.801563 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:36.803680 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:53:36.804190 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Dec 13 01:53:36.807792 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:53:36.807894 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:53:36.809980 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:53:36.810064 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:53:36.812497 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:53:36.812577 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:53:36.828610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:53:36.833454 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:53:36.833574 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:36.838827 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:53:36.838937 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:36.844173 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:53:36.844289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:36.848855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:53:36.848957 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:36.853051 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:53:36.853390 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:53:36.863755 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:53:36.866384 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:53:36.875337 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:53:36.892591 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:53:36.942915 systemd[1]: Switching root. Dec 13 01:53:36.977906 systemd-journald[251]: Journal stopped Dec 13 01:53:38.765066 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Dec 13 01:53:38.765255 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:53:38.765389 kernel: SELinux: policy capability open_perms=1 Dec 13 01:53:38.765424 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:53:38.765455 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:53:38.765487 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:53:38.765517 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:53:38.765547 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:53:38.766161 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:53:38.766238 kernel: audit: type=1403 audit(1734054817.283:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:53:38.766277 systemd[1]: Successfully loaded SELinux policy in 48.593ms. Dec 13 01:53:38.766332 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.895ms. 
Dec 13 01:53:38.766370 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:53:38.766445 systemd[1]: Detected virtualization amazon. Dec 13 01:53:38.766483 systemd[1]: Detected architecture arm64. Dec 13 01:53:38.766514 systemd[1]: Detected first boot. Dec 13 01:53:38.766548 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:53:38.766580 zram_generator::config[1514]: No configuration found. Dec 13 01:53:38.766619 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:53:38.766660 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:53:38.766692 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Dec 13 01:53:38.766734 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:53:38.766769 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:53:38.766798 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:53:38.766831 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:53:38.766864 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:53:38.766901 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:53:38.766931 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:53:38.766964 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:53:38.766997 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:53:38.767026 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:53:38.767058 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:53:38.767089 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:53:38.767122 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:53:38.767156 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:53:38.767189 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:53:38.768620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:53:38.768661 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:53:38.768695 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:53:38.768728 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:53:38.768760 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:53:38.768793 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:53:38.768825 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:53:38.768863 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:53:38.768895 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Dec 13 01:53:38.768927 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:53:38.768956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:53:38.768985 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:53:38.769018 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:53:38.769047 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:53:38.769079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:53:38.769112 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:53:38.769146 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:53:38.769176 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:53:38.769237 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:53:38.769275 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:53:38.769306 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:53:38.769338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:38.769368 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:53:38.769399 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:53:38.769430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:38.769467 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:38.769499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:38.769532 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:53:38.769563 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:38.769595 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:53:38.769626 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 01:53:38.769662 kernel: ACPI: bus type drm_connector registered Dec 13 01:53:38.769694 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Dec 13 01:53:38.769732 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:53:38.769761 kernel: fuse: init (API version 7.39) Dec 13 01:53:38.769789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:53:38.769818 kernel: loop: module loaded Dec 13 01:53:38.769847 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:53:38.769878 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:53:38.769908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:53:38.769940 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:53:38.769971 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:53:38.770064 systemd-journald[1611]: Collecting audit messages is disabled. Dec 13 01:53:38.770118 systemd[1]: Mounted media.mount - External Media Directory. 
Dec 13 01:53:38.770149 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:53:38.770178 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:53:38.774295 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:53:38.774357 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:53:38.774389 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:53:38.774430 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:53:38.774460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:38.774492 systemd-journald[1611]: Journal started Dec 13 01:53:38.774544 systemd-journald[1611]: Runtime Journal (/run/log/journal/ec2454ba6371bc3be1ef2f05ead30722) is 8.0M, max 75.3M, 67.3M free. Dec 13 01:53:38.778193 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:38.785667 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:53:38.788091 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:38.789673 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:38.793191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:38.793607 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:38.796887 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:53:38.798614 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:53:38.801555 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:38.802083 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:38.805188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:53:38.808299 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:53:38.811229 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:53:38.814402 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:53:38.845399 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:53:38.860590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:53:38.869550 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:53:38.872387 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:53:38.886537 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:53:38.899060 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:53:38.901423 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:38.917644 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:53:38.922396 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:53:38.926697 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Dec 13 01:53:38.944274 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:53:38.960872 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:53:38.966032 systemd-journald[1611]: Time spent on flushing to /var/log/journal/ec2454ba6371bc3be1ef2f05ead30722 is 97.120ms for 896 entries. Dec 13 01:53:38.966032 systemd-journald[1611]: System Journal (/var/log/journal/ec2454ba6371bc3be1ef2f05ead30722) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:53:39.081518 systemd-journald[1611]: Received client request to flush runtime journal. Dec 13 01:53:38.967656 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:53:39.001035 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:53:39.006395 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:53:39.058978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:53:39.070567 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:53:39.081927 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:53:39.096614 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:53:39.107729 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 13 01:53:39.107770 systemd-tmpfiles[1662]: ACLs are not supported, ignoring. Dec 13 01:53:39.129354 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:53:39.141922 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:53:39.150766 udevadm[1673]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:53:39.212919 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:53:39.224525 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:53:39.261746 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Dec 13 01:53:39.262339 systemd-tmpfiles[1684]: ACLs are not supported, ignoring. Dec 13 01:53:39.270858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:53:39.992825 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:53:40.003548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:53:40.061721 systemd-udevd[1690]: Using default interface naming scheme 'v255'. Dec 13 01:53:40.107061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:53:40.124145 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:53:40.151530 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:53:40.244435 (udev-worker)[1708]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:40.250975 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Dec 13 01:53:40.300383 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Dec 13 01:53:40.329394 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1706) Dec 13 01:53:40.359329 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1706) Dec 13 01:53:40.476422 systemd-networkd[1694]: lo: Link UP Dec 13 01:53:40.476907 systemd-networkd[1694]: lo: Gained carrier Dec 13 01:53:40.479983 systemd-networkd[1694]: Enumeration completed Dec 13 01:53:40.480381 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:53:40.482984 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:40.483000 systemd-networkd[1694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:53:40.485914 systemd-networkd[1694]: eth0: Link UP Dec 13 01:53:40.486410 systemd-networkd[1694]: eth0: Gained carrier Dec 13 01:53:40.486611 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:53:40.526398 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1698) Dec 13 01:53:40.544502 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:53:40.562653 systemd-networkd[1694]: eth0: DHCPv4 address 172.31.22.156/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 01:53:40.623835 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:53:40.748810 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:53:40.790713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Dec 13 01:53:40.806483 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:53:40.824127 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:53:40.839439 lvm[1816]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:40.882257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:53:40.885479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:53:40.895584 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:53:40.916185 lvm[1822]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:53:40.956039 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:53:40.959248 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:53:40.961729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:53:40.961945 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:53:40.964220 systemd[1]: Reached target machines.target - Containers. Dec 13 01:53:40.968103 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:53:40.984576 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:53:40.990478 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Dec 13 01:53:40.993690 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:41.003615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:53:41.010582 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:53:41.024367 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:53:41.031586 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:53:41.064916 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:53:41.068772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:53:41.072661 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:53:41.087446 kernel: loop0: detected capacity change from 0 to 114328 Dec 13 01:53:41.128713 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:53:41.152253 kernel: loop1: detected capacity change from 0 to 52536 Dec 13 01:53:41.271227 kernel: loop2: detected capacity change from 0 to 114432 Dec 13 01:53:41.327236 kernel: loop3: detected capacity change from 0 to 194512 Dec 13 01:53:41.430989 kernel: loop4: detected capacity change from 0 to 114328 Dec 13 01:53:41.452236 kernel: loop5: detected capacity change from 0 to 52536 Dec 13 01:53:41.469421 kernel: loop6: detected capacity change from 0 to 114432 Dec 13 01:53:41.496259 kernel: loop7: detected capacity change from 0 to 194512 Dec 13 01:53:41.522947 (sd-merge)[1843]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Dec 13 01:53:41.526636 (sd-merge)[1843]: Merged extensions into '/usr'. Dec 13 01:53:41.533587 systemd[1]: Reloading requested from client PID 1830 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:53:41.533821 systemd[1]: Reloading... Dec 13 01:53:41.612450 systemd-networkd[1694]: eth0: Gained IPv6LL Dec 13 01:53:41.672234 ldconfig[1827]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:53:41.688235 zram_generator::config[1872]: No configuration found. Dec 13 01:53:41.942243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:42.086145 systemd[1]: Reloading finished in 551 ms. Dec 13 01:53:42.111837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:53:42.115107 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:53:42.118145 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:53:42.135571 systemd[1]: Starting ensure-sysext.service... Dec 13 01:53:42.151632 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:53:42.167126 systemd[1]: Reloading requested from client PID 1932 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:53:42.167150 systemd[1]: Reloading... Dec 13 01:53:42.202977 systemd-tmpfiles[1933]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Dec 13 01:53:42.204871 systemd-tmpfiles[1933]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:53:42.206980 systemd-tmpfiles[1933]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:53:42.207917 systemd-tmpfiles[1933]: ACLs are not supported, ignoring. Dec 13 01:53:42.208267 systemd-tmpfiles[1933]: ACLs are not supported, ignoring. Dec 13 01:53:42.218054 systemd-tmpfiles[1933]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:42.218075 systemd-tmpfiles[1933]: Skipping /boot Dec 13 01:53:42.242167 systemd-tmpfiles[1933]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:53:42.242446 systemd-tmpfiles[1933]: Skipping /boot Dec 13 01:53:42.331232 zram_generator::config[1962]: No configuration found. Dec 13 01:53:42.574971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:53:42.716500 systemd[1]: Reloading finished in 548 ms. Dec 13 01:53:42.740469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:53:42.765548 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:42.780521 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:53:42.795122 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:53:42.805950 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:53:42.814858 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:53:42.838972 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:42.852220 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:42.869682 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:42.886589 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:42.888754 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:42.905986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:42.907696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:42.911141 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:53:42.918440 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:53:42.927947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:42.933532 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:42.944641 augenrules[2049]: No rules Dec 13 01:53:42.945085 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:42.945493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:42.954916 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Dec 13 01:53:42.964447 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:42.965047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:42.989867 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:53:42.996767 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:53:43.008734 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:53:43.016848 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:53:43.030411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:53:43.035616 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:53:43.036665 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:53:43.058662 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:53:43.074151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:53:43.077667 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:53:43.081813 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:53:43.082176 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:53:43.090706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:53:43.091094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:53:43.112970 systemd[1]: Finished ensure-sysext.service. Dec 13 01:53:43.124755 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:53:43.133326 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:53:43.133712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:53:43.145334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:53:43.145497 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:53:43.145548 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 01:53:43.156138 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:53:43.163282 systemd-resolved[2030]: Positive Trust Anchors: Dec 13 01:53:43.163320 systemd-resolved[2030]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:53:43.163405 systemd-resolved[2030]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:53:43.172917 systemd-resolved[2030]: Defaulting to hostname 'linux'. 
Dec 13 01:53:43.176306 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:53:43.178562 systemd[1]: Reached target network.target - Network. Dec 13 01:53:43.180310 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:53:43.182301 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:53:43.184528 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:53:43.186676 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:53:43.189013 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:53:43.191704 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:53:43.193945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:53:43.196304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:53:43.198635 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:53:43.198684 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:53:43.200414 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:53:43.204307 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:53:43.209997 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:53:43.214239 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:53:43.217313 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:53:43.219474 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:53:43.221339 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:53:43.223385 systemd[1]: System is tainted: cgroupsv1 Dec 13 01:53:43.223454 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:43.223498 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:53:43.227390 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:53:43.235939 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:53:43.248570 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:53:43.260987 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:53:43.277779 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 01:53:43.281075 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:53:43.304236 jq[2090]: false Dec 13 01:53:43.295487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:43.308248 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:53:43.329509 systemd[1]: Started ntpd.service - Network Time Service. Dec 13 01:53:43.348476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:53:43.362395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:53:43.369644 systemd[1]: Starting setup-oem.service - Setup OEM... 
Dec 13 01:53:43.385527 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:53:43.406555 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:53:43.412834 dbus-daemon[2088]: [system] SELinux support is enabled Dec 13 01:53:43.428432 dbus-daemon[2088]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1694 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:43.433271 extend-filesystems[2091]: Found loop4 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found loop5 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found loop6 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found loop7 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p1 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p2 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p3 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found usr Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p4 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p6 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p7 Dec 13 01:53:43.433271 extend-filesystems[2091]: Found nvme0n1p9 Dec 13 01:53:43.433271 extend-filesystems[2091]: Checking size of /dev/nvme0n1p9 Dec 13 01:53:43.446464 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:53:43.462427 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: ---------------------------------------------------- Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: corporation. Support and training for ntp-4 are Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: available at https://www.nwtime.org/support Dec 13 01:53:43.473962 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: ---------------------------------------------------- Dec 13 01:53:43.472765 ntpd[2098]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting Dec 13 01:53:43.472810 ntpd[2098]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Dec 13 01:53:43.472831 ntpd[2098]: ---------------------------------------------------- Dec 13 01:53:43.472850 ntpd[2098]: ntp-4 is maintained by Network Time Foundation, Dec 13 01:53:43.472869 ntpd[2098]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Dec 13 01:53:43.472887 ntpd[2098]: corporation. Support and training for ntp-4 are
Dec 13 01:53:43.472906 ntpd[2098]: available at https://www.nwtime.org/support Dec 13 01:53:43.472925 ntpd[2098]: ---------------------------------------------------- Dec 13 01:53:43.476695 ntpd[2098]: proto: precision = 0.096 usec (-23) Dec 13 01:53:43.480392 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: proto: precision = 0.096 usec (-23) Dec 13 01:53:43.480392 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: basedate set to 2024-11-30 Dec 13 01:53:43.480392 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: gps base set to 2024-12-01 (week 2343) Dec 13 01:53:43.477113 ntpd[2098]: basedate set to 2024-11-30 Dec 13 01:53:43.477139 ntpd[2098]: gps base set to 2024-12-01 (week 2343) Dec 13 01:53:43.484766 ntpd[2098]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:53:43.485255 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen and drop on 0 v6wildcard [::]:123 Dec 13 01:53:43.485255 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:53:43.485008 ntpd[2098]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Dec 13 01:53:43.485597 ntpd[2098]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:53:43.485738 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen normally on 2 lo 127.0.0.1:123 Dec 13 01:53:43.485882 ntpd[2098]: Listen normally on 3 eth0 172.31.22.156:123 Dec 13 01:53:43.486020 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen normally on 3 eth0 172.31.22.156:123 Dec 13 01:53:43.486146 ntpd[2098]: Listen normally on 4 lo [::1]:123 Dec 13 01:53:43.486293 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen normally on 4 lo [::1]:123 Dec 13 01:53:43.486431 ntpd[2098]: Listen normally on 5 eth0 [fe80::491:e0ff:fe10:a941%2]:123 Dec 13 01:53:43.486538 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listen normally on 5 eth0 [fe80::491:e0ff:fe10:a941%2]:123 Dec 13 01:53:43.486663 ntpd[2098]: Listening on routing socket on fd #22 for interface updates Dec 13 01:53:43.486799 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: Listening on routing socket on fd #22 for interface updates Dec 13 01:53:43.489981 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:53:43.492677 ntpd[2098]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:43.497723 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:43.498274 ntpd[2098]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:43.498780 ntpd[2098]: 13 Dec 01:53:43 ntpd[2098]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Dec 13 01:53:43.504652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:53:43.541045 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:53:43.558417 extend-filesystems[2091]: Resized partition /dev/nvme0n1p9 Dec 13 01:53:43.557764 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:53:43.558308 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:53:43.564808 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:53:43.569461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:53:43.580413 jq[2123]: true Dec 13 01:53:43.597471 extend-filesystems[2133]: resize2fs 1.47.1 (20-May-2024) Dec 13 01:53:43.617702 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 01:53:43.616363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:53:43.616934 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:53:43.656748 coreos-metadata[2087]: Dec 13 01:53:43.656 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:43.667301 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:53:43.684056 coreos-metadata[2087]: Dec 13 01:53:43.675 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Dec 13 01:53:43.685297 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:53:43.695959 coreos-metadata[2087]: Dec 13 01:53:43.695 INFO Fetch successful Dec 13 01:53:43.695959 coreos-metadata[2087]: Dec 13 01:53:43.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Dec 13 01:53:43.699798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:53:43.699876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:53:43.710483 coreos-metadata[2087]: Dec 13 01:53:43.710 INFO Fetch successful Dec 13 01:53:43.710483 coreos-metadata[2087]: Dec 13 01:53:43.710 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Dec 13 01:53:43.712514 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 01:53:43.714560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:53:43.714613 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:53:43.718504 coreos-metadata[2087]: Dec 13 01:53:43.718 INFO Fetch successful Dec 13 01:53:43.718504 coreos-metadata[2087]: Dec 13 01:53:43.718 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Dec 13 01:53:43.724340 update_engine[2120]: I20241213 01:53:43.722349 2120 main.cc:92] Flatcar Update Engine starting Dec 13 01:53:43.744375 jq[2138]: true Dec 13 01:53:43.742639 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:53:43.744789 coreos-metadata[2087]: Dec 13 01:53:43.727 INFO Fetch successful Dec 13 01:53:43.744789 coreos-metadata[2087]: Dec 13 01:53:43.727 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Dec 13 01:53:43.746193 coreos-metadata[2087]: Dec 13 01:53:43.745 INFO Fetch failed with 404: resource not found Dec 13 01:53:43.746193 coreos-metadata[2087]: Dec 13 01:53:43.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Dec 13 01:53:43.747952 update_engine[2120]: I20241213 01:53:43.746124 2120 update_check_scheduler.cc:74] Next update check in 11m43s Dec 13 01:53:43.749633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Dec 13 01:53:43.751525 coreos-metadata[2087]: Dec 13 01:53:43.751 INFO Fetch successful Dec 13 01:53:43.751525 coreos-metadata[2087]: Dec 13 01:53:43.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Dec 13 01:53:43.757371 coreos-metadata[2087]: Dec 13 01:53:43.753 INFO Fetch successful Dec 13 01:53:43.757371 coreos-metadata[2087]: Dec 13 01:53:43.753 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Dec 13 01:53:43.767191 coreos-metadata[2087]: Dec 13 01:53:43.766 INFO Fetch successful Dec 13 01:53:43.767191 coreos-metadata[2087]: Dec 13 01:53:43.767 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Dec 13 01:53:43.772441 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:53:43.776471 coreos-metadata[2087]: Dec 13 01:53:43.776 INFO Fetch successful Dec 13 01:53:43.776471 coreos-metadata[2087]: Dec 13 01:53:43.776 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Dec 13 01:53:43.789279 coreos-metadata[2087]: Dec 13 01:53:43.779 INFO Fetch successful Dec 13 01:53:43.789403 tar[2132]: linux-arm64/helm Dec 13 01:53:43.795937 (ntainerd)[2155]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:53:43.836396 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 01:53:43.861510 extend-filesystems[2133]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 01:53:43.861510 extend-filesystems[2133]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 01:53:43.861510 extend-filesystems[2133]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 01:53:43.872194 extend-filesystems[2091]: Resized filesystem in /dev/nvme0n1p9 Dec 13 01:53:43.887863 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:53:43.888418 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:53:43.909935 systemd[1]: Finished setup-oem.service - Setup OEM. Dec 13 01:53:43.919314 systemd-logind[2111]: Watching system buttons on /dev/input/event0 (Power Button) Dec 13 01:53:43.919379 systemd-logind[2111]: Watching system buttons on /dev/input/event1 (Sleep Button) Dec 13 01:53:43.919722 systemd-logind[2111]: New seat seat0. Dec 13 01:53:43.927619 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Dec 13 01:53:43.943516 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:53:43.990795 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:53:43.993622 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:53:44.016240 bash[2197]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:44.020879 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:53:44.098773 systemd[1]: Starting sshkeys.service... Dec 13 01:53:44.179273 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 01:53:44.202502 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (2199) Dec 13 01:53:44.204470 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: Initializing new seelog logger Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: New Seelog Logger Creation Complete Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 processing appconfig overrides Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 processing appconfig overrides Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO Proxy environment variables: Dec 13 01:53:44.238057 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 processing appconfig overrides Dec 13 01:53:44.255601 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.255601 amazon-ssm-agent[2182]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:44.255601 amazon-ssm-agent[2182]: 2024/12/13 01:53:44 processing appconfig overrides Dec 13 01:53:44.341313 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO https_proxy: Dec 13 01:53:44.440038 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO http_proxy: Dec 13 01:53:44.554427 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO no_proxy: Dec 13 01:53:44.555229 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 01:53:44.555557 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 01:53:44.562313 dbus-daemon[2088]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2154 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 01:53:44.567710 locksmithd[2161]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:53:44.571904 systemd[1]: Starting polkit.service - Authorization Manager... 
Dec 13 01:53:44.640317 coreos-metadata[2215]: Dec 13 01:53:44.638 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:44.641664 coreos-metadata[2215]: Dec 13 01:53:44.641 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:53:44.642460 coreos-metadata[2215]: Dec 13 01:53:44.642 INFO Fetch successful Dec 13 01:53:44.642460 coreos-metadata[2215]: Dec 13 01:53:44.642 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:53:44.643529 coreos-metadata[2215]: Dec 13 01:53:44.642 INFO Fetch successful Dec 13 01:53:44.646948 unknown[2215]: wrote ssh authorized keys file for user: core Dec 13 01:53:44.674785 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:53:44.739296 polkitd[2288]: Started polkitd version 121 Dec 13 01:53:44.747462 update-ssh-keys[2298]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:44.752618 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:53:44.775051 systemd[1]: Finished sshkeys.service. Dec 13 01:53:44.780573 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:53:44.816336 polkitd[2288]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:53:44.816488 polkitd[2288]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:53:44.821910 polkitd[2288]: Finished loading, compiling and executing 2 rules Dec 13 01:53:44.838784 dbus-daemon[2088]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:53:44.839263 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:53:44.846791 polkitd[2288]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:53:44.880331 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO Agent will take identity from EC2 Dec 13 01:53:44.914932 containerd[2155]: time="2024-12-13T01:53:44.914808684Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:53:44.924711 systemd-hostnamed[2154]: Hostname set to (transient) Dec 13 01:53:44.925289 systemd-resolved[2030]: System hostname changed to 'ip-172-31-22-156'. Dec 13 01:53:44.986116 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:45.085300 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:45.131813 containerd[2155]: time="2024-12-13T01:53:45.131709981Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.140753 containerd[2155]: time="2024-12-13T01:53:45.140676681Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.140922 containerd[2155]: time="2024-12-13T01:53:45.140893725Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:53:45.141050 containerd[2155]: time="2024-12-13T01:53:45.141023289Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:53:45.144230 containerd[2155]: time="2024-12-13T01:53:45.143366277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:53:45.144230 containerd[2155]: time="2024-12-13T01:53:45.143455557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.144795 containerd[2155]: time="2024-12-13T01:53:45.144527865Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.144795 containerd[2155]: time="2024-12-13T01:53:45.144594105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.149420 containerd[2155]: time="2024-12-13T01:53:45.147394281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.149420 containerd[2155]: time="2024-12-13T01:53:45.147448797Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.149420 containerd[2155]: time="2024-12-13T01:53:45.147506781Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.149420 containerd[2155]: time="2024-12-13T01:53:45.147535413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.149420 containerd[2155]: time="2024-12-13T01:53:45.147832077Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.150106 containerd[2155]: time="2024-12-13T01:53:45.150012237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.151216 containerd[2155]: time="2024-12-13T01:53:45.151158513Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.151357 containerd[2155]: time="2024-12-13T01:53:45.151311993Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:53:45.152361 containerd[2155]: time="2024-12-13T01:53:45.152330757Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:53:45.152583 containerd[2155]: time="2024-12-13T01:53:45.152557797Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.159837945Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.159939693Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.159974037Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.160100229Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.160139241Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:53:45.160552 containerd[2155]: time="2024-12-13T01:53:45.160422129Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.163703193Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.163988445Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164021901Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164051817Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164083653Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164112825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164141877Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.164173461Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165432825Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165480345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165510993Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165543621Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165584841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166331 containerd[2155]: time="2024-12-13T01:53:45.165619821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165649581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165680685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165721581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165754161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165782361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165812901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165844941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165878493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165907125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165937017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.165965901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.166007697Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.166053417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.166084941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.166966 containerd[2155]: time="2024-12-13T01:53:45.166111425Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.170911077Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171003321Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171140013Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171174345Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171243993Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171317493Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171362493Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:53:45.172556 containerd[2155]: time="2024-12-13T01:53:45.171411705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.176898 containerd[2155]: time="2024-12-13T01:53:45.172995621Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:53:45.176898 containerd[2155]: time="2024-12-13T01:53:45.173732205Z" level=info msg="Connect containerd service" Dec 13 01:53:45.176898 containerd[2155]: time="2024-12-13T01:53:45.176248953Z" level=info msg="using legacy CRI server" Dec 13 01:53:45.178565 containerd[2155]: time="2024-12-13T01:53:45.176273373Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:53:45.180232 containerd[2155]: time="2024-12-13T01:53:45.178887093Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:53:45.180232 containerd[2155]: time="2024-12-13T01:53:45.180147369Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:53:45.182992 containerd[2155]: time="2024-12-13T01:53:45.182899089Z" level=info msg="Start subscribing containerd event" Dec 13 01:53:45.183276 containerd[2155]: time="2024-12-13T01:53:45.183243465Z" level=info msg="Start recovering state" Dec 13 01:53:45.183428 containerd[2155]: time="2024-12-13T01:53:45.183378561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:53:45.183528 containerd[2155]: time="2024-12-13T01:53:45.183493149Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:53:45.183884 containerd[2155]: time="2024-12-13T01:53:45.183854241Z" level=info msg="Start event monitor" Dec 13 01:53:45.184015 containerd[2155]: time="2024-12-13T01:53:45.183989013Z" level=info msg="Start snapshots syncer" Dec 13 01:53:45.184153 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:45.200542 containerd[2155]: time="2024-12-13T01:53:45.197063649Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:53:45.200542 containerd[2155]: time="2024-12-13T01:53:45.197115681Z" level=info msg="Start streaming server" Dec 13 01:53:45.200542 containerd[2155]: time="2024-12-13T01:53:45.197329857Z" level=info msg="containerd successfully booted in 0.289664s" Dec 13 01:53:45.197840 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:53:45.288952 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:53:45.393279 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:53:45.492339 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:53:45.595360 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:53:45.610720 sshd_keygen[2139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:53:45.697184 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [Registrar] Starting registrar module Dec 13 01:53:45.711126 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:53:45.727824 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:53:45.750870 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:53:45.751955 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:53:45.768847 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:53:45.797439 amazon-ssm-agent[2182]: 2024-12-13 01:53:44 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:53:45.823967 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:53:45.843631 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:53:45.860171 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:53:45.866585 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:53:45.964703 tar[2132]: linux-arm64/LICENSE Dec 13 01:53:45.965798 tar[2132]: linux-arm64/README.md Dec 13 01:53:46.003836 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:53:46.263281 amazon-ssm-agent[2182]: 2024-12-13 01:53:46 INFO [EC2Identity] EC2 registration was successful. Dec 13 01:53:46.301170 amazon-ssm-agent[2182]: 2024-12-13 01:53:46 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:53:46.302749 amazon-ssm-agent[2182]: 2024-12-13 01:53:46 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:53:46.302749 amazon-ssm-agent[2182]: 2024-12-13 01:53:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:53:46.333622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:46.337986 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:53:46.341150 systemd[1]: Startup finished in 8.647s (kernel) + 9.104s (userspace) = 17.751s. Dec 13 01:53:46.349071 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:46.363226 amazon-ssm-agent[2182]: 2024-12-13 01:53:46 INFO [CredentialRefresher] Next credential rotation will be in 31.708299712433334 minutes Dec 13 01:53:47.330317 amazon-ssm-agent[2182]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:53:47.431239 amazon-ssm-agent[2182]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2383) started Dec 13 01:53:47.531552 amazon-ssm-agent[2182]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:53:47.657647 kubelet[2373]: E1213 01:53:47.657444 2373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:47.662450 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:47.663010 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:50.628676 systemd-resolved[2030]: Clock change detected. Flushing caches. Dec 13 01:53:51.178865 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:53:51.187770 systemd[1]: Started sshd@0-172.31.22.156:22-139.178.68.195:38846.service - OpenSSH per-connection server daemon (139.178.68.195:38846). Dec 13 01:53:51.375475 sshd[2397]: Accepted publickey for core from 139.178.68.195 port 38846 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:51.378643 sshd[2397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:51.396240 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:53:51.404719 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:53:51.409666 systemd-logind[2111]: New session 1 of user core. Dec 13 01:53:51.438583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:53:51.452715 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Dec 13 01:53:51.461520 (systemd)[2403]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:51.670315 systemd[2403]: Queued start job for default target default.target. Dec 13 01:53:51.670988 systemd[2403]: Created slice app.slice - User Application Slice. Dec 13 01:53:51.671042 systemd[2403]: Reached target paths.target - Paths. Dec 13 01:53:51.671073 systemd[2403]: Reached target timers.target - Timers. Dec 13 01:53:51.681385 systemd[2403]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:53:51.694625 systemd[2403]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:53:51.694743 systemd[2403]: Reached target sockets.target - Sockets. Dec 13 01:53:51.694775 systemd[2403]: Reached target basic.target - Basic System. Dec 13 01:53:51.694862 systemd[2403]: Reached target default.target - Main User Target. Dec 13 01:53:51.694922 systemd[2403]: Startup finished in 221ms. Dec 13 01:53:51.695697 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:53:51.709750 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:53:51.859987 systemd[1]: Started sshd@1-172.31.22.156:22-139.178.68.195:38860.service - OpenSSH per-connection server daemon (139.178.68.195:38860). Dec 13 01:53:52.037301 sshd[2415]: Accepted publickey for core from 139.178.68.195 port 38860 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:52.039830 sshd[2415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:52.047770 systemd-logind[2111]: New session 2 of user core. Dec 13 01:53:52.060788 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:53:52.189531 sshd[2415]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:52.195590 systemd[1]: sshd@1-172.31.22.156:22-139.178.68.195:38860.service: Deactivated successfully. Dec 13 01:53:52.201966 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:53:52.203474 systemd-logind[2111]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:53:52.205080 systemd-logind[2111]: Removed session 2. Dec 13 01:53:52.222676 systemd[1]: Started sshd@2-172.31.22.156:22-139.178.68.195:38866.service - OpenSSH per-connection server daemon (139.178.68.195:38866). Dec 13 01:53:52.388782 sshd[2423]: Accepted publickey for core from 139.178.68.195 port 38866 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:52.391694 sshd[2423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:52.400892 systemd-logind[2111]: New session 3 of user core. Dec 13 01:53:52.407819 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:53:52.532586 sshd[2423]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:52.538987 systemd[1]: sshd@2-172.31.22.156:22-139.178.68.195:38866.service: Deactivated successfully. Dec 13 01:53:52.545471 systemd-logind[2111]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:53:52.546253 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:53:52.548986 systemd-logind[2111]: Removed session 3. Dec 13 01:53:52.562760 systemd[1]: Started sshd@3-172.31.22.156:22-139.178.68.195:38882.service - OpenSSH per-connection server daemon (139.178.68.195:38882). 
Dec 13 01:53:52.735686 sshd[2431]: Accepted publickey for core from 139.178.68.195 port 38882 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:52.737907 sshd[2431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:52.749366 systemd-logind[2111]: New session 4 of user core. Dec 13 01:53:52.753782 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:53:52.886598 sshd[2431]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:52.891824 systemd[1]: sshd@3-172.31.22.156:22-139.178.68.195:38882.service: Deactivated successfully. Dec 13 01:53:52.898783 systemd-logind[2111]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:52.900733 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:52.902778 systemd-logind[2111]: Removed session 4. Dec 13 01:53:52.919719 systemd[1]: Started sshd@4-172.31.22.156:22-139.178.68.195:38894.service - OpenSSH per-connection server daemon (139.178.68.195:38894). Dec 13 01:53:53.085577 sshd[2439]: Accepted publickey for core from 139.178.68.195 port 38894 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:53.088632 sshd[2439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:53.096269 systemd-logind[2111]: New session 5 of user core. Dec 13 01:53:53.104734 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:53:53.223929 sudo[2443]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:53:53.224575 sudo[2443]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:53.239288 sudo[2443]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:53.262688 sshd[2439]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:53.271075 systemd[1]: sshd@4-172.31.22.156:22-139.178.68.195:38894.service: Deactivated successfully. Dec 13 01:53:53.276023 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:53.277575 systemd-logind[2111]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:53.279522 systemd-logind[2111]: Removed session 5. Dec 13 01:53:53.294720 systemd[1]: Started sshd@5-172.31.22.156:22-139.178.68.195:38910.service - OpenSSH per-connection server daemon (139.178.68.195:38910). Dec 13 01:53:53.464017 sshd[2448]: Accepted publickey for core from 139.178.68.195 port 38910 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:53.466600 sshd[2448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:53.473966 systemd-logind[2111]: New session 6 of user core. Dec 13 01:53:53.487705 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:53:53.593800 sudo[2453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:53:53.594502 sudo[2453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:53.600968 sudo[2453]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:53.610902 sudo[2452]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:53:53.611562 sudo[2452]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:53.631734 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Dec 13 01:53:53.648193 auditctl[2456]: No rules Dec 13 01:53:53.651120 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:53:53.651720 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:53.660523 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:53.709478 augenrules[2475]: No rules Dec 13 01:53:53.712866 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:53.717727 sudo[2452]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:53.743384 sshd[2448]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:53.750493 systemd[1]: sshd@5-172.31.22.156:22-139.178.68.195:38910.service: Deactivated successfully. Dec 13 01:53:53.750999 systemd-logind[2111]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:53.756748 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:53.759127 systemd-logind[2111]: Removed session 6. Dec 13 01:53:53.769720 systemd[1]: Started sshd@6-172.31.22.156:22-139.178.68.195:38912.service - OpenSSH per-connection server daemon (139.178.68.195:38912). Dec 13 01:53:53.943148 sshd[2484]: Accepted publickey for core from 139.178.68.195 port 38912 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:53.945642 sshd[2484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:53.954143 systemd-logind[2111]: New session 7 of user core. Dec 13 01:53:53.960837 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:53:54.065454 sudo[2488]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:54.066055 sudo[2488]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:54.497676 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:53:54.499925 (dockerd)[2504]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:53:54.852345 dockerd[2504]: time="2024-12-13T01:53:54.851571744Z" level=info msg="Starting up" Dec 13 01:53:55.192308 dockerd[2504]: time="2024-12-13T01:53:55.192048430Z" level=info msg="Loading containers: start." Dec 13 01:53:55.344455 kernel: Initializing XFRM netlink socket Dec 13 01:53:55.376961 (udev-worker)[2527]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:55.466945 systemd-networkd[1694]: docker0: Link UP Dec 13 01:53:55.489722 dockerd[2504]: time="2024-12-13T01:53:55.489654336Z" level=info msg="Loading containers: done." Dec 13 01:53:55.513730 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4117185285-merged.mount: Deactivated successfully. 
Dec 13 01:53:55.516425 dockerd[2504]: time="2024-12-13T01:53:55.515944164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:53:55.516425 dockerd[2504]: time="2024-12-13T01:53:55.516078576Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:53:55.516425 dockerd[2504]: time="2024-12-13T01:53:55.516390120Z" level=info msg="Daemon has completed initialization" Dec 13 01:53:55.575750 dockerd[2504]: time="2024-12-13T01:53:55.575318364Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:53:55.575616 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:53:56.738897 containerd[2155]: time="2024-12-13T01:53:56.738825230Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:53:57.383667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219846728.mount: Deactivated successfully. Dec 13 01:53:58.068181 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:58.076564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:58.396626 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:58.401856 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:58.538462 kubelet[2715]: E1213 01:53:58.537373 2715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:58.549363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:58.549758 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:53:59.155150 containerd[2155]: time="2024-12-13T01:53:59.154923878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.157564 containerd[2155]: time="2024-12-13T01:53:59.157197698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:53:59.158730 containerd[2155]: time="2024-12-13T01:53:59.158627630Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.164741 containerd[2155]: time="2024-12-13T01:53:59.164615642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.167540 containerd[2155]: time="2024-12-13T01:53:59.167045954Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.428152504s" Dec 13 01:53:59.167540 containerd[2155]: time="2024-12-13T01:53:59.167119634Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:53:59.211298 containerd[2155]: time="2024-12-13T01:53:59.211239554Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:54:00.843265 containerd[2155]: time="2024-12-13T01:54:00.842864094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:00.845003 containerd[2155]: time="2024-12-13T01:54:00.844948614Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:54:00.845823 containerd[2155]: time="2024-12-13T01:54:00.845742030Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:00.853328 containerd[2155]: time="2024-12-13T01:54:00.853240590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:00.855809 containerd[2155]: time="2024-12-13T01:54:00.854297742Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 1.642645052s" Dec 13 01:54:00.855809 containerd[2155]: time="2024-12-13T01:54:00.854357814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\""
Dec 13 01:54:00.895752 containerd[2155]: time="2024-12-13T01:54:00.895447398Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:54:01.974917 containerd[2155]: time="2024-12-13T01:54:01.974619956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.976794 containerd[2155]: time="2024-12-13T01:54:01.976730252Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:54:01.977626 containerd[2155]: time="2024-12-13T01:54:01.977220752Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.982879 containerd[2155]: time="2024-12-13T01:54:01.982797500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.986698 containerd[2155]: time="2024-12-13T01:54:01.985110896Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.089604458s" Dec 13 01:54:01.986698 containerd[2155]: time="2024-12-13T01:54:01.985174856Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:54:02.023712 containerd[2155]: time="2024-12-13T01:54:02.023359324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:54:03.269446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1434804863.mount: Deactivated successfully.
Dec 13 01:54:03.774298 containerd[2155]: time="2024-12-13T01:54:03.773787021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.775598 containerd[2155]: time="2024-12-13T01:54:03.775522941Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:54:03.776920 containerd[2155]: time="2024-12-13T01:54:03.776823105Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.780735 containerd[2155]: time="2024-12-13T01:54:03.780644277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.782452 containerd[2155]: time="2024-12-13T01:54:03.782258589Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.758831885s" Dec 13 01:54:03.782452 containerd[2155]: time="2024-12-13T01:54:03.782311677Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:54:03.823608 containerd[2155]: time="2024-12-13T01:54:03.823540533Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:54:04.421665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount596534934.mount: Deactivated successfully. 
Dec 13 01:54:05.477651 containerd[2155]: time="2024-12-13T01:54:05.477439077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.479690 containerd[2155]: time="2024-12-13T01:54:05.479615685Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:54:05.480571 containerd[2155]: time="2024-12-13T01:54:05.480117801Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.486038 containerd[2155]: time="2024-12-13T01:54:05.485939181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:05.488607 containerd[2155]: time="2024-12-13T01:54:05.488388657Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.664765372s" Dec 13 01:54:05.488607 containerd[2155]: time="2024-12-13T01:54:05.488452509Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:54:05.526273 containerd[2155]: time="2024-12-13T01:54:05.526065885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:54:06.043822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount161752486.mount: Deactivated successfully. 
Dec 13 01:54:06.054190 containerd[2155]: time="2024-12-13T01:54:06.054066464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.055718 containerd[2155]: time="2024-12-13T01:54:06.055632788Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:54:06.056736 containerd[2155]: time="2024-12-13T01:54:06.056651612Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.062067 containerd[2155]: time="2024-12-13T01:54:06.061979312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:06.064017 containerd[2155]: time="2024-12-13T01:54:06.063794972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 537.672999ms" Dec 13 01:54:06.064017 containerd[2155]: time="2024-12-13T01:54:06.063858992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:54:06.103032 containerd[2155]: time="2024-12-13T01:54:06.102951128Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:54:06.750851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1967725310.mount: Deactivated successfully. Dec 13 01:54:08.636240 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:54:08.647555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:09.192885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:09.205938 (kubelet)[2872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:09.306258 kubelet[2872]: E1213 01:54:09.305275 2872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:09.313136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:09.313587 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:54:10.096320 containerd[2155]: time="2024-12-13T01:54:10.096240552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:10.098644 containerd[2155]: time="2024-12-13T01:54:10.098568024Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:54:10.101083 containerd[2155]: time="2024-12-13T01:54:10.101006592Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:10.107387 containerd[2155]: time="2024-12-13T01:54:10.107307048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:10.109951 containerd[2155]: time="2024-12-13T01:54:10.109693668Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.006685252s" Dec 13 01:54:10.109951 containerd[2155]: time="2024-12-13T01:54:10.109762440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:54:15.115670 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:54:16.125797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:16.134715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:16.184510 systemd[1]: Reloading requested from client PID 2953 ('systemctl') (unit session-7.scope)... Dec 13 01:54:16.184536 systemd[1]: Reloading... Dec 13 01:54:16.402278 zram_generator::config[2996]: No configuration found. Dec 13 01:54:16.658949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:16.816930 systemd[1]: Reloading finished in 631 ms. Dec 13 01:54:16.899571 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:54:16.899815 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:54:16.900457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:16.910397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:17.197569 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:17.210961 (kubelet)[3068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:17.287732 kubelet[3068]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:17.288309 kubelet[3068]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:54:17.288309 kubelet[3068]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:17.289502 kubelet[3068]: I1213 01:54:17.289412 3068 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:18.495450 kubelet[3068]: I1213 01:54:18.495407 3068 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:18.495450 kubelet[3068]: I1213 01:54:18.495453 3068 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:18.496081 kubelet[3068]: I1213 01:54:18.495785 3068 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:18.529486 kubelet[3068]: E1213 01:54:18.529412 3068 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.156:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.530014 kubelet[3068]: I1213 01:54:18.529827 3068 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:18.548261 kubelet[3068]: I1213 01:54:18.548194 3068 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:54:18.549974 kubelet[3068]: I1213 01:54:18.549168 3068 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:18.549974 kubelet[3068]: I1213 01:54:18.549517 3068 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:18.549974 kubelet[3068]: I1213 01:54:18.549554 3068 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:18.549974 kubelet[3068]: I1213 01:54:18.549574 3068 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 01:54:18.549974 kubelet[3068]: I1213 01:54:18.549754 3068 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:18.554152 kubelet[3068]: I1213 01:54:18.554106 3068 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:18.554823 kubelet[3068]: I1213 01:54:18.554799 3068 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:18.555232 kubelet[3068]: I1213 01:54:18.554975 3068 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:18.555232 kubelet[3068]: I1213 01:54:18.555012 3068 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:18.555968 kubelet[3068]: W1213 01:54:18.555506 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-156&limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.555968 kubelet[3068]: E1213 01:54:18.555587 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-156&limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.559233 kubelet[3068]: W1213 01:54:18.559131 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.156:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.559399 kubelet[3068]: E1213 01:54:18.559245 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.156:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.560240 kubelet[3068]: I1213 01:54:18.559756 3068 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:18.560340 kubelet[3068]: I1213 01:54:18.560296 3068 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:18.562318 kubelet[3068]: W1213 01:54:18.562266 3068 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
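
The kubelet entries above all share the klog header layout: a severity letter (I=info, W=warning, E=error), the date as MMDD, a microsecond wall-clock time, the logging PID, and the emitting source file:line, followed by the message. A minimal, self-contained Go sketch that splits that header apart (illustrative only; the regex and field names are assumptions, not kubelet code, and it expects the journald "kubelet[3068]: " prefix to already be stripped):

package main

import (
	"fmt"
	"regexp"
)

// klogRe matches the header format seen throughout this log, e.g.
// `E1213 01:54:18.594026 3068 controller.go:145] "Failed to ensure lease exists, will retry"`.
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+(\S+?:\d+)\] (.*)$`)

func main() {
	line := `E1213 01:54:18.594026    3068 controller.go:145] "Failed to ensure lease exists, will retry"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	// m[1]=severity, m[2]=MMDD date, m[3]=time, m[4]=pid, m[5]=file:line, m[6]=message
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
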
Dec 13 01:54:18.563431 kubelet[3068]: I1213 01:54:18.563385 3068 server.go:1256] "Started kubelet" Dec 13 01:54:18.567981 kubelet[3068]: I1213 01:54:18.567946 3068 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:18.569744 kubelet[3068]: I1213 01:54:18.568796 3068 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:18.569744 kubelet[3068]: I1213 01:54:18.569325 3068 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:18.569744 kubelet[3068]: I1213 01:54:18.569573 3068 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:18.573300 kubelet[3068]: E1213 01:54:18.573189 3068 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.156:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.156:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-156.181099b4d9b55b36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-156,UID:ip-172-31-22-156,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-156,},FirstTimestamp:2024-12-13 01:54:18.563345206 +0000 UTC m=+1.344940051,LastTimestamp:2024-12-13 01:54:18.563345206 +0000 UTC m=+1.344940051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-156,}" Dec 13 01:54:18.575350 kubelet[3068]: I1213 01:54:18.575296 3068 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:18.584251 kubelet[3068]: E1213 01:54:18.583930 3068 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:18.586901 kubelet[3068]: E1213 01:54:18.586493 3068 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-22-156\" not found" Dec 13 01:54:18.586901 kubelet[3068]: I1213 01:54:18.586563 3068 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:18.586901 kubelet[3068]: I1213 01:54:18.586737 3068 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:18.590631 kubelet[3068]: I1213 01:54:18.590589 3068 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:18.594297 kubelet[3068]: W1213 01:54:18.593823 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.594297 kubelet[3068]: E1213 01:54:18.593897 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.594297 kubelet[3068]: E1213 01:54:18.594026 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="200ms" Dec 13 01:54:18.597618 kubelet[3068]: I1213 01:54:18.597572 3068 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:18.599151 kubelet[3068]: I1213 01:54:18.599106 3068 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:18.602597 kubelet[3068]: I1213 01:54:18.602470 3068 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:18.625483 kubelet[3068]: I1213 01:54:18.625306 3068 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:18.630543 kubelet[3068]: I1213 01:54:18.630372 3068 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:54:18.630543 kubelet[3068]: I1213 01:54:18.630418 3068 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:18.630543 kubelet[3068]: I1213 01:54:18.630448 3068 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:18.630543 kubelet[3068]: E1213 01:54:18.630521 3068 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:18.635262 kubelet[3068]: W1213 01:54:18.634921 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.635262 kubelet[3068]: E1213 01:54:18.635013 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:18.661853 kubelet[3068]: I1213 01:54:18.661800 3068 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:18.661853 kubelet[3068]: I1213 01:54:18.661839 3068 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:18.662040 kubelet[3068]: I1213 01:54:18.661872 3068 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:18.667514 kubelet[3068]: I1213 01:54:18.667467 3068 policy_none.go:49] "None policy: Start" Dec 13 01:54:18.669291 kubelet[3068]: I1213 01:54:18.668907 3068 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:18.669291 kubelet[3068]: I1213 01:54:18.668975 3068 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:18.681257 kubelet[3068]: I1213 01:54:18.680403 3068 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:18.681257 kubelet[3068]: I1213 01:54:18.680784 3068 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:18.686465 kubelet[3068]: E1213 01:54:18.686414 3068 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-156\" not found" Dec 13 01:54:18.689608 kubelet[3068]: I1213 01:54:18.689555 3068 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:18.690199 kubelet[3068]: E1213 01:54:18.690146 3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.156:6443/api/v1/nodes\": dial tcp 172.31.22.156:6443: connect: connection refused" node="ip-172-31-22-156" Dec 13 01:54:18.730759 kubelet[3068]: I1213 01:54:18.730699 3068 topology_manager.go:215] "Topology Admit Handler" podUID="7bfe60aa92dfa4a10834d68e777c3b13" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-156" Dec 13 01:54:18.732803 kubelet[3068]: I1213 01:54:18.732753 3068 topology_manager.go:215] "Topology Admit Handler" podUID="b1a221894a9ef8af33f9eb110e90c7d0" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.735232 kubelet[3068]: I1213 01:54:18.734780 3068 topology_manager.go:215] "Topology Admit Handler" podUID="52a4b1c956bc242df55f794bf4646173" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-156" Dec 13 01:54:18.795325 kubelet[3068]: I1213 
01:54:18.794566 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:18.795325 kubelet[3068]: I1213 01:54:18.794635 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.795325 kubelet[3068]: I1213 01:54:18.794699 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.795325 kubelet[3068]: I1213 01:54:18.794752 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.795325 kubelet[3068]: I1213 01:54:18.794800 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.795662 kubelet[3068]: I1213 01:54:18.794842 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52a4b1c956bc242df55f794bf4646173-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-156\" (UID: \"52a4b1c956bc242df55f794bf4646173\") " pod="kube-system/kube-scheduler-ip-172-31-22-156" Dec 13 01:54:18.795662 kubelet[3068]: I1213 01:54:18.794890 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:18.795662 kubelet[3068]: I1213 01:54:18.794934 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:18.795662 kubelet[3068]: I1213 01:54:18.794979 3068 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-ca-certs\") pod 
\"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:18.795662 kubelet[3068]: E1213 01:54:18.795500 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="400ms" Dec 13 01:54:18.892826 kubelet[3068]: I1213 01:54:18.892784 3068 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:18.893369 kubelet[3068]: E1213 01:54:18.893337 3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.156:6443/api/v1/nodes\": dial tcp 172.31.22.156:6443: connect: connection refused" node="ip-172-31-22-156" Dec 13 01:54:19.040367 containerd[2155]: time="2024-12-13T01:54:19.040239429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-156,Uid:7bfe60aa92dfa4a10834d68e777c3b13,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:19.046633 containerd[2155]: time="2024-12-13T01:54:19.046157337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-156,Uid:b1a221894a9ef8af33f9eb110e90c7d0,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:19.055246 containerd[2155]: time="2024-12-13T01:54:19.055131249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-156,Uid:52a4b1c956bc242df55f794bf4646173,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:19.197152 kubelet[3068]: E1213 01:54:19.197110 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="800ms" Dec 13 01:54:19.295509 kubelet[3068]: I1213 01:54:19.295473 3068 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:19.296343 kubelet[3068]: E1213 01:54:19.296309 3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.156:6443/api/v1/nodes\": dial tcp 172.31.22.156:6443: connect: connection refused" node="ip-172-31-22-156" Dec 13 01:54:19.454733 kubelet[3068]: W1213 01:54:19.454605 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.454733 kubelet[3068]: E1213 01:54:19.454700 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.156:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.587958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110173368.mount: Deactivated successfully. 
Dec 13 01:54:19.603296 containerd[2155]: time="2024-12-13T01:54:19.603193559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:19.605449 containerd[2155]: time="2024-12-13T01:54:19.605379251Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:19.607709 containerd[2155]: time="2024-12-13T01:54:19.607655447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:54:19.609440 containerd[2155]: time="2024-12-13T01:54:19.609377915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:19.611618 containerd[2155]: time="2024-12-13T01:54:19.611565635Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:19.614534 containerd[2155]: time="2024-12-13T01:54:19.614382419Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:19.616169 containerd[2155]: time="2024-12-13T01:54:19.616067291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:19.620484 containerd[2155]: time="2024-12-13T01:54:19.620380116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:19.624830 containerd[2155]: time="2024-12-13T01:54:19.624460104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.112579ms" Dec 13 01:54:19.628855 containerd[2155]: time="2024-12-13T01:54:19.628779744Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 582.490071ms" Dec 13 01:54:19.632891 containerd[2155]: time="2024-12-13T01:54:19.632643516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 577.377843ms" Dec 13 01:54:19.677003 kubelet[3068]: W1213 01:54:19.676880 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.156:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.677003 kubelet[3068]: E1213 
01:54:19.676963 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.156:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.835994 containerd[2155]: time="2024-12-13T01:54:19.835004161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:19.835994 containerd[2155]: time="2024-12-13T01:54:19.835135261Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:19.835994 containerd[2155]: time="2024-12-13T01:54:19.835183333Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.837883 containerd[2155]: time="2024-12-13T01:54:19.837686569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.839663 containerd[2155]: time="2024-12-13T01:54:19.838733137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:19.839663 containerd[2155]: time="2024-12-13T01:54:19.838840489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:19.839663 containerd[2155]: time="2024-12-13T01:54:19.838877029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.839663 containerd[2155]: time="2024-12-13T01:54:19.839068237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.841410 containerd[2155]: time="2024-12-13T01:54:19.841271065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:19.841672 containerd[2155]: time="2024-12-13T01:54:19.841372897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:19.841672 containerd[2155]: time="2024-12-13T01:54:19.841414405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.841672 containerd[2155]: time="2024-12-13T01:54:19.841582621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:19.944229 kubelet[3068]: W1213 01:54:19.943828 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-156&limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.945314 kubelet[3068]: E1213 01:54:19.945125 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.156:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-156&limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:19.967641 containerd[2155]: time="2024-12-13T01:54:19.967271473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-156,Uid:b1a221894a9ef8af33f9eb110e90c7d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"26cb3f20e74d5130e06af4d15acfa34e6fae58079e75df62df1aff4fbceb0b5a\"" Dec 13 01:54:19.978402 containerd[2155]: time="2024-12-13T01:54:19.978161065Z" level=info msg="CreateContainer within sandbox \"26cb3f20e74d5130e06af4d15acfa34e6fae58079e75df62df1aff4fbceb0b5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:54:19.999180 kubelet[3068]: E1213 01:54:19.999050 3068 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": dial tcp 172.31.22.156:6443: connect: connection refused" interval="1.6s" Dec 13 01:54:20.010332 containerd[2155]: time="2024-12-13T01:54:20.010118469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-156,Uid:7bfe60aa92dfa4a10834d68e777c3b13,Namespace:kube-system,Attempt:0,} returns sandbox id \"f45f1a3c8d405093b134eb87d7114548cf8599baa7437ad453decc88a428c673\"" Dec 13 01:54:20.020141 containerd[2155]: time="2024-12-13T01:54:20.020007129Z" level=info msg="CreateContainer within sandbox \"f45f1a3c8d405093b134eb87d7114548cf8599baa7437ad453decc88a428c673\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:54:20.022666 containerd[2155]: time="2024-12-13T01:54:20.022608670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-156,Uid:52a4b1c956bc242df55f794bf4646173,Namespace:kube-system,Attempt:0,} returns sandbox id \"61feb8663c6b8e6ff9c1d811ae4ef894139aa11e03be7f159520d6ce999937ed\"" Dec 13 01:54:20.028616 containerd[2155]: time="2024-12-13T01:54:20.028406566Z" level=info msg="CreateContainer within sandbox \"61feb8663c6b8e6ff9c1d811ae4ef894139aa11e03be7f159520d6ce999937ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:54:20.029239 containerd[2155]: time="2024-12-13T01:54:20.028894282Z" level=info msg="CreateContainer within sandbox \"26cb3f20e74d5130e06af4d15acfa34e6fae58079e75df62df1aff4fbceb0b5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902\"" Dec 13 01:54:20.030323 containerd[2155]: time="2024-12-13T01:54:20.030272242Z" level=info msg="StartContainer for \"fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902\"" Dec 13 01:54:20.062316 containerd[2155]: time="2024-12-13T01:54:20.061191286Z" level=info msg="CreateContainer within sandbox 
\"f45f1a3c8d405093b134eb87d7114548cf8599baa7437ad453decc88a428c673\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0501b8511d056db513b49edfe9b5c2b385e7d6408766e9cc0ca9b41b452038a1\"" Dec 13 01:54:20.065297 containerd[2155]: time="2024-12-13T01:54:20.063742090Z" level=info msg="StartContainer for \"0501b8511d056db513b49edfe9b5c2b385e7d6408766e9cc0ca9b41b452038a1\"" Dec 13 01:54:20.076177 containerd[2155]: time="2024-12-13T01:54:20.075957322Z" level=info msg="CreateContainer within sandbox \"61feb8663c6b8e6ff9c1d811ae4ef894139aa11e03be7f159520d6ce999937ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201\"" Dec 13 01:54:20.077953 containerd[2155]: time="2024-12-13T01:54:20.077542234Z" level=info msg="StartContainer for \"6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201\"" Dec 13 01:54:20.101355 kubelet[3068]: I1213 01:54:20.100666 3068 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:20.101355 kubelet[3068]: E1213 01:54:20.101144 3068 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.156:6443/api/v1/nodes\": dial tcp 172.31.22.156:6443: connect: connection refused" node="ip-172-31-22-156" Dec 13 01:54:20.184876 kubelet[3068]: W1213 01:54:20.184772 3068 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:20.184876 kubelet[3068]: E1213 01:54:20.184871 3068 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.156:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.156:6443: connect: connection refused Dec 13 01:54:20.194258 containerd[2155]: time="2024-12-13T01:54:20.193245946Z" level=info msg="StartContainer for \"fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902\" returns successfully" Dec 13 01:54:20.303169 containerd[2155]: time="2024-12-13T01:54:20.302908655Z" level=info msg="StartContainer for \"6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201\" returns successfully" Dec 13 01:54:20.309552 containerd[2155]: time="2024-12-13T01:54:20.309471143Z" level=info msg="StartContainer for \"0501b8511d056db513b49edfe9b5c2b385e7d6408766e9cc0ca9b41b452038a1\" returns successfully" Dec 13 01:54:21.704419 kubelet[3068]: I1213 01:54:21.704371 3068 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:24.194262 kubelet[3068]: E1213 01:54:24.191608 3068 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-156\" not found" node="ip-172-31-22-156" Dec 13 01:54:24.194262 kubelet[3068]: I1213 01:54:24.192081 3068 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-156" Dec 13 01:54:24.563872 kubelet[3068]: I1213 01:54:24.562663 3068 apiserver.go:52] "Watching apiserver" Dec 13 01:54:24.591684 kubelet[3068]: I1213 01:54:24.591601 3068 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:26.971501 systemd[1]: Reloading requested from client PID 3340 ('systemctl') (unit session-7.scope)... Dec 13 01:54:26.971987 systemd[1]: Reloading... 
Dec 13 01:54:27.281274 zram_generator::config[3383]: No configuration found. Dec 13 01:54:27.543224 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:27.721219 systemd[1]: Reloading finished in 748 ms. Dec 13 01:54:27.783453 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:27.802281 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:54:27.802948 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:27.813810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:28.134101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:28.154914 (kubelet)[3450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:28.275799 kubelet[3450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:28.276949 kubelet[3450]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:28.276949 kubelet[3450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:28.276949 kubelet[3450]: I1213 01:54:28.276651 3450 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:28.290120 kubelet[3450]: I1213 01:54:28.289453 3450 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:28.290120 kubelet[3450]: I1213 01:54:28.289499 3450 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:28.290120 kubelet[3450]: I1213 01:54:28.289857 3450 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:28.293954 kubelet[3450]: I1213 01:54:28.293918 3450 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:54:28.297919 kubelet[3450]: I1213 01:54:28.297872 3450 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:28.309066 kubelet[3450]: I1213 01:54:28.309028 3450 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:54:28.310292 kubelet[3450]: I1213 01:54:28.310259 3450 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:28.310699 kubelet[3450]: I1213 01:54:28.310661 3450 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.310909 3450 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.310938 3450 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.311006 3450 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.311228 3450 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.311261 3450 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.311306 3450 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:28.311404 kubelet[3450]: I1213 01:54:28.311328 3450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:28.317411 kubelet[3450]: I1213 01:54:28.317369 3450 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:28.317869 kubelet[3450]: I1213 01:54:28.317847 3450 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:28.320260 kubelet[3450]: I1213 01:54:28.318628 3450 server.go:1256] "Started kubelet" Dec 13 01:54:28.324474 kubelet[3450]: I1213 01:54:28.324435 3450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:28.355434 kubelet[3450]: I1213 01:54:28.355394 3450 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:28.380644 kubelet[3450]: I1213 01:54:28.380336 3450 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:28.383147 kubelet[3450]: I1213 01:54:28.356177 3450 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Dec 13 01:54:28.386264 kubelet[3450]: I1213 01:54:28.385052 3450 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:28.386264 kubelet[3450]: I1213 01:54:28.364674 3450 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:28.387440 kubelet[3450]: I1213 01:54:28.364713 3450 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:28.391858 kubelet[3450]: I1213 01:54:28.387655 3450 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:28.394401 kubelet[3450]: I1213 01:54:28.393955 3450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:28.408830 kubelet[3450]: E1213 01:54:28.408372 3450 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:28.408830 kubelet[3450]: I1213 01:54:28.408682 3450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:28.416756 kubelet[3450]: I1213 01:54:28.416687 3450 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:28.416756 kubelet[3450]: I1213 01:54:28.416746 3450 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:28.423242 kubelet[3450]: I1213 01:54:28.422612 3450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:54:28.423242 kubelet[3450]: I1213 01:54:28.422653 3450 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:28.423242 kubelet[3450]: I1213 01:54:28.422684 3450 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:28.423242 kubelet[3450]: E1213 01:54:28.422779 3450 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:28.477504 kubelet[3450]: I1213 01:54:28.477472 3450 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-156" Dec 13 01:54:28.501664 kubelet[3450]: I1213 01:54:28.501624 3450 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-156" Dec 13 01:54:28.502236 kubelet[3450]: I1213 01:54:28.502113 3450 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-156" Dec 13 01:54:28.524235 kubelet[3450]: E1213 01:54:28.523029 3450 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:54:28.628400 kubelet[3450]: I1213 01:54:28.628192 3450 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:28.628400 kubelet[3450]: I1213 01:54:28.628311 3450 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:28.628400 kubelet[3450]: I1213 01:54:28.628345 3450 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:28.629445 kubelet[3450]: I1213 01:54:28.629399 3450 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:54:28.629525 kubelet[3450]: I1213 01:54:28.629459 3450 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:54:28.629525 kubelet[3450]: I1213 01:54:28.629478 3450 policy_none.go:49] "None policy: Start" Dec 13 01:54:28.631815 kubelet[3450]: I1213 01:54:28.631325 3450 memory_manager.go:170] 
"Starting memorymanager" policy="None" Dec 13 01:54:28.631815 kubelet[3450]: I1213 01:54:28.631378 3450 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:28.631815 kubelet[3450]: I1213 01:54:28.631668 3450 state_mem.go:75] "Updated machine memory state" Dec 13 01:54:28.638198 kubelet[3450]: I1213 01:54:28.635970 3450 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:28.639854 kubelet[3450]: I1213 01:54:28.639809 3450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:28.724238 kubelet[3450]: I1213 01:54:28.724153 3450 topology_manager.go:215] "Topology Admit Handler" podUID="7bfe60aa92dfa4a10834d68e777c3b13" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-156" Dec 13 01:54:28.724379 kubelet[3450]: I1213 01:54:28.724319 3450 topology_manager.go:215] "Topology Admit Handler" podUID="b1a221894a9ef8af33f9eb110e90c7d0" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.725846 kubelet[3450]: I1213 01:54:28.724478 3450 topology_manager.go:215] "Topology Admit Handler" podUID="52a4b1c956bc242df55f794bf4646173" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-156" Dec 13 01:54:28.757928 update_engine[2120]: I20241213 01:54:28.757390 2120 update_attempter.cc:509] Updating boot flags... Dec 13 01:54:28.792533 kubelet[3450]: I1213 01:54:28.792428 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.793437 kubelet[3450]: I1213 01:54:28.792540 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.793437 kubelet[3450]: I1213 01:54:28.792594 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:28.793437 kubelet[3450]: I1213 01:54:28.792639 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:28.793437 kubelet[3450]: I1213 01:54:28.792685 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.793437 kubelet[3450]: I1213 
01:54:28.792797 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.793693 kubelet[3450]: I1213 01:54:28.792852 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1a221894a9ef8af33f9eb110e90c7d0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-156\" (UID: \"b1a221894a9ef8af33f9eb110e90c7d0\") " pod="kube-system/kube-controller-manager-ip-172-31-22-156" Dec 13 01:54:28.793693 kubelet[3450]: I1213 01:54:28.792896 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52a4b1c956bc242df55f794bf4646173-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-156\" (UID: \"52a4b1c956bc242df55f794bf4646173\") " pod="kube-system/kube-scheduler-ip-172-31-22-156" Dec 13 01:54:28.793693 kubelet[3450]: I1213 01:54:28.792938 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7bfe60aa92dfa4a10834d68e777c3b13-ca-certs\") pod \"kube-apiserver-ip-172-31-22-156\" (UID: \"7bfe60aa92dfa4a10834d68e777c3b13\") " pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:28.881715 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3501) Dec 13 01:54:29.315365 kubelet[3450]: I1213 01:54:29.312679 3450 apiserver.go:52] "Watching apiserver" Dec 13 01:54:29.360013 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3500) Dec 13 01:54:29.388388 kubelet[3450]: I1213 01:54:29.388320 3450 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:29.485236 kubelet[3450]: I1213 01:54:29.484459 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-156" podStartSLOduration=1.484370324 podStartE2EDuration="1.484370324s" podCreationTimestamp="2024-12-13 01:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:29.483473648 +0000 UTC m=+1.317322471" watchObservedRunningTime="2024-12-13 01:54:29.484370324 +0000 UTC m=+1.318219123" Dec 13 01:54:29.537691 kubelet[3450]: I1213 01:54:29.531551 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-156" podStartSLOduration=1.5314565610000002 podStartE2EDuration="1.531456561s" podCreationTimestamp="2024-12-13 01:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:29.504402153 +0000 UTC m=+1.338250952" watchObservedRunningTime="2024-12-13 01:54:29.531456561 +0000 UTC m=+1.365305360" Dec 13 01:54:29.564638 kubelet[3450]: E1213 01:54:29.562579 3450 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-156\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-156" Dec 13 01:54:29.581562 kubelet[3450]: I1213 01:54:29.581095 3450 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-156" podStartSLOduration=1.581035473 podStartE2EDuration="1.581035473s" podCreationTimestamp="2024-12-13 01:54:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:29.547731573 +0000 UTC m=+1.381580360" watchObservedRunningTime="2024-12-13 01:54:29.581035473 +0000 UTC m=+1.414884260" Dec 13 01:54:33.662349 sudo[2488]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:33.685605 sshd[2484]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:33.691697 systemd[1]: sshd@6-172.31.22.156:22-139.178.68.195:38912.service: Deactivated successfully. Dec 13 01:54:33.700175 systemd-logind[2111]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:54:33.701032 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:54:33.704946 systemd-logind[2111]: Removed session 7. Dec 13 01:54:40.657708 kubelet[3450]: I1213 01:54:40.657463 3450 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:54:40.660727 kubelet[3450]: I1213 01:54:40.660047 3450 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:54:40.660818 containerd[2155]: time="2024-12-13T01:54:40.658565564Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:54:40.732501 kubelet[3450]: I1213 01:54:40.728898 3450 topology_manager.go:215] "Topology Admit Handler" podUID="da46a6d2-ce1d-4852-a42f-fd0237c94b44" podNamespace="kube-system" podName="kube-proxy-k7mzb" Dec 13 01:54:40.776649 kubelet[3450]: I1213 01:54:40.776610 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da46a6d2-ce1d-4852-a42f-fd0237c94b44-lib-modules\") pod \"kube-proxy-k7mzb\" (UID: \"da46a6d2-ce1d-4852-a42f-fd0237c94b44\") " pod="kube-system/kube-proxy-k7mzb" Dec 13 01:54:40.776917 kubelet[3450]: I1213 01:54:40.776896 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da46a6d2-ce1d-4852-a42f-fd0237c94b44-xtables-lock\") pod \"kube-proxy-k7mzb\" (UID: \"da46a6d2-ce1d-4852-a42f-fd0237c94b44\") " pod="kube-system/kube-proxy-k7mzb" Dec 13 01:54:40.777175 kubelet[3450]: I1213 01:54:40.777092 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da46a6d2-ce1d-4852-a42f-fd0237c94b44-kube-proxy\") pod \"kube-proxy-k7mzb\" (UID: \"da46a6d2-ce1d-4852-a42f-fd0237c94b44\") " pod="kube-system/kube-proxy-k7mzb" Dec 13 01:54:40.778111 kubelet[3450]: I1213 01:54:40.777547 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2j4w\" (UniqueName: \"kubernetes.io/projected/da46a6d2-ce1d-4852-a42f-fd0237c94b44-kube-api-access-w2j4w\") pod \"kube-proxy-k7mzb\" (UID: \"da46a6d2-ce1d-4852-a42f-fd0237c94b44\") " pod="kube-system/kube-proxy-k7mzb" Dec 13 01:54:41.042748 containerd[2155]: time="2024-12-13T01:54:41.042576570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7mzb,Uid:da46a6d2-ce1d-4852-a42f-fd0237c94b44,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:41.072727 
kubelet[3450]: I1213 01:54:41.072654 3450 topology_manager.go:215] "Topology Admit Handler" podUID="72c350b8-3244-418d-981f-dcd0a7f4f1c9" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-4jf7s"
Dec 13 01:54:41.130790 containerd[2155]: time="2024-12-13T01:54:41.130485138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:41.130790 containerd[2155]: time="2024-12-13T01:54:41.130676370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:41.130790 containerd[2155]: time="2024-12-13T01:54:41.130715586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:41.131270 containerd[2155]: time="2024-12-13T01:54:41.131008362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:41.182329 kubelet[3450]: I1213 01:54:41.182002 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72c350b8-3244-418d-981f-dcd0a7f4f1c9-var-lib-calico\") pod \"tigera-operator-c7ccbd65-4jf7s\" (UID: \"72c350b8-3244-418d-981f-dcd0a7f4f1c9\") " pod="tigera-operator/tigera-operator-c7ccbd65-4jf7s"
Dec 13 01:54:41.182329 kubelet[3450]: I1213 01:54:41.182175 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6dw\" (UniqueName: \"kubernetes.io/projected/72c350b8-3244-418d-981f-dcd0a7f4f1c9-kube-api-access-6q6dw\") pod \"tigera-operator-c7ccbd65-4jf7s\" (UID: \"72c350b8-3244-418d-981f-dcd0a7f4f1c9\") " pod="tigera-operator/tigera-operator-c7ccbd65-4jf7s"
Dec 13 01:54:41.207608 containerd[2155]: time="2024-12-13T01:54:41.207473467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k7mzb,Uid:da46a6d2-ce1d-4852-a42f-fd0237c94b44,Namespace:kube-system,Attempt:0,} returns sandbox id \"6045ec419be7e26eb37dc9ab956e7fb51312f5414a0b2b7df7c4ee741a1985e0\""
Dec 13 01:54:41.216668 containerd[2155]: time="2024-12-13T01:54:41.216186439Z" level=info msg="CreateContainer within sandbox \"6045ec419be7e26eb37dc9ab956e7fb51312f5414a0b2b7df7c4ee741a1985e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:54:41.257732 containerd[2155]: time="2024-12-13T01:54:41.257523211Z" level=info msg="CreateContainer within sandbox \"6045ec419be7e26eb37dc9ab956e7fb51312f5414a0b2b7df7c4ee741a1985e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b068017841d6988795e0c3f1981be0dd973cb1dd6989fb3b5f5ea09d9cb43a5d\""
Dec 13 01:54:41.259790 containerd[2155]: time="2024-12-13T01:54:41.259293187Z" level=info msg="StartContainer for \"b068017841d6988795e0c3f1981be0dd973cb1dd6989fb3b5f5ea09d9cb43a5d\""
Dec 13 01:54:41.382077 containerd[2155]: time="2024-12-13T01:54:41.381825164Z" level=info msg="StartContainer for \"b068017841d6988795e0c3f1981be0dd973cb1dd6989fb3b5f5ea09d9cb43a5d\" returns successfully"
Dec 13 01:54:41.396783 containerd[2155]: time="2024-12-13T01:54:41.396715136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4jf7s,Uid:72c350b8-3244-418d-981f-dcd0a7f4f1c9,Namespace:tigera-operator,Attempt:0,}"
Dec 13 01:54:41.463228 containerd[2155]: time="2024-12-13T01:54:41.460800392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:41.464318 containerd[2155]: time="2024-12-13T01:54:41.463744748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:41.464318 containerd[2155]: time="2024-12-13T01:54:41.463877720Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:41.464318 containerd[2155]: time="2024-12-13T01:54:41.464069120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:41.571998 containerd[2155]: time="2024-12-13T01:54:41.571889697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-4jf7s,Uid:72c350b8-3244-418d-981f-dcd0a7f4f1c9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e330fc0a64823e8237a5ea81f0c88752454cf693bd5738ceb654eb0e8a9eb056\""
Dec 13 01:54:41.586837 containerd[2155]: time="2024-12-13T01:54:41.586599321Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Dec 13 01:54:43.523565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274463111.mount: Deactivated successfully.
Dec 13 01:54:44.500428 containerd[2155]: time="2024-12-13T01:54:44.500346251Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:44.502974 containerd[2155]: time="2024-12-13T01:54:44.502851467Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126016"
Dec 13 01:54:44.503711 containerd[2155]: time="2024-12-13T01:54:44.503312627Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:44.507637 containerd[2155]: time="2024-12-13T01:54:44.507523511Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:44.509622 containerd[2155]: time="2024-12-13T01:54:44.509423795Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.921706386s"
Dec 13 01:54:44.509622 containerd[2155]: time="2024-12-13T01:54:44.509480003Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Dec 13 01:54:44.513987 containerd[2155]: time="2024-12-13T01:54:44.513759335Z" level=info msg="CreateContainer within sandbox \"e330fc0a64823e8237a5ea81f0c88752454cf693bd5738ceb654eb0e8a9eb056\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Dec 13 01:54:44.540449 containerd[2155]: time="2024-12-13T01:54:44.540388871Z" level=info msg="CreateContainer within sandbox \"e330fc0a64823e8237a5ea81f0c88752454cf693bd5738ceb654eb0e8a9eb056\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e\""
Dec 13 01:54:44.542910 containerd[2155]: time="2024-12-13T01:54:44.541837655Z" level=info msg="StartContainer for \"ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e\""
Dec 13 01:54:44.611503 systemd[1]: run-containerd-runc-k8s.io-ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e-runc.2KNKAt.mount: Deactivated successfully.
Dec 13 01:54:44.658005 containerd[2155]: time="2024-12-13T01:54:44.657938592Z" level=info msg="StartContainer for \"ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e\" returns successfully"
Dec 13 01:54:45.640011 kubelet[3450]: I1213 01:54:45.639529 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-k7mzb" podStartSLOduration=5.639466921 podStartE2EDuration="5.639466921s" podCreationTimestamp="2024-12-13 01:54:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:41.618804993 +0000 UTC m=+13.452653792" watchObservedRunningTime="2024-12-13 01:54:45.639466921 +0000 UTC m=+17.473315708"
Dec 13 01:54:45.640011 kubelet[3450]: I1213 01:54:45.639707 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-4jf7s" podStartSLOduration=1.710475599 podStartE2EDuration="4.639671617s" podCreationTimestamp="2024-12-13 01:54:41 +0000 UTC" firstStartedPulling="2024-12-13 01:54:41.581084265 +0000 UTC m=+13.414933052" lastFinishedPulling="2024-12-13 01:54:44.510280283 +0000 UTC m=+16.344129070" observedRunningTime="2024-12-13 01:54:45.639066565 +0000 UTC m=+17.472915376" watchObservedRunningTime="2024-12-13 01:54:45.639671617 +0000 UTC m=+17.473520692"
Dec 13 01:54:51.319375 kubelet[3450]: I1213 01:54:51.319315 3450 topology_manager.go:215] "Topology Admit Handler" podUID="a1bdff0e-4f68-4c80-a43e-78a436c145ed" podNamespace="calico-system" podName="calico-typha-66dc655765-gf2xj"
Dec 13 01:54:51.345886 kubelet[3450]: I1213 01:54:51.345724 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a1bdff0e-4f68-4c80-a43e-78a436c145ed-tigera-ca-bundle\") pod \"calico-typha-66dc655765-gf2xj\" (UID: \"a1bdff0e-4f68-4c80-a43e-78a436c145ed\") " pod="calico-system/calico-typha-66dc655765-gf2xj"
Dec 13 01:54:51.345886 kubelet[3450]: I1213 01:54:51.345832 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a1bdff0e-4f68-4c80-a43e-78a436c145ed-typha-certs\") pod \"calico-typha-66dc655765-gf2xj\" (UID: \"a1bdff0e-4f68-4c80-a43e-78a436c145ed\") " pod="calico-system/calico-typha-66dc655765-gf2xj"
Dec 13 01:54:51.345886 kubelet[3450]: I1213 01:54:51.345888 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lhdd\" (UniqueName: \"kubernetes.io/projected/a1bdff0e-4f68-4c80-a43e-78a436c145ed-kube-api-access-6lhdd\") pod \"calico-typha-66dc655765-gf2xj\" (UID: \"a1bdff0e-4f68-4c80-a43e-78a436c145ed\") " pod="calico-system/calico-typha-66dc655765-gf2xj"
Dec 13 01:54:51.634134 containerd[2155]: time="2024-12-13T01:54:51.633967747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dc655765-gf2xj,Uid:a1bdff0e-4f68-4c80-a43e-78a436c145ed,Namespace:calico-system,Attempt:0,}"
Dec 13 01:54:51.669317 kubelet[3450]: I1213 01:54:51.666869 3450 topology_manager.go:215] "Topology Admit Handler" podUID="86098b00-e131-4df6-a746-19a4682542bf" podNamespace="calico-system" podName="calico-node-w7kbj"
Dec 13 01:54:51.745322 containerd[2155]: time="2024-12-13T01:54:51.742889155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:54:51.745322 containerd[2155]: time="2024-12-13T01:54:51.743027107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:54:51.745322 containerd[2155]: time="2024-12-13T01:54:51.743070715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:51.745322 containerd[2155]: time="2024-12-13T01:54:51.743300155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:54:51.751174 kubelet[3450]: I1213 01:54:51.750819 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86098b00-e131-4df6-a746-19a4682542bf-node-certs\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.751174 kubelet[3450]: I1213 01:54:51.750945 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-xtables-lock\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.751408 kubelet[3450]: I1213 01:54:51.751261 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-policysync\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.751408 kubelet[3450]: I1213 01:54:51.751377 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-cni-net-dir\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.753843 kubelet[3450]: I1213 01:54:51.751559 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-lib-modules\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.753843 kubelet[3450]: I1213 01:54:51.752007 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-var-run-calico\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.753843 kubelet[3450]: I1213 01:54:51.752300 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-var-lib-calico\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.753843 kubelet[3450]: I1213 01:54:51.753039 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-cni-log-dir\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.753843 kubelet[3450]: I1213 01:54:51.753295 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbp67\" (UniqueName: \"kubernetes.io/projected/86098b00-e131-4df6-a746-19a4682542bf-kube-api-access-sbp67\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.756291 kubelet[3450]: I1213 01:54:51.753570 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86098b00-e131-4df6-a746-19a4682542bf-tigera-ca-bundle\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.756291 kubelet[3450]: I1213 01:54:51.753749 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-cni-bin-dir\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.756291 kubelet[3450]: I1213 01:54:51.754109 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86098b00-e131-4df6-a746-19a4682542bf-flexvol-driver-host\") pod \"calico-node-w7kbj\" (UID: \"86098b00-e131-4df6-a746-19a4682542bf\") " pod="calico-system/calico-node-w7kbj"
Dec 13 01:54:51.832258 kubelet[3450]: I1213 01:54:51.832180 3450 topology_manager.go:215] "Topology Admit Handler" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30" podNamespace="calico-system" podName="csi-node-driver-xsw8v"
Dec 13 01:54:51.833238 kubelet[3450]: E1213 01:54:51.832606 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30"
Dec 13 01:54:51.856243 kubelet[3450]: I1213 01:54:51.855894 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtfm5\" (UniqueName: \"kubernetes.io/projected/283caadd-1af1-4d62-bdf3-ec7850179f30-kube-api-access-mtfm5\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v"
Dec 13 01:54:51.856243 kubelet[3450]: I1213 01:54:51.856077 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/283caadd-1af1-4d62-bdf3-ec7850179f30-varrun\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v"
Dec 13 01:54:51.856243 kubelet[3450]: I1213 01:54:51.856125 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283caadd-1af1-4d62-bdf3-ec7850179f30-kubelet-dir\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/283caadd-1af1-4d62-bdf3-ec7850179f30-kubelet-dir\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v" Dec 13 01:54:51.858251 kubelet[3450]: I1213 01:54:51.856197 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/283caadd-1af1-4d62-bdf3-ec7850179f30-registration-dir\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v" Dec 13 01:54:51.863248 kubelet[3450]: I1213 01:54:51.862554 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/283caadd-1af1-4d62-bdf3-ec7850179f30-socket-dir\") pod \"csi-node-driver-xsw8v\" (UID: \"283caadd-1af1-4d62-bdf3-ec7850179f30\") " pod="calico-system/csi-node-driver-xsw8v" Dec 13 01:54:51.875067 kubelet[3450]: E1213 01:54:51.873710 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.876235 kubelet[3450]: W1213 01:54:51.875285 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.876235 kubelet[3450]: E1213 01:54:51.875665 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.877073 kubelet[3450]: E1213 01:54:51.877028 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.878148 kubelet[3450]: W1213 01:54:51.877072 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.878148 kubelet[3450]: E1213 01:54:51.878127 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.879526 kubelet[3450]: E1213 01:54:51.878591 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.879526 kubelet[3450]: W1213 01:54:51.878650 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.879526 kubelet[3450]: E1213 01:54:51.878686 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.880824 kubelet[3450]: E1213 01:54:51.879838 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.880824 kubelet[3450]: W1213 01:54:51.879996 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.882056 kubelet[3450]: E1213 01:54:51.881172 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.882056 kubelet[3450]: W1213 01:54:51.881235 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.882056 kubelet[3450]: E1213 01:54:51.881277 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.882056 kubelet[3450]: E1213 01:54:51.881791 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.884931 kubelet[3450]: E1213 01:54:51.883681 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.884931 kubelet[3450]: W1213 01:54:51.883753 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.884931 kubelet[3450]: E1213 01:54:51.883902 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.887426 kubelet[3450]: E1213 01:54:51.885360 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.887426 kubelet[3450]: W1213 01:54:51.885388 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.887426 kubelet[3450]: E1213 01:54:51.887065 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.887426 kubelet[3450]: E1213 01:54:51.886835 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.887426 kubelet[3450]: W1213 01:54:51.887358 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.888841 kubelet[3450]: E1213 01:54:51.887976 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.892238 kubelet[3450]: E1213 01:54:51.889782 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.892238 kubelet[3450]: W1213 01:54:51.889841 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.892238 kubelet[3450]: E1213 01:54:51.891036 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.892238 kubelet[3450]: E1213 01:54:51.891321 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.892238 kubelet[3450]: W1213 01:54:51.891339 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.892238 kubelet[3450]: E1213 01:54:51.891614 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.893814 kubelet[3450]: E1213 01:54:51.893742 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.893814 kubelet[3450]: W1213 01:54:51.893782 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.896120 kubelet[3450]: E1213 01:54:51.895610 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.896326 kubelet[3450]: W1213 01:54:51.896128 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.897966 kubelet[3450]: E1213 01:54:51.896894 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.897966 kubelet[3450]: E1213 01:54:51.896989 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.897966 kubelet[3450]: E1213 01:54:51.897196 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.897966 kubelet[3450]: W1213 01:54:51.897263 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.897966 kubelet[3450]: E1213 01:54:51.897296 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.899051 kubelet[3450]: E1213 01:54:51.898957 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.899051 kubelet[3450]: W1213 01:54:51.899003 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.899051 kubelet[3450]: E1213 01:54:51.899041 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.903268 kubelet[3450]: E1213 01:54:51.903180 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.903268 kubelet[3450]: W1213 01:54:51.903238 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.903268 kubelet[3450]: E1213 01:54:51.903278 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.905439 kubelet[3450]: E1213 01:54:51.905388 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.905439 kubelet[3450]: W1213 01:54:51.905428 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.905649 kubelet[3450]: E1213 01:54:51.905465 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.909295 kubelet[3450]: E1213 01:54:51.908138 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.909295 kubelet[3450]: W1213 01:54:51.908230 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.909295 kubelet[3450]: E1213 01:54:51.908278 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.917625 kubelet[3450]: E1213 01:54:51.916684 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.917625 kubelet[3450]: W1213 01:54:51.916725 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.917625 kubelet[3450]: E1213 01:54:51.916763 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.991736 kubelet[3450]: E1213 01:54:51.988334 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.991736 kubelet[3450]: W1213 01:54:51.988411 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.991736 kubelet[3450]: E1213 01:54:51.988448 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.991736 kubelet[3450]: E1213 01:54:51.988961 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.991736 kubelet[3450]: W1213 01:54:51.988984 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.991736 kubelet[3450]: E1213 01:54:51.989076 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.999417 containerd[2155]: time="2024-12-13T01:54:51.999336284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7kbj,Uid:86098b00-e131-4df6-a746-19a4682542bf,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:52.003940 kubelet[3450]: E1213 01:54:51.993711 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.003940 kubelet[3450]: W1213 01:54:52.003377 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.003940 kubelet[3450]: E1213 01:54:52.003730 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.009007 kubelet[3450]: E1213 01:54:52.008940 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.009144 kubelet[3450]: W1213 01:54:52.009104 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.009199 kubelet[3450]: E1213 01:54:52.009149 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.013905 kubelet[3450]: E1213 01:54:52.013284 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.016467 kubelet[3450]: W1213 01:54:52.015876 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.019551 kubelet[3450]: E1213 01:54:52.019280 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.020292 kubelet[3450]: E1213 01:54:52.020236 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.020292 kubelet[3450]: W1213 01:54:52.020270 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.024019 kubelet[3450]: E1213 01:54:52.021322 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.024019 kubelet[3450]: W1213 01:54:52.021361 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.024019 kubelet[3450]: E1213 01:54:52.021918 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.024019 kubelet[3450]: W1213 01:54:52.022063 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.024019 kubelet[3450]: E1213 01:54:52.023113 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.024019 kubelet[3450]: W1213 01:54:52.023163 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.024019 kubelet[3450]: E1213 01:54:52.023765 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.024019 kubelet[3450]: W1213 01:54:52.023814 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.024019 kubelet[3450]: E1213 01:54:52.023847 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.028237 kubelet[3450]: E1213 01:54:52.028159 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.028404 kubelet[3450]: W1213 01:54:52.028257 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.028404 kubelet[3450]: E1213 01:54:52.028299 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.037902 kubelet[3450]: E1213 01:54:52.037824 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.038081 kubelet[3450]: E1213 01:54:52.037917 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.038175 kubelet[3450]: E1213 01:54:52.038143 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.038290 kubelet[3450]: E1213 01:54:52.038256 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.038401 kubelet[3450]: E1213 01:54:52.038373 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.038401 kubelet[3450]: W1213 01:54:52.038390 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.038505 kubelet[3450]: E1213 01:54:52.038420 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.052128 kubelet[3450]: E1213 01:54:52.049526 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.052128 kubelet[3450]: W1213 01:54:52.049570 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.052128 kubelet[3450]: E1213 01:54:52.049613 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.052128 kubelet[3450]: E1213 01:54:52.051498 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.052128 kubelet[3450]: W1213 01:54:52.051526 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.052128 kubelet[3450]: E1213 01:54:52.051561 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.062379 kubelet[3450]: E1213 01:54:52.061772 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.062379 kubelet[3450]: W1213 01:54:52.061830 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.064064 kubelet[3450]: E1213 01:54:52.063836 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.064064 kubelet[3450]: W1213 01:54:52.063879 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.067485 kubelet[3450]: E1213 01:54:52.066350 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.067485 kubelet[3450]: E1213 01:54:52.066438 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.067485 kubelet[3450]: E1213 01:54:52.067015 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.067485 kubelet[3450]: W1213 01:54:52.067043 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.069790 kubelet[3450]: E1213 01:54:52.069119 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.070258 kubelet[3450]: E1213 01:54:52.070170 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.070258 kubelet[3450]: W1213 01:54:52.070252 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.072884 kubelet[3450]: E1213 01:54:52.072825 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.072884 kubelet[3450]: W1213 01:54:52.072865 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.075463 kubelet[3450]: E1213 01:54:52.075406 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.075626 kubelet[3450]: E1213 01:54:52.075487 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.075940 kubelet[3450]: E1213 01:54:52.075901 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.075940 kubelet[3450]: W1213 01:54:52.075935 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.078883 kubelet[3450]: E1213 01:54:52.078358 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.079015 kubelet[3450]: W1213 01:54:52.078887 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.079350 kubelet[3450]: E1213 01:54:52.078536 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.079428 kubelet[3450]: E1213 01:54:52.079385 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.080517 kubelet[3450]: E1213 01:54:52.080467 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.080517 kubelet[3450]: W1213 01:54:52.080506 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.080782 kubelet[3450]: E1213 01:54:52.080746 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.083465 kubelet[3450]: E1213 01:54:52.083416 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.083465 kubelet[3450]: W1213 01:54:52.083456 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.086944 kubelet[3450]: E1213 01:54:52.085494 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.086944 kubelet[3450]: E1213 01:54:52.086034 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.086944 kubelet[3450]: W1213 01:54:52.086060 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.089151 kubelet[3450]: E1213 01:54:52.087443 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.090842 kubelet[3450]: E1213 01:54:52.090764 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.091146 kubelet[3450]: W1213 01:54:52.090795 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.091684 kubelet[3450]: E1213 01:54:52.091656 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.093371 kubelet[3450]: E1213 01:54:52.093263 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.093371 kubelet[3450]: W1213 01:54:52.093299 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.093589 kubelet[3450]: E1213 01:54:52.093446 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.103066 containerd[2155]: time="2024-12-13T01:54:52.102889265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:52.103494 containerd[2155]: time="2024-12-13T01:54:52.103017245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:52.103494 containerd[2155]: time="2024-12-13T01:54:52.103071149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:52.103494 containerd[2155]: time="2024-12-13T01:54:52.103267217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
Dec 13 01:54:52.118303 containerd[2155]: time="2024-12-13T01:54:52.118175849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66dc655765-gf2xj,Uid:a1bdff0e-4f68-4c80-a43e-78a436c145ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"f8d88e73ddd3cf7e5d4f26d2661b6245b72ce6aeb25ddcaeec658153424c1edb\""
Dec 13 01:54:52.127580 containerd[2155]: time="2024-12-13T01:54:52.127475393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Dec 13 01:54:52.209464 containerd[2155]: time="2024-12-13T01:54:52.209393621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w7kbj,Uid:86098b00-e131-4df6-a746-19a4682542bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\""
Dec 13 01:54:53.423989 kubelet[3450]: E1213 01:54:53.423451 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30"
Dec 13 01:54:53.502331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3056043660.mount: Deactivated successfully.
Dec 13 01:54:54.316523 containerd[2155]: time="2024-12-13T01:54:54.316467248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:54.318763 containerd[2155]: time="2024-12-13T01:54:54.318716696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308"
Dec 13 01:54:54.320833 containerd[2155]: time="2024-12-13T01:54:54.320787068Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:54.325683 containerd[2155]: time="2024-12-13T01:54:54.325600760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:54:54.327282 containerd[2155]: time="2024-12-13T01:54:54.327199088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.199651599s"
Dec 13 01:54:54.327524 containerd[2155]: time="2024-12-13T01:54:54.327281732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Dec 13 01:54:54.328140 containerd[2155]: time="2024-12-13T01:54:54.328056692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 01:54:54.348940 containerd[2155]: time="2024-12-13T01:54:54.348168128Z" level=info msg="CreateContainer within sandbox \"f8d88e73ddd3cf7e5d4f26d2661b6245b72ce6aeb25ddcaeec658153424c1edb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Dec 13 01:54:54.380549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366528025.mount: Deactivated successfully.
Dec 13 01:54:54.382683 containerd[2155]: time="2024-12-13T01:54:54.382502204Z" level=info msg="CreateContainer within sandbox \"f8d88e73ddd3cf7e5d4f26d2661b6245b72ce6aeb25ddcaeec658153424c1edb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e9469a934949d7ba906b575f761f276928ef7fdbdc89b2425534b2398031eee4\""
Dec 13 01:54:54.384465 containerd[2155]: time="2024-12-13T01:54:54.384393884Z" level=info msg="StartContainer for \"e9469a934949d7ba906b575f761f276928ef7fdbdc89b2425534b2398031eee4\""
Dec 13 01:54:54.519244 containerd[2155]: time="2024-12-13T01:54:54.518837877Z" level=info msg="StartContainer for \"e9469a934949d7ba906b575f761f276928ef7fdbdc89b2425534b2398031eee4\" returns successfully"
Dec 13 01:54:54.756079 kubelet[3450]: E1213 01:54:54.756022 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:54:54.760587 kubelet[3450]: W1213 01:54:54.756098 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:54:54.760587 kubelet[3450]: E1213 01:54:54.756141 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 13 01:54:54.782543 kubelet[3450]: E1213 01:54:54.776735 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:54.782543 kubelet[3450]: W1213 01:54:54.776758 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:54.782543 kubelet[3450]: E1213 01:54:54.776789 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:54.782543 kubelet[3450]: E1213 01:54:54.777153 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:54.782543 kubelet[3450]: W1213 01:54:54.777174 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:54.782543 kubelet[3450]: E1213 01:54:54.777244 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:54.782543 kubelet[3450]: E1213 01:54:54.777545 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:54.782993 kubelet[3450]: W1213 01:54:54.777560 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:54.782993 kubelet[3450]: E1213 01:54:54.777585 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:54.782993 kubelet[3450]: E1213 01:54:54.777959 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:54.782993 kubelet[3450]: W1213 01:54:54.777978 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:54.782993 kubelet[3450]: E1213 01:54:54.778011 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:54.782993 kubelet[3450]: E1213 01:54:54.778740 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:54.782993 kubelet[3450]: W1213 01:54:54.778765 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:54.782993 kubelet[3450]: E1213 01:54:54.778795 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
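The kubelet errors above (which recur many times in this window) come from FlexVolume plugin probing: kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each vendor~driver binary with the single argument init, and JSON-unmarshals whatever lands on stdout. Here the nodeagent~uds/uds executable is missing, so stdout is empty and the unmarshal in driver-call.go fails with "unexpected end of JSON input"; the probe then repeats on every filesystem event in the plugin directory. For reference, a minimal driver that satisfies the init handshake could look like the Python sketch below; this is an illustration of the FlexVolume call convention, not the real nodeagent~uds driver, and the capability flags are assumptions.

    #!/usr/bin/env python3
    # Illustrative FlexVolume driver sketch (NOT the real nodeagent~uds
    # driver). kubelet invokes the executable as
    #   <driver> init | attach | mount | unmount ...
    # and parses a JSON object from stdout; empty stdout is exactly what
    # produces "unexpected end of JSON input" in driver-call.go above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # Advertise that this driver does not implement attach/detach,
            # so kubelet skips those calls (capability name per the usual
            # FlexVolume convention; treat it as an assumption here).
            print(json.dumps({"status": "Success",
                              "capabilities": {"attach": False}}))
            return 0
        # Anything unimplemented must still answer with valid JSON.
        print(json.dumps({"status": "Not supported",
                          "message": f"operation {op!r} not implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())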
Error: unexpected end of JSON input" Dec 13 01:54:55.424931 kubelet[3450]: E1213 01:54:55.423915 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30" Dec 13 01:54:55.597420 containerd[2155]: time="2024-12-13T01:54:55.597343990Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:55.599319 containerd[2155]: time="2024-12-13T01:54:55.599248582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:54:55.601798 containerd[2155]: time="2024-12-13T01:54:55.601711390Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:55.606408 containerd[2155]: time="2024-12-13T01:54:55.606319486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:55.607991 containerd[2155]: time="2024-12-13T01:54:55.607753282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.279633374s" Dec 13 01:54:55.607991 containerd[2155]: time="2024-12-13T01:54:55.607818286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:54:55.612245 containerd[2155]: time="2024-12-13T01:54:55.612136246Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:54:55.640744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327315518.mount: Deactivated successfully. 
Dec 13 01:54:55.643164 containerd[2155]: time="2024-12-13T01:54:55.643084450Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3\""
Dec 13 01:54:55.644897 containerd[2155]: time="2024-12-13T01:54:55.644191678Z" level=info msg="StartContainer for \"abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3\""
Dec 13 01:54:55.693862 kubelet[3450]: I1213 01:54:55.693675 3450 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 01:54:55.764821 containerd[2155]: time="2024-12-13T01:54:55.764648747Z" level=info msg="StartContainer for \"abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3\" returns successfully"
Dec 13 01:54:55.773015 kubelet[3450]: E1213 01:54:55.772963 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 01:54:55.773015 kubelet[3450]: W1213 01:54:55.773003 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 01:54:55.774314 kubelet[3450]: E1213 01:54:55.773041 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Dec 13 01:54:55.778750 kubelet[3450]: E1213 01:54:55.778542 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.778750 kubelet[3450]: W1213 01:54:55.778581 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.778750 kubelet[3450]: E1213 01:54:55.778615 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.780350 kubelet[3450]: E1213 01:54:55.780277 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.780350 kubelet[3450]: W1213 01:54:55.780309 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.780749 kubelet[3450]: E1213 01:54:55.780642 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.781488 kubelet[3450]: E1213 01:54:55.781262 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.781488 kubelet[3450]: W1213 01:54:55.781292 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.781488 kubelet[3450]: E1213 01:54:55.781323 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.782160 kubelet[3450]: E1213 01:54:55.781987 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.782160 kubelet[3450]: W1213 01:54:55.782015 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.782160 kubelet[3450]: E1213 01:54:55.782051 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.782958 kubelet[3450]: E1213 01:54:55.782773 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.782958 kubelet[3450]: W1213 01:54:55.782801 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.782958 kubelet[3450]: E1213 01:54:55.782875 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:55.783802 kubelet[3450]: E1213 01:54:55.783599 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.783802 kubelet[3450]: W1213 01:54:55.783626 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.783802 kubelet[3450]: E1213 01:54:55.783661 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.784372 kubelet[3450]: E1213 01:54:55.784345 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.784669 kubelet[3450]: W1213 01:54:55.784482 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.784669 kubelet[3450]: E1213 01:54:55.784517 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.785400 kubelet[3450]: E1213 01:54:55.785223 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.785400 kubelet[3450]: W1213 01:54:55.785285 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.785400 kubelet[3450]: E1213 01:54:55.785318 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.786350 kubelet[3450]: E1213 01:54:55.786139 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.786350 kubelet[3450]: W1213 01:54:55.786167 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.786350 kubelet[3450]: E1213 01:54:55.786252 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.787218 kubelet[3450]: E1213 01:54:55.786954 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.787218 kubelet[3450]: W1213 01:54:55.786997 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.787218 kubelet[3450]: E1213 01:54:55.787031 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:55.788888 kubelet[3450]: E1213 01:54:55.788703 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.788888 kubelet[3450]: W1213 01:54:55.788734 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.788888 kubelet[3450]: E1213 01:54:55.788769 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.789827 kubelet[3450]: E1213 01:54:55.789656 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.789827 kubelet[3450]: W1213 01:54:55.789688 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.789827 kubelet[3450]: E1213 01:54:55.789722 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.791431 kubelet[3450]: E1213 01:54:55.791178 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.791431 kubelet[3450]: W1213 01:54:55.791256 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.791431 kubelet[3450]: E1213 01:54:55.791310 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.792140 kubelet[3450]: E1213 01:54:55.791966 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.792140 kubelet[3450]: W1213 01:54:55.791992 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.792483 kubelet[3450]: E1213 01:54:55.792431 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.792840 kubelet[3450]: E1213 01:54:55.792787 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.792840 kubelet[3450]: W1213 01:54:55.792811 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.793184 kubelet[3450]: E1213 01:54:55.793006 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:55.793997 kubelet[3450]: E1213 01:54:55.793711 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.793997 kubelet[3450]: W1213 01:54:55.793738 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.793997 kubelet[3450]: E1213 01:54:55.793794 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.794697 kubelet[3450]: E1213 01:54:55.794511 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.794697 kubelet[3450]: W1213 01:54:55.794536 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.794697 kubelet[3450]: E1213 01:54:55.794588 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.794989 kubelet[3450]: E1213 01:54:55.794969 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.795085 kubelet[3450]: W1213 01:54:55.795065 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.795383 kubelet[3450]: E1213 01:54:55.795347 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.795803 kubelet[3450]: E1213 01:54:55.795638 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.795803 kubelet[3450]: W1213 01:54:55.795659 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.795803 kubelet[3450]: E1213 01:54:55.795705 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.796078 kubelet[3450]: E1213 01:54:55.796059 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.796293 kubelet[3450]: W1213 01:54:55.796153 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.797120 kubelet[3450]: E1213 01:54:55.796622 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:55.797120 kubelet[3450]: E1213 01:54:55.796715 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.797120 kubelet[3450]: W1213 01:54:55.796731 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.797120 kubelet[3450]: E1213 01:54:55.796768 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.797825 kubelet[3450]: E1213 01:54:55.797799 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.797970 kubelet[3450]: W1213 01:54:55.797946 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.798153 kubelet[3450]: E1213 01:54:55.798047 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.798964 kubelet[3450]: E1213 01:54:55.798905 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.799181 kubelet[3450]: W1213 01:54:55.798933 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.799368 kubelet[3450]: E1213 01:54:55.799106 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.801184 kubelet[3450]: E1213 01:54:55.801127 3450 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:55.801184 kubelet[3450]: W1213 01:54:55.801171 3450 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:55.801384 kubelet[3450]: E1213 01:54:55.801261 3450 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:55.856687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3-rootfs.mount: Deactivated successfully. 
Dec 13 01:54:56.080929 containerd[2155]: time="2024-12-13T01:54:56.080676105Z" level=info msg="shim disconnected" id=abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3 namespace=k8s.io
Dec 13 01:54:56.080929 containerd[2155]: time="2024-12-13T01:54:56.080749089Z" level=warning msg="cleaning up after shim disconnected" id=abfe674e7d377a51541a81bb0830a26ee20f8698997aae8593d489268fc4a5a3 namespace=k8s.io
Dec 13 01:54:56.080929 containerd[2155]: time="2024-12-13T01:54:56.080769741Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:54:56.704475 containerd[2155]: time="2024-12-13T01:54:56.703927416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Dec 13 01:54:56.729604 kubelet[3450]: I1213 01:54:56.729544 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-66dc655765-gf2xj" podStartSLOduration=3.524971137 podStartE2EDuration="5.72844752s" podCreationTimestamp="2024-12-13 01:54:51 +0000 UTC" firstStartedPulling="2024-12-13 01:54:52.124242017 +0000 UTC m=+23.958090816" lastFinishedPulling="2024-12-13 01:54:54.3277184 +0000 UTC m=+26.161567199" observedRunningTime="2024-12-13 01:54:54.740806522 +0000 UTC m=+26.574655345" watchObservedRunningTime="2024-12-13 01:54:56.72844752 +0000 UTC m=+28.562296331"
Dec 13 01:54:57.424178 kubelet[3450]: E1213 01:54:57.423893 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30"
Dec 13 01:54:59.423590 kubelet[3450]: E1213 01:54:59.423087 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30"
Dec 13 01:55:00.399479 containerd[2155]: time="2024-12-13T01:55:00.399407510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:00.401181 containerd[2155]: time="2024-12-13T01:55:00.401053802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Dec 13 01:55:00.402373 containerd[2155]: time="2024-12-13T01:55:00.402251654Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:00.406303 containerd[2155]: time="2024-12-13T01:55:00.406194938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:55:00.407975 containerd[2155]: time="2024-12-13T01:55:00.407910578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.703918998s"
Dec 13 01:55:00.408349 containerd[2155]: time="2024-12-13T01:55:00.407974826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Dec 13 01:55:00.412822 containerd[2155]: time="2024-12-13T01:55:00.412735454Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:55:00.433574 containerd[2155]: time="2024-12-13T01:55:00.433385054Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e\""
Dec 13 01:55:00.435490 containerd[2155]: time="2024-12-13T01:55:00.435421886Z" level=info msg="StartContainer for \"288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e\""
Dec 13 01:55:00.545301 containerd[2155]: time="2024-12-13T01:55:00.545104287Z" level=info msg="StartContainer for \"288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e\" returns successfully"
Dec 13 01:55:01.391874 containerd[2155]: time="2024-12-13T01:55:01.391810311Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:55:01.425197 kubelet[3450]: E1213 01:55:01.423585 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30"
Dec 13 01:55:01.447575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e-rootfs.mount: Deactivated successfully.
Dec 13 01:55:01.454490 kubelet[3450]: I1213 01:55:01.454380 3450 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:55:01.504368 kubelet[3450]: I1213 01:55:01.504267 3450 topology_manager.go:215] "Topology Admit Handler" podUID="e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9" podNamespace="kube-system" podName="coredns-76f75df574-zztz7"
Dec 13 01:55:01.511292 kubelet[3450]: I1213 01:55:01.510647 3450 topology_manager.go:215] "Topology Admit Handler" podUID="9e1a0733-2392-49a6-b8a5-5725a39b39fb" podNamespace="kube-system" podName="coredns-76f75df574-v9nm9"
Dec 13 01:55:01.524268 kubelet[3450]: I1213 01:55:01.518561 3450 topology_manager.go:215] "Topology Admit Handler" podUID="b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6" podNamespace="calico-apiserver" podName="calico-apiserver-7578865df6-bdb44"
Dec 13 01:55:01.524268 kubelet[3450]: I1213 01:55:01.518851 3450 topology_manager.go:215] "Topology Admit Handler" podUID="a8345640-795c-4440-889c-5f65038d3192" podNamespace="calico-apiserver" podName="calico-apiserver-7578865df6-rpkll"
Dec 13 01:55:01.536238 kubelet[3450]: I1213 01:55:01.534599 3450 topology_manager.go:215] "Topology Admit Handler" podUID="9a019854-49b1-4766-9d83-02d10b056c78" podNamespace="calico-system" podName="calico-kube-controllers-5594565998-zqvpl"
Dec 13 01:55:01.645183 kubelet[3450]: I1213 01:55:01.645022 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a8345640-795c-4440-889c-5f65038d3192-calico-apiserver-certs\") pod \"calico-apiserver-7578865df6-rpkll\" (UID: \"a8345640-795c-4440-889c-5f65038d3192\") " pod="calico-apiserver/calico-apiserver-7578865df6-rpkll"
Dec 13 01:55:01.645183 kubelet[3450]: I1213 01:55:01.645120 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db4fm\" (UniqueName: \"kubernetes.io/projected/9a019854-49b1-4766-9d83-02d10b056c78-kube-api-access-db4fm\") pod \"calico-kube-controllers-5594565998-zqvpl\" (UID: \"9a019854-49b1-4766-9d83-02d10b056c78\") " pod="calico-system/calico-kube-controllers-5594565998-zqvpl"
Dec 13 01:55:01.645183 kubelet[3450]: I1213 01:55:01.645173 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6-calico-apiserver-certs\") pod \"calico-apiserver-7578865df6-bdb44\" (UID: \"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6\") " pod="calico-apiserver/calico-apiserver-7578865df6-bdb44"
Dec 13 01:55:01.645513 kubelet[3450]: I1213 01:55:01.645264 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9vk\" (UniqueName: \"kubernetes.io/projected/a8345640-795c-4440-889c-5f65038d3192-kube-api-access-cj9vk\") pod \"calico-apiserver-7578865df6-rpkll\" (UID: \"a8345640-795c-4440-889c-5f65038d3192\") " pod="calico-apiserver/calico-apiserver-7578865df6-rpkll"
Dec 13 01:55:01.645513 kubelet[3450]: I1213 01:55:01.645318 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28vgf\" (UniqueName: \"kubernetes.io/projected/9e1a0733-2392-49a6-b8a5-5725a39b39fb-kube-api-access-28vgf\") pod \"coredns-76f75df574-v9nm9\" (UID: \"9e1a0733-2392-49a6-b8a5-5725a39b39fb\") " pod="kube-system/coredns-76f75df574-v9nm9"
Dec 13 01:55:01.645513 kubelet[3450]: I1213 01:55:01.645364 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbrqq\" (UniqueName: \"kubernetes.io/projected/b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6-kube-api-access-gbrqq\") pod \"calico-apiserver-7578865df6-bdb44\" (UID: \"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6\") " pod="calico-apiserver/calico-apiserver-7578865df6-bdb44"
Dec 13 01:55:01.645513 kubelet[3450]: I1213 01:55:01.645410 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a019854-49b1-4766-9d83-02d10b056c78-tigera-ca-bundle\") pod \"calico-kube-controllers-5594565998-zqvpl\" (UID: \"9a019854-49b1-4766-9d83-02d10b056c78\") " pod="calico-system/calico-kube-controllers-5594565998-zqvpl"
Dec 13 01:55:01.645513 kubelet[3450]: I1213 01:55:01.645473 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1a0733-2392-49a6-b8a5-5725a39b39fb-config-volume\") pod \"coredns-76f75df574-v9nm9\" (UID: \"9e1a0733-2392-49a6-b8a5-5725a39b39fb\") " pod="kube-system/coredns-76f75df574-v9nm9"
Dec 13 01:55:01.645815 kubelet[3450]: I1213 01:55:01.645522 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9-config-volume\") pod \"coredns-76f75df574-zztz7\" (UID: \"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9\") " pod="kube-system/coredns-76f75df574-zztz7"
Dec 13 01:55:01.645815 kubelet[3450]: I1213 01:55:01.645571 3450 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52cxw\" (UniqueName: \"kubernetes.io/projected/e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9-kube-api-access-52cxw\") pod \"coredns-76f75df574-zztz7\" (UID: \"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9\") " pod="kube-system/coredns-76f75df574-zztz7"
Dec 13 01:55:01.873075 containerd[2155]: time="2024-12-13T01:55:01.872483669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v9nm9,Uid:9e1a0733-2392-49a6-b8a5-5725a39b39fb,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:01.883064 containerd[2155]: time="2024-12-13T01:55:01.882995141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-bdb44,Uid:b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:55:01.883833 containerd[2155]: time="2024-12-13T01:55:01.883761593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5594565998-zqvpl,Uid:9a019854-49b1-4766-9d83-02d10b056c78,Namespace:calico-system,Attempt:0,}"
Dec 13 01:55:02.132061 containerd[2155]: time="2024-12-13T01:55:02.131907591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zztz7,Uid:e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9,Namespace:kube-system,Attempt:0,}"
Dec 13 01:55:02.159444 containerd[2155]: time="2024-12-13T01:55:02.159096387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-rpkll,Uid:a8345640-795c-4440-889c-5f65038d3192,Namespace:calico-apiserver,Attempt:0,}"
Dec 13 01:55:02.380816 containerd[2155]: time="2024-12-13T01:55:02.380539816Z" level=info msg="shim disconnected" id=288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e namespace=k8s.io
Dec 13 01:55:02.380816 containerd[2155]: time="2024-12-13T01:55:02.380735416Z" level=warning msg="cleaning up after shim disconnected" id=288d9f90703232d4b57090dee236db257ddad8dd904aba45f3cf16c6bcf5369e namespace=k8s.io
Dec 13 01:55:02.380816 containerd[2155]: time="2024-12-13T01:55:02.380775568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:55:02.723927 containerd[2155]: time="2024-12-13T01:55:02.723856650Z" level=error msg="Failed to destroy network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.728988 containerd[2155]: time="2024-12-13T01:55:02.728686254Z" level=error msg="encountered an error cleaning up failed sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.728988 containerd[2155]: time="2024-12-13T01:55:02.728829042Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-bdb44,Uid:b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.731949 kubelet[3450]: E1213 01:55:02.730508 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.731949 kubelet[3450]: E1213 01:55:02.730606 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7578865df6-bdb44"
Dec 13 01:55:02.731949 kubelet[3450]: E1213 01:55:02.730648 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7578865df6-bdb44"
Dec 13 01:55:02.733007 kubelet[3450]: E1213 01:55:02.730754 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7578865df6-bdb44_calico-apiserver(b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7578865df6-bdb44_calico-apiserver(b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7578865df6-bdb44" podUID="b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6"
Dec 13 01:55:02.739101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283-shm.mount: Deactivated successfully.
Dec 13 01:55:02.755222 containerd[2155]: time="2024-12-13T01:55:02.749827950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Dec 13 01:55:02.809780 containerd[2155]: time="2024-12-13T01:55:02.809572014Z" level=error msg="Failed to destroy network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.823363 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d-shm.mount: Deactivated successfully.
Dec 13 01:55:02.829543 containerd[2155]: time="2024-12-13T01:55:02.829458006Z" level=error msg="encountered an error cleaning up failed sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.829705 containerd[2155]: time="2024-12-13T01:55:02.829577958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5594565998-zqvpl,Uid:9a019854-49b1-4766-9d83-02d10b056c78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.830256 kubelet[3450]: E1213 01:55:02.829916 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.830256 kubelet[3450]: E1213 01:55:02.830002 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5594565998-zqvpl"
Dec 13 01:55:02.830256 kubelet[3450]: E1213 01:55:02.830041 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5594565998-zqvpl"
Dec 13 01:55:02.832567 kubelet[3450]: E1213 01:55:02.830121 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5594565998-zqvpl_calico-system(9a019854-49b1-4766-9d83-02d10b056c78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5594565998-zqvpl_calico-system(9a019854-49b1-4766-9d83-02d10b056c78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5594565998-zqvpl" podUID="9a019854-49b1-4766-9d83-02d10b056c78"
Dec 13 01:55:02.841251 containerd[2155]: time="2024-12-13T01:55:02.840547530Z" level=error msg="Failed to destroy network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.842927 containerd[2155]: time="2024-12-13T01:55:02.842684802Z" level=error msg="encountered an error cleaning up failed sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.843124 containerd[2155]: time="2024-12-13T01:55:02.843065382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zztz7,Uid:e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.843846 kubelet[3450]: E1213 01:55:02.843790 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.843985 kubelet[3450]: E1213 01:55:02.843882 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zztz7"
Dec 13 01:55:02.843985 kubelet[3450]: E1213 01:55:02.843921 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zztz7"
Dec 13 01:55:02.844146 kubelet[3450]: E1213 01:55:02.844002 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zztz7_kube-system(e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zztz7_kube-system(e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zztz7" podUID="e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9"
Dec 13 01:55:02.853085 containerd[2155]: time="2024-12-13T01:55:02.853013682Z" level=error msg="Failed to destroy network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.853383 containerd[2155]: time="2024-12-13T01:55:02.853340706Z" level=error msg="Failed to destroy network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.854017 containerd[2155]: time="2024-12-13T01:55:02.853969218Z" level=error msg="encountered an error cleaning up failed sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.855084 containerd[2155]: time="2024-12-13T01:55:02.854242434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v9nm9,Uid:9e1a0733-2392-49a6-b8a5-5725a39b39fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.855084 containerd[2155]: time="2024-12-13T01:55:02.854647614Z" level=error msg="encountered an error cleaning up failed sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.855084 containerd[2155]: time="2024-12-13T01:55:02.854868582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-rpkll,Uid:a8345640-795c-4440-889c-5f65038d3192,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.855479 kubelet[3450]: E1213 01:55:02.854577 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.855479 kubelet[3450]: E1213 01:55:02.854649 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v9nm9"
Dec 13 01:55:02.855479 kubelet[3450]: E1213 01:55:02.854696 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-v9nm9"
Dec 13 01:55:02.855683 kubelet[3450]: E1213 01:55:02.854789 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-v9nm9_kube-system(9e1a0733-2392-49a6-b8a5-5725a39b39fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-v9nm9_kube-system(9e1a0733-2392-49a6-b8a5-5725a39b39fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v9nm9" podUID="9e1a0733-2392-49a6-b8a5-5725a39b39fb"
Dec 13 01:55:02.856474 kubelet[3450]: E1213 01:55:02.856323 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 01:55:02.856947 kubelet[3450]: E1213 01:55:02.856659 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7578865df6-rpkll"
Dec 13 01:55:02.856947 kubelet[3450]: E1213 01:55:02.856771 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7578865df6-rpkll"
Dec 13 01:55:02.856947 kubelet[3450]: E1213 01:55:02.856887 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7578865df6-rpkll_calico-apiserver(a8345640-795c-4440-889c-5f65038d3192)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7578865df6-rpkll_calico-apiserver(a8345640-795c-4440-889c-5f65038d3192)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7578865df6-rpkll" podUID="a8345640-795c-4440-889c-5f65038d3192"
Dec 13 01:55:03.432739 containerd[2155]: time="2024-12-13T01:55:03.432682157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsw8v,Uid:283caadd-1af1-4d62-bdf3-ec7850179f30,Namespace:calico-system,Attempt:0,}"
Dec 13 01:55:03.444694 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2-shm.mount: Deactivated successfully.
Dec 13 01:55:03.445696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a-shm.mount: Deactivated successfully.
Dec 13 01:55:03.446225 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042-shm.mount: Deactivated successfully.
Dec 13 01:55:03.559305 containerd[2155]: time="2024-12-13T01:55:03.559133118Z" level=error msg="Failed to destroy network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.560157 containerd[2155]: time="2024-12-13T01:55:03.560085138Z" level=error msg="encountered an error cleaning up failed sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.560313 containerd[2155]: time="2024-12-13T01:55:03.560197854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsw8v,Uid:283caadd-1af1-4d62-bdf3-ec7850179f30,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.564633 kubelet[3450]: E1213 01:55:03.562458 3450 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.564633 kubelet[3450]: E1213 01:55:03.562537 3450 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsw8v" Dec 13 01:55:03.564633 kubelet[3450]: E1213 01:55:03.562576 3450 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xsw8v" Dec 13 01:55:03.566699 kubelet[3450]: E1213 01:55:03.566633 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xsw8v_calico-system(283caadd-1af1-4d62-bdf3-ec7850179f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xsw8v_calico-system(283caadd-1af1-4d62-bdf3-ec7850179f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30" Dec 13 01:55:03.567797 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be-shm.mount: Deactivated successfully. Dec 13 01:55:03.749681 kubelet[3450]: I1213 01:55:03.749454 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:03.752525 containerd[2155]: time="2024-12-13T01:55:03.751012975Z" level=info msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" Dec 13 01:55:03.752525 containerd[2155]: time="2024-12-13T01:55:03.751657735Z" level=info msg="Ensure that sandbox 95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d in task-service has been cleanup successfully" Dec 13 01:55:03.758104 kubelet[3450]: I1213 01:55:03.757695 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:03.762132 containerd[2155]: time="2024-12-13T01:55:03.760196011Z" level=info msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" Dec 13 01:55:03.762657 containerd[2155]: time="2024-12-13T01:55:03.762608359Z" level=info msg="Ensure that sandbox e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283 in task-service has been cleanup successfully" Dec 13 01:55:03.773796 kubelet[3450]: I1213 01:55:03.773747 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:03.782000 containerd[2155]: time="2024-12-13T01:55:03.781724959Z" level=info msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" Dec 13 01:55:03.782154 containerd[2155]: time="2024-12-13T01:55:03.782120515Z" level=info msg="Ensure that sandbox 3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2 in task-service has been cleanup successfully" Dec 13 01:55:03.809128 kubelet[3450]: I1213 01:55:03.809077 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:03.815250 containerd[2155]: time="2024-12-13T01:55:03.815009851Z" level=info msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" Dec 13 01:55:03.816807 containerd[2155]: time="2024-12-13T01:55:03.815407327Z" level=info msg="Ensure that sandbox 7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a in task-service has been cleanup successfully" Dec 13 01:55:03.841737 kubelet[3450]: I1213 01:55:03.840807 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:03.845072 containerd[2155]: time="2024-12-13T01:55:03.845006563Z" level=info msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" Dec 13 01:55:03.845387 containerd[2155]: time="2024-12-13T01:55:03.845342623Z" level=info msg="Ensure that sandbox 146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be in task-service has been cleanup successfully" Dec 13 01:55:03.868689 containerd[2155]: time="2024-12-13T01:55:03.868615951Z" level=info msg="StopPodSandbox for 
\"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" Dec 13 01:55:03.868981 containerd[2155]: time="2024-12-13T01:55:03.868934347Z" level=info msg="Ensure that sandbox a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042 in task-service has been cleanup successfully" Dec 13 01:55:03.869463 kubelet[3450]: I1213 01:55:03.866451 3450 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:03.933296 containerd[2155]: time="2024-12-13T01:55:03.932786024Z" level=error msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" failed" error="failed to destroy network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.935450 kubelet[3450]: E1213 01:55:03.935391 3450 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:03.935679 kubelet[3450]: E1213 01:55:03.935524 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283"} Dec 13 01:55:03.935679 kubelet[3450]: E1213 01:55:03.935592 3450 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:03.935679 kubelet[3450]: E1213 01:55:03.935645 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7578865df6-bdb44" podUID="b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6" Dec 13 01:55:03.973594 containerd[2155]: time="2024-12-13T01:55:03.973511168Z" level=error msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" failed" error="failed to destroy network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.974023 kubelet[3450]: E1213 01:55:03.973908 3450 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:03.974312 kubelet[3450]: E1213 01:55:03.974024 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d"} Dec 13 01:55:03.974312 kubelet[3450]: E1213 01:55:03.974100 3450 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a019854-49b1-4766-9d83-02d10b056c78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:03.974312 kubelet[3450]: E1213 01:55:03.974154 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a019854-49b1-4766-9d83-02d10b056c78\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5594565998-zqvpl" podUID="9a019854-49b1-4766-9d83-02d10b056c78" Dec 13 01:55:03.990420 containerd[2155]: time="2024-12-13T01:55:03.990342620Z" level=error msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" failed" error="failed to destroy network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:03.990818 kubelet[3450]: E1213 01:55:03.990730 3450 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:03.990818 kubelet[3450]: E1213 01:55:03.990803 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2"} Dec 13 01:55:03.991130 kubelet[3450]: E1213 01:55:03.990869 3450 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8345640-795c-4440-889c-5f65038d3192\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\\\": 
plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:03.991130 kubelet[3450]: E1213 01:55:03.990920 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8345640-795c-4440-889c-5f65038d3192\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7578865df6-rpkll" podUID="a8345640-795c-4440-889c-5f65038d3192" Dec 13 01:55:04.019262 containerd[2155]: time="2024-12-13T01:55:04.019044796Z" level=error msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" failed" error="failed to destroy network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:04.019463 kubelet[3450]: E1213 01:55:04.019424 3450 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:04.019572 kubelet[3450]: E1213 01:55:04.019492 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a"} Dec 13 01:55:04.019572 kubelet[3450]: E1213 01:55:04.019561 3450 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:04.019739 kubelet[3450]: E1213 01:55:04.019618 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zztz7" podUID="e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9" Dec 13 01:55:04.030763 containerd[2155]: time="2024-12-13T01:55:04.030687700Z" level=error msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" failed" error="failed to destroy network for sandbox 
\"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:04.032451 kubelet[3450]: E1213 01:55:04.032410 3450 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:04.032596 kubelet[3450]: E1213 01:55:04.032481 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be"} Dec 13 01:55:04.032596 kubelet[3450]: E1213 01:55:04.032569 3450 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"283caadd-1af1-4d62-bdf3-ec7850179f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:04.032788 kubelet[3450]: E1213 01:55:04.032620 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"283caadd-1af1-4d62-bdf3-ec7850179f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xsw8v" podUID="283caadd-1af1-4d62-bdf3-ec7850179f30" Dec 13 01:55:04.045853 containerd[2155]: time="2024-12-13T01:55:04.045289564Z" level=error msg="StopPodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" failed" error="failed to destroy network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:04.046424 kubelet[3450]: E1213 01:55:04.045828 3450 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:04.046424 kubelet[3450]: E1213 01:55:04.045890 3450 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042"} Dec 13 01:55:04.046424 kubelet[3450]: E1213 01:55:04.045955 3450 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e1a0733-2392-49a6-b8a5-5725a39b39fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:04.046424 kubelet[3450]: E1213 01:55:04.046007 3450 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e1a0733-2392-49a6-b8a5-5725a39b39fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-v9nm9" podUID="9e1a0733-2392-49a6-b8a5-5725a39b39fb" Dec 13 01:55:09.064183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2888098208.mount: Deactivated successfully. Dec 13 01:55:09.154261 containerd[2155]: time="2024-12-13T01:55:09.152428330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.157851 containerd[2155]: time="2024-12-13T01:55:09.157760914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:55:09.162180 containerd[2155]: time="2024-12-13T01:55:09.162115930Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.171307 containerd[2155]: time="2024-12-13T01:55:09.171233578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.172308 containerd[2155]: time="2024-12-13T01:55:09.172220506Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.422003468s" Dec 13 01:55:09.172442 containerd[2155]: time="2024-12-13T01:55:09.172312786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:55:09.211322 containerd[2155]: time="2024-12-13T01:55:09.209842426Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:55:09.274596 containerd[2155]: time="2024-12-13T01:55:09.274531918Z" level=info msg="CreateContainer within sandbox \"cb8f1397c2dfd37b51334a1425b506bb943d17c7a08e8d78e37db2c5ec322bbc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3fe908d42e0a00008810f37c3c47e7542ceaf8bf4e599a3e61133adbde129b4c\"" Dec 13 01:55:09.276709 containerd[2155]: 
time="2024-12-13T01:55:09.276543802Z" level=info msg="StartContainer for \"3fe908d42e0a00008810f37c3c47e7542ceaf8bf4e599a3e61133adbde129b4c\"" Dec 13 01:55:09.423509 containerd[2155]: time="2024-12-13T01:55:09.423434123Z" level=info msg="StartContainer for \"3fe908d42e0a00008810f37c3c47e7542ceaf8bf4e599a3e61133adbde129b4c\" returns successfully" Dec 13 01:55:09.547983 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:55:09.548151 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:55:10.936089 systemd[1]: run-containerd-runc-k8s.io-3fe908d42e0a00008810f37c3c47e7542ceaf8bf4e599a3e61133adbde129b4c-runc.5x2jwF.mount: Deactivated successfully. Dec 13 01:55:11.461515 kubelet[3450]: I1213 01:55:11.461459 3450 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:11.503541 kubelet[3450]: I1213 01:55:11.500017 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-w7kbj" podStartSLOduration=3.540630116 podStartE2EDuration="20.499958065s" podCreationTimestamp="2024-12-13 01:54:51 +0000 UTC" firstStartedPulling="2024-12-13 01:54:52.213274649 +0000 UTC m=+24.047123436" lastFinishedPulling="2024-12-13 01:55:09.172602598 +0000 UTC m=+41.006451385" observedRunningTime="2024-12-13 01:55:10.026986714 +0000 UTC m=+41.860835537" watchObservedRunningTime="2024-12-13 01:55:11.499958065 +0000 UTC m=+43.333806852" Dec 13 01:55:11.975512 kernel: bpftool[4933]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:55:12.310730 (udev-worker)[4747]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:12.312512 systemd-networkd[1694]: vxlan.calico: Link UP Dec 13 01:55:12.312520 systemd-networkd[1694]: vxlan.calico: Gained carrier Dec 13 01:55:12.352245 (udev-worker)[4743]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:13.014822 systemd[1]: Started sshd@7-172.31.22.156:22-139.178.68.195:57388.service - OpenSSH per-connection server daemon (139.178.68.195:57388). Dec 13 01:55:13.204965 sshd[5013]: Accepted publickey for core from 139.178.68.195 port 57388 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.208544 sshd[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.217064 systemd-logind[2111]: New session 8 of user core. Dec 13 01:55:13.227719 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:55:13.509725 sshd[5013]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.516997 systemd-logind[2111]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:55:13.520836 systemd[1]: sshd@7-172.31.22.156:22-139.178.68.195:57388.service: Deactivated successfully. Dec 13 01:55:13.533348 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:55:13.536318 systemd-logind[2111]: Removed session 8. 
Dec 13 01:55:14.311731 systemd-networkd[1694]: vxlan.calico: Gained IPv6LL Dec 13 01:55:15.427169 containerd[2155]: time="2024-12-13T01:55:15.423915713Z" level=info msg="StopPodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" Dec 13 01:55:15.427169 containerd[2155]: time="2024-12-13T01:55:15.424349969Z" level=info msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.593 [INFO][5056] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.593 [INFO][5056] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" iface="eth0" netns="/var/run/netns/cni-8d7a516e-6fb5-6a9b-eb40-d197a5e3b066" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.596 [INFO][5056] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" iface="eth0" netns="/var/run/netns/cni-8d7a516e-6fb5-6a9b-eb40-d197a5e3b066" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.604 [INFO][5056] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" iface="eth0" netns="/var/run/netns/cni-8d7a516e-6fb5-6a9b-eb40-d197a5e3b066" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.604 [INFO][5056] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.604 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.670 [INFO][5070] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.670 [INFO][5070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.671 [INFO][5070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.688 [WARNING][5070] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.688 [INFO][5070] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.691 [INFO][5070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:15.709876 containerd[2155]: 2024-12-13 01:55:15.705 [INFO][5056] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:15.714030 containerd[2155]: time="2024-12-13T01:55:15.710372970Z" level=info msg="TearDown network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" successfully" Dec 13 01:55:15.714030 containerd[2155]: time="2024-12-13T01:55:15.710416242Z" level=info msg="StopPodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" returns successfully" Dec 13 01:55:15.720049 systemd[1]: run-netns-cni\x2d8d7a516e\x2d6fb5\x2d6a9b\x2deb40\x2dd197a5e3b066.mount: Deactivated successfully. Dec 13 01:55:15.728635 containerd[2155]: time="2024-12-13T01:55:15.725699634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v9nm9,Uid:9e1a0733-2392-49a6-b8a5-5725a39b39fb,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.603 [INFO][5057] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.605 [INFO][5057] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" iface="eth0" netns="/var/run/netns/cni-ebcc9304-edd4-1814-324b-6656b6852f19" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.607 [INFO][5057] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" iface="eth0" netns="/var/run/netns/cni-ebcc9304-edd4-1814-324b-6656b6852f19" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.608 [INFO][5057] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" iface="eth0" netns="/var/run/netns/cni-ebcc9304-edd4-1814-324b-6656b6852f19" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.609 [INFO][5057] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.610 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.685 [INFO][5071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.685 [INFO][5071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.692 [INFO][5071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.706 [WARNING][5071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.706 [INFO][5071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.708 [INFO][5071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:15.728635 containerd[2155]: 2024-12-13 01:55:15.719 [INFO][5057] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:15.728635 containerd[2155]: time="2024-12-13T01:55:15.727652670Z" level=info msg="TearDown network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" successfully" Dec 13 01:55:15.728635 containerd[2155]: time="2024-12-13T01:55:15.727705710Z" level=info msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" returns successfully" Dec 13 01:55:15.732183 containerd[2155]: time="2024-12-13T01:55:15.728786730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zztz7,Uid:e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:15.735172 systemd[1]: run-netns-cni\x2debcc9304\x2dedd4\x2d1814\x2d324b\x2d6656b6852f19.mount: Deactivated successfully. Dec 13 01:55:16.023357 systemd-networkd[1694]: calibe26ed07dad: Link UP Dec 13 01:55:16.027395 systemd-networkd[1694]: calibe26ed07dad: Gained carrier Dec 13 01:55:16.040087 (udev-worker)[5120]: Network interface NamePolicy= disabled on kernel command line. 
Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.859 [INFO][5084] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0 coredns-76f75df574- kube-system 9e1a0733-2392-49a6-b8a5-5725a39b39fb 802 0 2024-12-13 01:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-156 coredns-76f75df574-v9nm9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe26ed07dad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.862 [INFO][5084] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.926 [INFO][5107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" HandleID="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.945 [INFO][5107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" HandleID="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042b860), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-156", "pod":"coredns-76f75df574-v9nm9", "timestamp":"2024-12-13 01:55:15.926268823 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.945 [INFO][5107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.945 [INFO][5107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.945 [INFO][5107] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.948 [INFO][5107] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.959 [INFO][5107] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.968 [INFO][5107] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.972 [INFO][5107] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.975 [INFO][5107] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.976 [INFO][5107] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.978 [INFO][5107] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.985 [INFO][5107] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5107] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.129/26] block=192.168.57.128/26 handle="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5107] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.129/26] handle="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" host="ip-172-31-22-156" Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:16.090615 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.129/26] IPv6=[] ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" HandleID="k8s-pod-network.518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.004 [INFO][5084] cni-plugin/k8s.go 386: Populated endpoint ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e1a0733-2392-49a6-b8a5-5725a39b39fb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"coredns-76f75df574-v9nm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe26ed07dad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.004 [INFO][5084] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.129/32] ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.004 [INFO][5084] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe26ed07dad ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.028 [INFO][5084] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9"
WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.030 [INFO][5084] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e1a0733-2392-49a6-b8a5-5725a39b39fb", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab", Pod:"coredns-76f75df574-v9nm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe26ed07dad", MAC:"32:2d:d5:b2:33:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.094759 containerd[2155]: 2024-12-13 01:55:16.061 [INFO][5084] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab" Namespace="kube-system" Pod="coredns-76f75df574-v9nm9" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:16.113424 systemd-networkd[1694]: cali9969bccf4e7: Link UP Dec 13 01:55:16.115404 systemd-networkd[1694]: cali9969bccf4e7: Gained carrier Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.870 [INFO][5089] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0 coredns-76f75df574- kube-system e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9 803 0 2024-12-13 01:54:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-156 coredns-76f75df574-zztz7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9969bccf4e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.872 [INFO][5089] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.940 [INFO][5111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" HandleID="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.966 [INFO][5111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" HandleID="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317860), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-156", "pod":"coredns-76f75df574-zztz7", "timestamp":"2024-12-13 01:55:15.939958483 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.967 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:15.999 [INFO][5111] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.004 [INFO][5111] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.014 [INFO][5111] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.028 [INFO][5111] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.044 [INFO][5111] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.050 [INFO][5111] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.050 [INFO][5111] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.061 [INFO][5111] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01 Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.074 [INFO][5111] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.099 [INFO][5111] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.130/26] block=192.168.57.128/26 handle="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.099 [INFO][5111] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.130/26] handle="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" host="ip-172-31-22-156" Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.099 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:16.160873 containerd[2155]: 2024-12-13 01:55:16.099 [INFO][5111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.130/26] IPv6=[] ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" HandleID="k8s-pod-network.d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.103 [INFO][5089] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"coredns-76f75df574-zztz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9969bccf4e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.104 [INFO][5089] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.130/32] ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.104 [INFO][5089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9969bccf4e7 ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.114 [INFO][5089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" 
WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.119 [INFO][5089] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01", Pod:"coredns-76f75df574-zztz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9969bccf4e7", MAC:"be:a5:e7:d2:8c:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.162748 containerd[2155]: 2024-12-13 01:55:16.147 [INFO][5089] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01" Namespace="kube-system" Pod="coredns-76f75df574-zztz7" WorkloadEndpoint="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:16.187289 containerd[2155]: time="2024-12-13T01:55:16.186367060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:16.187289 containerd[2155]: time="2024-12-13T01:55:16.186470920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:16.187289 containerd[2155]: time="2024-12-13T01:55:16.186524944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.187289 containerd[2155]: time="2024-12-13T01:55:16.186758776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.247863 containerd[2155]: time="2024-12-13T01:55:16.247196189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:16.247863 containerd[2155]: time="2024-12-13T01:55:16.247788281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:16.249253 containerd[2155]: time="2024-12-13T01:55:16.248967041Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.249566 containerd[2155]: time="2024-12-13T01:55:16.249307397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.330391 containerd[2155]: time="2024-12-13T01:55:16.330326381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-v9nm9,Uid:9e1a0733-2392-49a6-b8a5-5725a39b39fb,Namespace:kube-system,Attempt:1,} returns sandbox id \"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab\"" Dec 13 01:55:16.374188 containerd[2155]: time="2024-12-13T01:55:16.373990001Z" level=info msg="CreateContainer within sandbox \"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:16.389937 containerd[2155]: time="2024-12-13T01:55:16.389870405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zztz7,Uid:e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9,Namespace:kube-system,Attempt:1,} returns sandbox id \"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01\"" Dec 13 01:55:16.399021 containerd[2155]: time="2024-12-13T01:55:16.398155158Z" level=info msg="CreateContainer within sandbox \"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:16.417768 containerd[2155]: time="2024-12-13T01:55:16.417711510Z" level=info msg="CreateContainer within sandbox \"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5bdd5cc0f1e613890acc46df7154183f8c05b30d9010287b3981b019b33c567\"" Dec 13 01:55:16.419431 containerd[2155]: time="2024-12-13T01:55:16.419267238Z" level=info msg="StartContainer for \"a5bdd5cc0f1e613890acc46df7154183f8c05b30d9010287b3981b019b33c567\"" Dec 13 01:55:16.427866 containerd[2155]: time="2024-12-13T01:55:16.427435362Z" level=info msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" Dec 13 01:55:16.442321 containerd[2155]: time="2024-12-13T01:55:16.441047178Z" level=info msg="CreateContainer within sandbox \"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"862e9bccd749f590f0c09454aa0d0c469b67d3248189c32aa342fb353d6150d4\"" Dec 13 01:55:16.443546 containerd[2155]: time="2024-12-13T01:55:16.443496690Z" level=info msg="StartContainer for \"862e9bccd749f590f0c09454aa0d0c469b67d3248189c32aa342fb353d6150d4\"" Dec 13 01:55:16.627583 containerd[2155]: time="2024-12-13T01:55:16.627331855Z" level=info msg="StartContainer for \"a5bdd5cc0f1e613890acc46df7154183f8c05b30d9010287b3981b019b33c567\" returns successfully" Dec 13 01:55:16.683589 containerd[2155]: 
time="2024-12-13T01:55:16.683125747Z" level=info msg="StartContainer for \"862e9bccd749f590f0c09454aa0d0c469b67d3248189c32aa342fb353d6150d4\" returns successfully" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.654 [INFO][5252] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.658 [INFO][5252] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" iface="eth0" netns="/var/run/netns/cni-cf0b1497-e96a-6843-0ef5-2f3e8e278a1d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.658 [INFO][5252] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" iface="eth0" netns="/var/run/netns/cni-cf0b1497-e96a-6843-0ef5-2f3e8e278a1d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.661 [INFO][5252] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" iface="eth0" netns="/var/run/netns/cni-cf0b1497-e96a-6843-0ef5-2f3e8e278a1d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.661 [INFO][5252] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.661 [INFO][5252] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.779 [INFO][5319] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.779 [INFO][5319] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.779 [INFO][5319] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.801 [WARNING][5319] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.801 [INFO][5319] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.803 [INFO][5319] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:16.815070 containerd[2155]: 2024-12-13 01:55:16.811 [INFO][5252] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:16.818435 containerd[2155]: time="2024-12-13T01:55:16.816325796Z" level=info msg="TearDown network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" successfully" Dec 13 01:55:16.818435 containerd[2155]: time="2024-12-13T01:55:16.816382364Z" level=info msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" returns successfully" Dec 13 01:55:16.824326 systemd[1]: run-netns-cni\x2dcf0b1497\x2de96a\x2d6843\x2d0ef5\x2d2f3e8e278a1d.mount: Deactivated successfully. Dec 13 01:55:16.836030 containerd[2155]: time="2024-12-13T01:55:16.835537172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5594565998-zqvpl,Uid:9a019854-49b1-4766-9d83-02d10b056c78,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:17.037090 kubelet[3450]: I1213 01:55:17.037004 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zztz7" podStartSLOduration=36.036915785 podStartE2EDuration="36.036915785s" podCreationTimestamp="2024-12-13 01:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:17.008706569 +0000 UTC m=+48.842555380" watchObservedRunningTime="2024-12-13 01:55:17.036915785 +0000 UTC m=+48.870764584" Dec 13 01:55:17.156759 systemd-networkd[1694]: cali7c3a75e853e: Link UP Dec 13 01:55:17.160172 systemd-networkd[1694]: cali7c3a75e853e: Gained carrier Dec 13 01:55:17.186290 kubelet[3450]: I1213 01:55:17.185323 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-v9nm9" podStartSLOduration=36.185261585 podStartE2EDuration="36.185261585s" podCreationTimestamp="2024-12-13 01:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:17.047866265 +0000 UTC m=+48.881715064" watchObservedRunningTime="2024-12-13 01:55:17.185261585 +0000 UTC m=+49.019110396" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:16.951 [INFO][5334] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0 calico-kube-controllers-5594565998- calico-system 9a019854-49b1-4766-9d83-02d10b056c78 822 0 2024-12-13 01:54:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5594565998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-156 calico-kube-controllers-5594565998-zqvpl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c3a75e853e [] []}} ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:16.952 [INFO][5334] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" 
WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.039 [INFO][5345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" HandleID="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.082 [INFO][5345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" HandleID="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004d4d40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-156", "pod":"calico-kube-controllers-5594565998-zqvpl", "timestamp":"2024-12-13 01:55:17.039351185 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.083 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.083 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.083 [INFO][5345] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.089 [INFO][5345] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.097 [INFO][5345] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.107 [INFO][5345] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.111 [INFO][5345] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.115 [INFO][5345] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.115 [INFO][5345] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.118 [INFO][5345] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.127 [INFO][5345] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.140 
[INFO][5345] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.131/26] block=192.168.57.128/26 handle="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.140 [INFO][5345] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.131/26] handle="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" host="ip-172-31-22-156" Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.140 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:17.191949 containerd[2155]: 2024-12-13 01:55:17.140 [INFO][5345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.131/26] IPv6=[] ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" HandleID="k8s-pod-network.de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.146 [INFO][5334] cni-plugin/k8s.go 386: Populated endpoint ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0", GenerateName:"calico-kube-controllers-5594565998-", Namespace:"calico-system", SelfLink:"", UID:"9a019854-49b1-4766-9d83-02d10b056c78", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5594565998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"calico-kube-controllers-5594565998-zqvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c3a75e853e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.147 [INFO][5334] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.131/32] ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.148 [INFO][5334] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c3a75e853e ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" 
Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.159 [INFO][5334] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.160 [INFO][5334] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0", GenerateName:"calico-kube-controllers-5594565998-", Namespace:"calico-system", SelfLink:"", UID:"9a019854-49b1-4766-9d83-02d10b056c78", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5594565998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d", Pod:"calico-kube-controllers-5594565998-zqvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c3a75e853e", MAC:"ba:45:9e:25:78:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.195185 containerd[2155]: 2024-12-13 01:55:17.186 [INFO][5334] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d" Namespace="calico-system" Pod="calico-kube-controllers-5594565998-zqvpl" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:17.242620 containerd[2155]: time="2024-12-13T01:55:17.241818786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:17.242620 containerd[2155]: time="2024-12-13T01:55:17.241947714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:17.242620 containerd[2155]: time="2024-12-13T01:55:17.241985658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:17.243982 containerd[2155]: time="2024-12-13T01:55:17.243848046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:17.367180 containerd[2155]: time="2024-12-13T01:55:17.367125426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5594565998-zqvpl,Uid:9a019854-49b1-4766-9d83-02d10b056c78,Namespace:calico-system,Attempt:1,} returns sandbox id \"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d\"" Dec 13 01:55:17.373430 containerd[2155]: time="2024-12-13T01:55:17.373370298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:55:17.425039 containerd[2155]: time="2024-12-13T01:55:17.424547923Z" level=info msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" Dec 13 01:55:17.575990 systemd-networkd[1694]: cali9969bccf4e7: Gained IPv6LL Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.516 [INFO][5423] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.516 [INFO][5423] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" iface="eth0" netns="/var/run/netns/cni-c5e6852a-f338-9944-1b50-0d7c8be92147" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.517 [INFO][5423] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" iface="eth0" netns="/var/run/netns/cni-c5e6852a-f338-9944-1b50-0d7c8be92147" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.517 [INFO][5423] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" iface="eth0" netns="/var/run/netns/cni-c5e6852a-f338-9944-1b50-0d7c8be92147" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.517 [INFO][5423] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.517 [INFO][5423] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.562 [INFO][5429] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.562 [INFO][5429] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.562 [INFO][5429] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.574 [WARNING][5429] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.575 [INFO][5429] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.578 [INFO][5429] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:17.584072 containerd[2155]: 2024-12-13 01:55:17.581 [INFO][5423] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:17.586123 containerd[2155]: time="2024-12-13T01:55:17.584351683Z" level=info msg="TearDown network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" successfully" Dec 13 01:55:17.586123 containerd[2155]: time="2024-12-13T01:55:17.584394751Z" level=info msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" returns successfully" Dec 13 01:55:17.586123 containerd[2155]: time="2024-12-13T01:55:17.585642907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-rpkll,Uid:a8345640-795c-4440-889c-5f65038d3192,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:17.728107 systemd[1]: run-netns-cni\x2dc5e6852a\x2df338\x2d9944\x2d1b50\x2d0d7c8be92147.mount: Deactivated successfully. 
Dec 13 01:55:17.767793 systemd-networkd[1694]: calibe26ed07dad: Gained IPv6LL Dec 13 01:55:17.853338 systemd-networkd[1694]: cali424b11dca4a: Link UP Dec 13 01:55:17.856040 systemd-networkd[1694]: cali424b11dca4a: Gained carrier Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.676 [INFO][5436] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0 calico-apiserver-7578865df6- calico-apiserver a8345640-795c-4440-889c-5f65038d3192 841 0 2024-12-13 01:54:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7578865df6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-156 calico-apiserver-7578865df6-rpkll eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali424b11dca4a [] []}} ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.677 [INFO][5436] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.759 [INFO][5446] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" HandleID="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.786 [INFO][5446] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" HandleID="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319c00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-156", "pod":"calico-apiserver-7578865df6-rpkll", "timestamp":"2024-12-13 01:55:17.759492116 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.786 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.786 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.786 [INFO][5446] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.789 [INFO][5446] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.798 [INFO][5446] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.809 [INFO][5446] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.813 [INFO][5446] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.819 [INFO][5446] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.819 [INFO][5446] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.822 [INFO][5446] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178 Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.829 [INFO][5446] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.843 [INFO][5446] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.132/26] block=192.168.57.128/26 handle="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.843 [INFO][5446] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.132/26] handle="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" host="ip-172-31-22-156" Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.843 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
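Each CNI ADD in this log ends with a one-line result summary ("Calico CNI IPAM assigned addresses IPv4=[…] … ContainerID=…"), which is enough to reconstruct a container-to-address table from the journal alone. A hedged Go sketch, with a regular expression tailored to the exact textual shape of these lines and nothing more general:

    package main

    import (
        "fmt"
        "regexp"
    )

    // assigned matches the recurring Calico IPAM summary entries in this log.
    var assigned = regexp.MustCompile(
        `Calico CNI IPAM assigned addresses IPv4=\[([0-9./]+)\].*?ContainerID="([0-9a-f]+)"`)

    func main() {
        line := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.132/26] IPv6=[] ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178"`
        if m := assigned.FindStringSubmatch(line); m != nil {
            fmt.Printf("%s... -> %s\n", m[2][:12], m[1])
            // 103bea0a5bf5... -> 192.168.57.132/26
        }
    }

Note the summary reports the address with the block's /26 mask, while the WorkloadEndpoint spec records the same address as a /32 (IPNetworks:[]string{"192.168.57.132/32"}).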
Dec 13 01:55:17.890422 containerd[2155]: 2024-12-13 01:55:17.843 [INFO][5446] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.132/26] IPv6=[] ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" HandleID="k8s-pod-network.103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.847 [INFO][5436] cni-plugin/k8s.go 386: Populated endpoint ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8345640-795c-4440-889c-5f65038d3192", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"calico-apiserver-7578865df6-rpkll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424b11dca4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.847 [INFO][5436] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.132/32] ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.847 [INFO][5436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali424b11dca4a ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.855 [INFO][5436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.858 [INFO][5436] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8345640-795c-4440-889c-5f65038d3192", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178", Pod:"calico-apiserver-7578865df6-rpkll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424b11dca4a", MAC:"12:3b:34:08:48:05", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.892499 containerd[2155]: 2024-12-13 01:55:17.883 [INFO][5436] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-rpkll" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:17.935384 containerd[2155]: time="2024-12-13T01:55:17.934972257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:17.935384 containerd[2155]: time="2024-12-13T01:55:17.935088633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:17.935384 containerd[2155]: time="2024-12-13T01:55:17.935125977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:17.936056 containerd[2155]: time="2024-12-13T01:55:17.935964561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:18.109104 containerd[2155]: time="2024-12-13T01:55:18.108995802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-rpkll,Uid:a8345640-795c-4440-889c-5f65038d3192,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178\"" Dec 13 01:55:18.426980 containerd[2155]: time="2024-12-13T01:55:18.426491336Z" level=info msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" Dec 13 01:55:18.541915 systemd[1]: Started sshd@8-172.31.22.156:22-139.178.68.195:49852.service - OpenSSH per-connection server daemon (139.178.68.195:49852). Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.525 [INFO][5529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.526 [INFO][5529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" iface="eth0" netns="/var/run/netns/cni-8926039a-85d5-8ebf-29cc-7ab57bc2294e" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.526 [INFO][5529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" iface="eth0" netns="/var/run/netns/cni-8926039a-85d5-8ebf-29cc-7ab57bc2294e" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.527 [INFO][5529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" iface="eth0" netns="/var/run/netns/cni-8926039a-85d5-8ebf-29cc-7ab57bc2294e" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.527 [INFO][5529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.527 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.612 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.613 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.613 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.626 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.626 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.629 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:18.640506 containerd[2155]: 2024-12-13 01:55:18.632 [INFO][5529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:18.640506 containerd[2155]: time="2024-12-13T01:55:18.640375869Z" level=info msg="TearDown network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" successfully" Dec 13 01:55:18.640506 containerd[2155]: time="2024-12-13T01:55:18.640424205Z" level=info msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" returns successfully" Dec 13 01:55:18.646834 containerd[2155]: time="2024-12-13T01:55:18.643614861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsw8v,Uid:283caadd-1af1-4d62-bdf3-ec7850179f30,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:18.650638 systemd[1]: run-netns-cni\x2d8926039a\x2d85d5\x2d8ebf\x2d29cc\x2d7ab57bc2294e.mount: Deactivated successfully. Dec 13 01:55:18.767372 sshd[5536]: Accepted publickey for core from 139.178.68.195 port 49852 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:18.772952 sshd[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:18.797387 systemd-logind[2111]: New session 9 of user core. Dec 13 01:55:18.801869 systemd[1]: Started session-9.scope - Session 9 of User core. 
Dec 13 01:55:19.051292 systemd-networkd[1694]: cali7c3a75e853e: Gained IPv6LL Dec 13 01:55:19.055335 systemd-networkd[1694]: cali6ae7b28e213: Link UP Dec 13 01:55:19.058972 systemd-networkd[1694]: cali6ae7b28e213: Gained carrier Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.804 [INFO][5546] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0 csi-node-driver- calico-system 283caadd-1af1-4d62-bdf3-ec7850179f30 858 0 2024-12-13 01:54:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-22-156 csi-node-driver-xsw8v eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6ae7b28e213 [] []}} ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.810 [INFO][5546] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.893 [INFO][5556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" HandleID="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.918 [INFO][5556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" HandleID="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c420), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-156", "pod":"csi-node-driver-xsw8v", "timestamp":"2024-12-13 01:55:18.89384803 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.918 [INFO][5556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.918 [INFO][5556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.918 [INFO][5556] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.925 [INFO][5556] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.936 [INFO][5556] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.955 [INFO][5556] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.960 [INFO][5556] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.974 [INFO][5556] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.974 [INFO][5556] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:18.980 [INFO][5556] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:19.002 [INFO][5556] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:19.025 [INFO][5556] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.133/26] block=192.168.57.128/26 handle="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:19.025 [INFO][5556] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.133/26] handle="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" host="ip-172-31-22-156" Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:19.026 [INFO][5556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
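The systemd-networkd entries interleaved above ("cali9969bccf4e7: Gained IPv6LL", and later the same for cali7c3a75e853e and cali424b11dca4a) mean each cali veth acquired an IPv6 link-local address. The log never prints the address itself; assuming the classic modified EUI-64 scheme of RFC 4291 (kernels can instead be configured for stable-privacy addresses, so this is the conventional mapping, not a guarantee), it can be derived from an endpoint MAC recorded earlier, e.g. be:a5:e7:d2:8c:ba:

    package main

    import (
        "fmt"
        "net"
    )

    // linkLocalFromMAC applies modified EUI-64: flip the universal/local
    // bit of the first MAC byte and splice ff:fe into the middle.
    func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
        ip := make(net.IP, net.IPv6len)
        ip[0], ip[1] = 0xfe, 0x80
        ip[8] = mac[0] ^ 0x02
        ip[9], ip[10] = mac[1], mac[2]
        ip[11], ip[12] = 0xff, 0xfe
        ip[13], ip[14], ip[15] = mac[3], mac[4], mac[5]
        return ip
    }

    func main() {
        mac, _ := net.ParseMAC("be:a5:e7:d2:8c:ba") // endpoint MAC logged above
        fmt.Println(linkLocalFromMAC(mac))          // fe80::bca5:e7ff:fed2:8cba
    }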
Dec 13 01:55:19.132502 containerd[2155]: 2024-12-13 01:55:19.026 [INFO][5556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.133/26] IPv6=[] ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" HandleID="k8s-pod-network.8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.037 [INFO][5546] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"283caadd-1af1-4d62-bdf3-ec7850179f30", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"csi-node-driver-xsw8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ae7b28e213", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.039 [INFO][5546] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.133/32] ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.039 [INFO][5546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ae7b28e213 ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.061 [INFO][5546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.062 [INFO][5546] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" 
Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"283caadd-1af1-4d62-bdf3-ec7850179f30", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca", Pod:"csi-node-driver-xsw8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ae7b28e213", MAC:"9a:1a:4c:e8:57:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:19.133864 containerd[2155]: 2024-12-13 01:55:19.099 [INFO][5546] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca" Namespace="calico-system" Pod="csi-node-driver-xsw8v" WorkloadEndpoint="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:19.261238 sshd[5536]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:19.275236 containerd[2155]: time="2024-12-13T01:55:19.274281044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:19.275236 containerd[2155]: time="2024-12-13T01:55:19.274388696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:19.275236 containerd[2155]: time="2024-12-13T01:55:19.274416032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:19.275236 containerd[2155]: time="2024-12-13T01:55:19.274601312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:19.277131 systemd[1]: sshd@8-172.31.22.156:22-139.178.68.195:49852.service: Deactivated successfully. Dec 13 01:55:19.311915 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:55:19.326706 systemd-logind[2111]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:55:19.336234 systemd-logind[2111]: Removed session 9. 
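[Annotation] The struct dumps above are the projectcalico.org/v3 WorkloadEndpoint that the CNI plugin first populates ("Populated endpoint") and then, once the veth and MAC exist, writes back ("Wrote updated endpoint to datastore"). A sketch reconstructing the same object in Go, assuming the github.com/projectcalico/api module is available; every field value below is copied from the dump itself:

package main

import (
	"fmt"

	v3 "github.com/projectcalico/api/pkg/apis/projectcalico/v3"
)

// buildEndpoint rebuilds the WorkloadEndpoint logged for
// csi-node-driver-xsw8v. The real plugin fills MAC and ContainerID
// in a second pass ("Added Mac, interface name, and active
// container ID to endpoint").
func buildEndpoint() *v3.WorkloadEndpoint {
	wep := v3.NewWorkloadEndpoint()
	wep.Name = "ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0"
	wep.Namespace = "calico-system"
	wep.Spec = v3.WorkloadEndpointSpec{
		Orchestrator:       "k8s",
		Node:               "ip-172-31-22-156",
		ContainerID:        "8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca",
		Pod:                "csi-node-driver-xsw8v",
		Endpoint:           "eth0",
		ServiceAccountName: "csi-node-driver",
		InterfaceName:      "cali6ae7b28e213",
		MAC:                "9a:1a:4c:e8:57:26",
		IPNetworks:         []string{"192.168.57.133/32"},
		Profiles:           []string{"kns.calico-system", "ksa.calico-system.csi-node-driver"},
	}
	return wep
}

func main() {
	fmt.Printf("%+v\n", buildEndpoint().Spec)
}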
Dec 13 01:55:19.435955 containerd[2155]: time="2024-12-13T01:55:19.435459165Z" level=info msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" Dec 13 01:55:19.542463 containerd[2155]: time="2024-12-13T01:55:19.541937661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xsw8v,Uid:283caadd-1af1-4d62-bdf3-ec7850179f30,Namespace:calico-system,Attempt:1,} returns sandbox id \"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca\"" Dec 13 01:55:19.880503 systemd-networkd[1694]: cali424b11dca4a: Gained IPv6LL Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.678 [INFO][5640] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.678 [INFO][5640] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" iface="eth0" netns="/var/run/netns/cni-5b2ce8d6-cfbd-86f3-9b13-2a06487e7595" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.684 [INFO][5640] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" iface="eth0" netns="/var/run/netns/cni-5b2ce8d6-cfbd-86f3-9b13-2a06487e7595" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.685 [INFO][5640] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" iface="eth0" netns="/var/run/netns/cni-5b2ce8d6-cfbd-86f3-9b13-2a06487e7595" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.685 [INFO][5640] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.686 [INFO][5640] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.861 [INFO][5653] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.871 [INFO][5653] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.871 [INFO][5653] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.898 [WARNING][5653] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.898 [INFO][5653] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.903 [INFO][5653] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:19.926110 containerd[2155]: 2024-12-13 01:55:19.913 [INFO][5640] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:19.939016 containerd[2155]: time="2024-12-13T01:55:19.938460083Z" level=info msg="TearDown network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" successfully" Dec 13 01:55:19.939016 containerd[2155]: time="2024-12-13T01:55:19.938545967Z" level=info msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" returns successfully" Dec 13 01:55:19.939838 systemd[1]: run-netns-cni\x2d5b2ce8d6\x2dcfbd\x2d86f3\x2d9b13\x2d2a06487e7595.mount: Deactivated successfully. Dec 13 01:55:19.947355 containerd[2155]: time="2024-12-13T01:55:19.946780259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-bdb44,Uid:b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:20.374251 systemd-networkd[1694]: cali8f62b04ad07: Link UP Dec 13 01:55:20.379695 systemd-networkd[1694]: cali8f62b04ad07: Gained carrier Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.178 [INFO][5661] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0 calico-apiserver-7578865df6- calico-apiserver b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6 870 0 2024-12-13 01:54:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7578865df6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-156 calico-apiserver-7578865df6-bdb44 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8f62b04ad07 [] []}} ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.179 [INFO][5661] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.254 [INFO][5672] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" HandleID="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.282 [INFO][5672] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" HandleID="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-156", "pod":"calico-apiserver-7578865df6-bdb44", "timestamp":"2024-12-13 01:55:20.254570733 +0000 UTC"}, Hostname:"ip-172-31-22-156", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.283 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.283 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.283 [INFO][5672] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-156' Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.286 [INFO][5672] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.297 [INFO][5672] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.310 [INFO][5672] ipam/ipam.go 489: Trying affinity for 192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.317 [INFO][5672] ipam/ipam.go 155: Attempting to load block cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.323 [INFO][5672] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.57.128/26 host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.323 [INFO][5672] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.57.128/26 handle="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.327 [INFO][5672] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90 Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.337 [INFO][5672] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.57.128/26 handle="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.355 [INFO][5672] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.57.134/26] block=192.168.57.128/26 handle="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 
01:55:20.356 [INFO][5672] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.57.134/26] handle="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" host="ip-172-31-22-156" Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.356 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:20.445150 containerd[2155]: 2024-12-13 01:55:20.356 [INFO][5672] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.57.134/26] IPv6=[] ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" HandleID="k8s-pod-network.1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.361 [INFO][5661] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"", Pod:"calico-apiserver-7578865df6-bdb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f62b04ad07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.361 [INFO][5661] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.57.134/32] ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.361 [INFO][5661] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f62b04ad07 ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.387 [INFO][5661] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.388 [INFO][5661] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90", Pod:"calico-apiserver-7578865df6-bdb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f62b04ad07", MAC:"5e:9a:0b:ef:51:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:20.447386 containerd[2155]: 2024-12-13 01:55:20.411 [INFO][5661] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90" Namespace="calico-apiserver" Pod="calico-apiserver-7578865df6-bdb44" WorkloadEndpoint="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:20.516417 containerd[2155]: time="2024-12-13T01:55:20.515997058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:20.517675 containerd[2155]: time="2024-12-13T01:55:20.516577090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:20.518348 containerd[2155]: time="2024-12-13T01:55:20.518275834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:20.519047 containerd[2155]: time="2024-12-13T01:55:20.518973466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:20.648324 systemd-networkd[1694]: cali6ae7b28e213: Gained IPv6LL Dec 13 01:55:20.689194 containerd[2155]: time="2024-12-13T01:55:20.689003375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7578865df6-bdb44,Uid:b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90\"" Dec 13 01:55:21.117189 containerd[2155]: time="2024-12-13T01:55:21.117008193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.119327 containerd[2155]: time="2024-12-13T01:55:21.119266845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:55:21.121782 containerd[2155]: time="2024-12-13T01:55:21.121704645Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.126672 containerd[2155]: time="2024-12-13T01:55:21.126571029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:21.129122 containerd[2155]: time="2024-12-13T01:55:21.127988085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.754548055s" Dec 13 01:55:21.129122 containerd[2155]: time="2024-12-13T01:55:21.128041713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:55:21.129839 containerd[2155]: time="2024-12-13T01:55:21.129733605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:21.159032 containerd[2155]: time="2024-12-13T01:55:21.157028265Z" level=info msg="CreateContainer within sandbox \"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:55:21.194144 containerd[2155]: time="2024-12-13T01:55:21.193753305Z" level=info msg="CreateContainer within sandbox \"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7532743c977f741b54c718e2f2e8fb76cbeca14dc10b1af3546acb4ae32d61eb\"" Dec 13 01:55:21.196989 containerd[2155]: time="2024-12-13T01:55:21.196339497Z" level=info msg="StartContainer for \"7532743c977f741b54c718e2f2e8fb76cbeca14dc10b1af3546acb4ae32d61eb\"" Dec 13 01:55:21.323933 containerd[2155]: time="2024-12-13T01:55:21.323861206Z" level=info msg="StartContainer for \"7532743c977f741b54c718e2f2e8fb76cbeca14dc10b1af3546acb4ae32d61eb\" returns successfully" Dec 13 01:55:21.481148 systemd-networkd[1694]: cali8f62b04ad07: Gained IPv6LL Dec 13 01:55:22.122334 kubelet[3450]: I1213 01:55:22.121388 3450 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="calico-system/calico-kube-controllers-5594565998-zqvpl" podStartSLOduration=27.364292539 podStartE2EDuration="31.121315222s" podCreationTimestamp="2024-12-13 01:54:51 +0000 UTC" firstStartedPulling="2024-12-13 01:55:17.37147233 +0000 UTC m=+49.205321129" lastFinishedPulling="2024-12-13 01:55:21.128495013 +0000 UTC m=+52.962343812" observedRunningTime="2024-12-13 01:55:22.117540754 +0000 UTC m=+53.951389637" watchObservedRunningTime="2024-12-13 01:55:22.121315222 +0000 UTC m=+53.955164009" Dec 13 01:55:23.629872 ntpd[2098]: Listen normally on 6 vxlan.calico 192.168.57.128:123 Dec 13 01:55:23.630007 ntpd[2098]: Listen normally on 7 vxlan.calico [fe80::645d:4aff:fe6b:44d1%4]:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 6 vxlan.calico 192.168.57.128:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 7 vxlan.calico [fe80::645d:4aff:fe6b:44d1%4]:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 8 calibe26ed07dad [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 9 cali9969bccf4e7 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 10 cali7c3a75e853e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:55:23.630964 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 11 cali424b11dca4a [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:55:23.630091 ntpd[2098]: Listen normally on 8 calibe26ed07dad [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:23.630158 ntpd[2098]: Listen normally on 9 cali9969bccf4e7 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:23.630252 ntpd[2098]: Listen normally on 10 cali7c3a75e853e [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:55:23.630323 ntpd[2098]: Listen normally on 11 cali424b11dca4a [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:55:23.633474 ntpd[2098]: Listen normally on 12 cali6ae7b28e213 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:55:23.634052 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 12 cali6ae7b28e213 [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:55:23.634052 ntpd[2098]: 13 Dec 01:55:23 ntpd[2098]: Listen normally on 13 cali8f62b04ad07 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:55:23.633601 ntpd[2098]: Listen normally on 13 cali8f62b04ad07 [fe80::ecee:eeff:feee:eeee%12]:123 Dec 13 01:55:23.755830 containerd[2155]: time="2024-12-13T01:55:23.754463582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.757614 containerd[2155]: time="2024-12-13T01:55:23.754939142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:55:23.766705 containerd[2155]: time="2024-12-13T01:55:23.766548434Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.769494 containerd[2155]: time="2024-12-13T01:55:23.769306778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.772166 containerd[2155]: time="2024-12-13T01:55:23.772069682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.642267317s" Dec 13 01:55:23.773283 containerd[2155]: time="2024-12-13T01:55:23.772166030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:23.775432 containerd[2155]: time="2024-12-13T01:55:23.774442802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:23.782275 containerd[2155]: time="2024-12-13T01:55:23.780839390Z" level=info msg="CreateContainer within sandbox \"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:23.807568 containerd[2155]: time="2024-12-13T01:55:23.807302606Z" level=info msg="CreateContainer within sandbox \"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d71e71532175b0071757a12511badce325f01af76fd97dd95e055a7fb870b7c3\"" Dec 13 01:55:23.816258 containerd[2155]: time="2024-12-13T01:55:23.814709102Z" level=info msg="StartContainer for \"d71e71532175b0071757a12511badce325f01af76fd97dd95e055a7fb870b7c3\"" Dec 13 01:55:23.957956 systemd[1]: run-containerd-runc-k8s.io-d71e71532175b0071757a12511badce325f01af76fd97dd95e055a7fb870b7c3-runc.qu01vV.mount: Deactivated successfully. Dec 13 01:55:24.040570 containerd[2155]: time="2024-12-13T01:55:24.040492211Z" level=info msg="StartContainer for \"d71e71532175b0071757a12511badce325f01af76fd97dd95e055a7fb870b7c3\" returns successfully" Dec 13 01:55:24.294894 systemd[1]: Started sshd@9-172.31.22.156:22-139.178.68.195:49854.service - OpenSSH per-connection server daemon (139.178.68.195:49854). Dec 13 01:55:24.491260 sshd[5840]: Accepted publickey for core from 139.178.68.195 port 49854 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:24.496560 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:24.507492 systemd-logind[2111]: New session 10 of user core. Dec 13 01:55:24.516585 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:55:24.847248 sshd[5840]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:24.852513 systemd-logind[2111]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:55:24.859965 systemd[1]: sshd@9-172.31.22.156:22-139.178.68.195:49854.service: Deactivated successfully. Dec 13 01:55:24.869974 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:55:24.886373 systemd[1]: Started sshd@10-172.31.22.156:22-139.178.68.195:49858.service - OpenSSH per-connection server daemon (139.178.68.195:49858). Dec 13 01:55:24.890014 systemd-logind[2111]: Removed session 10. Dec 13 01:55:25.063966 sshd[5855]: Accepted publickey for core from 139.178.68.195 port 49858 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:25.071632 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:25.090339 systemd-logind[2111]: New session 11 of user core. Dec 13 01:55:25.093793 systemd[1]: Started session-11.scope - Session 11 of User core. 
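[Annotation] Behind the PullImage/CreateContainer/StartContainer lines above, kubelet is driving containerd's CRI plugin over gRPC. The same lifecycle can be sketched directly against containerd's native Go client — illustrative only, not what kubelet literally runs; the socket path and the "calico-csi-demo" IDs are assumptions, while the image reference is taken from the log:

package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Dial the same socket and use the "k8s.io" namespace in which
	// the CRI plugin keeps its images and containers.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull resolves the tag, fetches missing blobs, and unpacks a
	// snapshot — the "PullImage ... returns image reference" step.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata + rootfs snapshot + OCI runtime spec.
	container, err := client.NewContainer(ctx, "calico-csi-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-csi-demo-rootfs", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: the task is the live runc process behind the
	// "StartContainer ... returns successfully" entries, handled by
	// the io.containerd.runc.v2 shim whose plugin loading is logged.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}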
Dec 13 01:55:25.122954 kubelet[3450]: I1213 01:55:25.122253 3450 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:25.280783 containerd[2155]: time="2024-12-13T01:55:25.278244878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.284293 containerd[2155]: time="2024-12-13T01:55:25.284191730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:55:25.291131 containerd[2155]: time="2024-12-13T01:55:25.291024206Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.302712 containerd[2155]: time="2024-12-13T01:55:25.300970370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.302712 containerd[2155]: time="2024-12-13T01:55:25.301957274Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.527451616s" Dec 13 01:55:25.302712 containerd[2155]: time="2024-12-13T01:55:25.302005598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:55:25.307363 containerd[2155]: time="2024-12-13T01:55:25.305097326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:25.309515 containerd[2155]: time="2024-12-13T01:55:25.309436718Z" level=info msg="CreateContainer within sandbox \"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:25.369813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount409035935.mount: Deactivated successfully. Dec 13 01:55:25.385315 containerd[2155]: time="2024-12-13T01:55:25.383980838Z" level=info msg="CreateContainer within sandbox \"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"881fba247a1a2d01934d900a1365a07867d5bb7442812a70934b7cd9f0feaf0f\"" Dec 13 01:55:25.392998 containerd[2155]: time="2024-12-13T01:55:25.392467946Z" level=info msg="StartContainer for \"881fba247a1a2d01934d900a1365a07867d5bb7442812a70934b7cd9f0feaf0f\"" Dec 13 01:55:25.681794 containerd[2155]: time="2024-12-13T01:55:25.681606916Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:25.684776 containerd[2155]: time="2024-12-13T01:55:25.683876020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:55:25.688529 sshd[5855]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:25.696890 systemd-logind[2111]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:55:25.700611 systemd[1]: sshd@10-172.31.22.156:22-139.178.68.195:49858.service: Deactivated successfully. 
Dec 13 01:55:25.718817 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:55:25.731085 systemd-logind[2111]: Removed session 11. Dec 13 01:55:25.746076 containerd[2155]: time="2024-12-13T01:55:25.744246184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 435.957458ms" Dec 13 01:55:25.746076 containerd[2155]: time="2024-12-13T01:55:25.744318412Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:25.752123 containerd[2155]: time="2024-12-13T01:55:25.751648120Z" level=info msg="CreateContainer within sandbox \"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:25.764406 systemd[1]: Started sshd@11-172.31.22.156:22-139.178.68.195:49866.service - OpenSSH per-connection server daemon (139.178.68.195:49866). Dec 13 01:55:25.767909 containerd[2155]: time="2024-12-13T01:55:25.767144056Z" level=info msg="StartContainer for \"881fba247a1a2d01934d900a1365a07867d5bb7442812a70934b7cd9f0feaf0f\" returns successfully" Dec 13 01:55:25.776971 containerd[2155]: time="2024-12-13T01:55:25.776881696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:25.816182 containerd[2155]: time="2024-12-13T01:55:25.813227560Z" level=info msg="CreateContainer within sandbox \"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"38948d6a40d939a5e9700f1ed68cbe37736746f6a065925e75ec101c6abc2cdb\"" Dec 13 01:55:25.819597 containerd[2155]: time="2024-12-13T01:55:25.818771740Z" level=info msg="StartContainer for \"38948d6a40d939a5e9700f1ed68cbe37736746f6a065925e75ec101c6abc2cdb\"" Dec 13 01:55:25.981410 sshd[5904]: Accepted publickey for core from 139.178.68.195 port 49866 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:25.981043 sshd[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:25.993283 systemd-logind[2111]: New session 12 of user core. Dec 13 01:55:26.000419 systemd[1]: Started session-12.scope - Session 12 of User core. 
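[Annotation] Note that the second apiserver pull logged ImageUpdate rather than ImageCreate and read only 77 bytes: every blob was already in the content store from the first pull, so the resolver re-fetched just the manifest and the pull finished in ~436 ms instead of 2.6 s. A client that wants to skip even that round-trip can consult the image store first — a sketch against the containerd Go client, where ensureImage is a hypothetical helper:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

// ensureImage returns the image if containerd already holds it,
// pulling only on a miss. Even an unconditional Pull of present
// content is cheap (the ~77-byte manifest re-fetch above); checking
// first skips the registry round-trip entirely.
func ensureImage(ctx context.Context, client *containerd.Client, ref string) (containerd.Image, error) {
	img, err := client.GetImage(ctx, ref)
	if err == nil {
		return img, nil // already present: containerd emits ImageUpdate, not ImageCreate
	}
	if !errdefs.IsNotFound(err) {
		return nil, err
	}
	return client.Pull(ctx, ref, containerd.WithPullUnpack)
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := ensureImage(ctx, client, "ghcr.io/flatcar/calico/apiserver:v3.29.1")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("have", img.Name())
}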
Dec 13 01:55:26.017677 containerd[2155]: time="2024-12-13T01:55:26.016581889Z" level=info msg="StartContainer for \"38948d6a40d939a5e9700f1ed68cbe37736746f6a065925e75ec101c6abc2cdb\" returns successfully" Dec 13 01:55:26.178609 kubelet[3450]: I1213 01:55:26.177556 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7578865df6-rpkll" podStartSLOduration=32.51588175 podStartE2EDuration="38.177475214s" podCreationTimestamp="2024-12-13 01:54:48 +0000 UTC" firstStartedPulling="2024-12-13 01:55:18.111151194 +0000 UTC m=+49.944999969" lastFinishedPulling="2024-12-13 01:55:23.772744634 +0000 UTC m=+55.606593433" observedRunningTime="2024-12-13 01:55:24.1354848 +0000 UTC m=+55.969333611" watchObservedRunningTime="2024-12-13 01:55:26.177475214 +0000 UTC m=+58.011324001" Dec 13 01:55:26.184907 kubelet[3450]: I1213 01:55:26.184033 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7578865df6-bdb44" podStartSLOduration=33.131846829 podStartE2EDuration="38.183924926s" podCreationTimestamp="2024-12-13 01:54:48 +0000 UTC" firstStartedPulling="2024-12-13 01:55:20.694505675 +0000 UTC m=+52.528354450" lastFinishedPulling="2024-12-13 01:55:25.74658376 +0000 UTC m=+57.580432547" observedRunningTime="2024-12-13 01:55:26.176521154 +0000 UTC m=+58.010369989" watchObservedRunningTime="2024-12-13 01:55:26.183924926 +0000 UTC m=+58.017773725" Dec 13 01:55:26.513752 sshd[5904]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:26.523910 systemd[1]: sshd@11-172.31.22.156:22-139.178.68.195:49866.service: Deactivated successfully. Dec 13 01:55:26.538096 systemd-logind[2111]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:55:26.539595 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:55:26.545430 systemd-logind[2111]: Removed session 12. 
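[Annotation] The two podStartSLOduration entries above follow the kubelet tracker's arithmetic: end-to-end duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of the calico-apiserver-7578865df6-bdb44 numbers, using the monotonic offsets (the m=+ values, which avoid wall-clock skew):

package main

import "fmt"

func main() {
	// Monotonic offsets (the "m=+..." values) from the bdb44 entry above.
	const (
		firstStartedPulling = 52.528354450 // s
		lastFinishedPulling = 57.580432547 // s
		podStartE2E         = 38.183924926 // s, watchObservedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pull
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)
	// pull=5.052078097s slo=33.131846829s
}

38.183924926 − (57.580432547 − 52.528354450) = 33.131846829 s, exactly the podStartSLOduration the kubelet logged.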
Dec 13 01:55:27.305727 containerd[2155]: time="2024-12-13T01:55:27.305670076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:27.309189 containerd[2155]: time="2024-12-13T01:55:27.309140632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:55:27.311724 containerd[2155]: time="2024-12-13T01:55:27.311677144Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:27.317832 containerd[2155]: time="2024-12-13T01:55:27.317767816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:27.319750 containerd[2155]: time="2024-12-13T01:55:27.319683208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.542244688s" Dec 13 01:55:27.320845 containerd[2155]: time="2024-12-13T01:55:27.320809156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:55:27.324921 containerd[2155]: time="2024-12-13T01:55:27.324651592Z" level=info msg="CreateContainer within sandbox \"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:27.360665 containerd[2155]: time="2024-12-13T01:55:27.360605896Z" level=info msg="CreateContainer within sandbox \"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ddbc6208fe1b3d537d41ed58cb8e77061e45d734731744dceb9db7f271f2c024\"" Dec 13 01:55:27.369687 containerd[2155]: time="2024-12-13T01:55:27.368022436Z" level=info msg="StartContainer for \"ddbc6208fe1b3d537d41ed58cb8e77061e45d734731744dceb9db7f271f2c024\"" Dec 13 01:55:27.379919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427277605.mount: Deactivated successfully. 
Dec 13 01:55:27.530164 containerd[2155]: time="2024-12-13T01:55:27.529650629Z" level=info msg="StartContainer for \"ddbc6208fe1b3d537d41ed58cb8e77061e45d734731744dceb9db7f271f2c024\" returns successfully" Dec 13 01:55:27.687960 kubelet[3450]: I1213 01:55:27.687305 3450 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:55:27.687960 kubelet[3450]: I1213 01:55:27.687375 3450 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:55:28.175129 kubelet[3450]: I1213 01:55:28.174659 3450 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:28.196830 kubelet[3450]: I1213 01:55:28.196014 3450 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-xsw8v" podStartSLOduration=29.430374417 podStartE2EDuration="37.1959574s" podCreationTimestamp="2024-12-13 01:54:51 +0000 UTC" firstStartedPulling="2024-12-13 01:55:19.555920673 +0000 UTC m=+51.389769472" lastFinishedPulling="2024-12-13 01:55:27.321503656 +0000 UTC m=+59.155352455" observedRunningTime="2024-12-13 01:55:28.195779104 +0000 UTC m=+60.029627915" watchObservedRunningTime="2024-12-13 01:55:28.1959574 +0000 UTC m=+60.029806199" Dec 13 01:55:28.386257 containerd[2155]: time="2024-12-13T01:55:28.384784541Z" level=info msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.467 [WARNING][6016] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01", Pod:"coredns-76f75df574-zztz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9969bccf4e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.467 [INFO][6016] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.467 [INFO][6016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" iface="eth0" netns="" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.467 [INFO][6016] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.468 [INFO][6016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.504 [INFO][6024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.504 [INFO][6024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.505 [INFO][6024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.518 [WARNING][6024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.518 [INFO][6024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.523 [INFO][6024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.529474 containerd[2155]: 2024-12-13 01:55:28.525 [INFO][6016] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.529474 containerd[2155]: time="2024-12-13T01:55:28.529405962Z" level=info msg="TearDown network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" successfully" Dec 13 01:55:28.529474 containerd[2155]: time="2024-12-13T01:55:28.529444542Z" level=info msg="StopPodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" returns successfully" Dec 13 01:55:28.531891 containerd[2155]: time="2024-12-13T01:55:28.531020754Z" level=info msg="RemovePodSandbox for \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" Dec 13 01:55:28.531891 containerd[2155]: time="2024-12-13T01:55:28.531077262Z" level=info msg="Forcibly stopping sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\"" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.601 [WARNING][6042] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e77f90bd-b0f4-4ec6-abfa-2aaf66e43cc9", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"d5c00a3f53d53da7938abf503ff2d6c8adf3861e183cbfed4d28203dcb146f01", Pod:"coredns-76f75df574-zztz7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9969bccf4e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.601 [INFO][6042] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.601 [INFO][6042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" iface="eth0" netns="" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.602 [INFO][6042] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.602 [INFO][6042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.650 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.651 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.651 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.667 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.667 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" HandleID="k8s-pod-network.7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--zztz7-eth0" Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.670 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.675608 containerd[2155]: 2024-12-13 01:55:28.672 [INFO][6042] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a" Dec 13 01:55:28.676538 containerd[2155]: time="2024-12-13T01:55:28.675637831Z" level=info msg="TearDown network for sandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" successfully" Dec 13 01:55:28.680038 containerd[2155]: time="2024-12-13T01:55:28.679921579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:28.680265 containerd[2155]: time="2024-12-13T01:55:28.680041651Z" level=info msg="RemovePodSandbox \"7ba58e1680893246a6fa110cfc88a2a27895825fd97a2df4bb96ed841975288a\" returns successfully" Dec 13 01:55:28.681189 containerd[2155]: time="2024-12-13T01:55:28.681130255Z" level=info msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.761 [WARNING][6066] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8345640-795c-4440-889c-5f65038d3192", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178", Pod:"calico-apiserver-7578865df6-rpkll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424b11dca4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.761 [INFO][6066] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.761 [INFO][6066] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" iface="eth0" netns="" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.761 [INFO][6066] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.761 [INFO][6066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.805 [INFO][6072] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.806 [INFO][6072] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.806 [INFO][6072] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.821 [WARNING][6072] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.821 [INFO][6072] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.824 [INFO][6072] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.830322 containerd[2155]: 2024-12-13 01:55:28.827 [INFO][6066] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.830322 containerd[2155]: time="2024-12-13T01:55:28.830037691Z" level=info msg="TearDown network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" successfully" Dec 13 01:55:28.830322 containerd[2155]: time="2024-12-13T01:55:28.830080807Z" level=info msg="StopPodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" returns successfully" Dec 13 01:55:28.832745 containerd[2155]: time="2024-12-13T01:55:28.832003003Z" level=info msg="RemovePodSandbox for \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" Dec 13 01:55:28.832745 containerd[2155]: time="2024-12-13T01:55:28.832056871Z" level=info msg="Forcibly stopping sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\"" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.902 [WARNING][6090] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a8345640-795c-4440-889c-5f65038d3192", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"103bea0a5bf5d7dd57e2a38b3d5d96ac15d7cbab8b6023e60ab4f241e80cc178", Pod:"calico-apiserver-7578865df6-rpkll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali424b11dca4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.903 [INFO][6090] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.903 [INFO][6090] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" iface="eth0" netns="" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.903 [INFO][6090] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.903 [INFO][6090] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.946 [INFO][6096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.946 [INFO][6096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.946 [INFO][6096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.961 [WARNING][6096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.961 [INFO][6096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" HandleID="k8s-pod-network.3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--rpkll-eth0" Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.964 [INFO][6096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.969350 containerd[2155]: 2024-12-13 01:55:28.966 [INFO][6090] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2" Dec 13 01:55:28.970384 containerd[2155]: time="2024-12-13T01:55:28.969454016Z" level=info msg="TearDown network for sandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" successfully" Dec 13 01:55:28.975147 containerd[2155]: time="2024-12-13T01:55:28.975035480Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:28.975341 containerd[2155]: time="2024-12-13T01:55:28.975176732Z" level=info msg="RemovePodSandbox \"3929dbb394031f7a93239e303b16232bc199fa408ae33d6af073a3c5a6cf48f2\" returns successfully" Dec 13 01:55:28.976376 containerd[2155]: time="2024-12-13T01:55:28.976190576Z" level=info msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.048 [WARNING][6114] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90", Pod:"calico-apiserver-7578865df6-bdb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f62b04ad07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.049 [INFO][6114] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.049 [INFO][6114] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" iface="eth0" netns="" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.049 [INFO][6114] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.049 [INFO][6114] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.093 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.094 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.094 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.108 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.108 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.111 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.117745 containerd[2155]: 2024-12-13 01:55:29.113 [INFO][6114] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.117745 containerd[2155]: time="2024-12-13T01:55:29.117694397Z" level=info msg="TearDown network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" successfully" Dec 13 01:55:29.119958 containerd[2155]: time="2024-12-13T01:55:29.117746837Z" level=info msg="StopPodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" returns successfully" Dec 13 01:55:29.119958 containerd[2155]: time="2024-12-13T01:55:29.119101301Z" level=info msg="RemovePodSandbox for \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" Dec 13 01:55:29.119958 containerd[2155]: time="2024-12-13T01:55:29.119853725Z" level=info msg="Forcibly stopping sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\"" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.183 [WARNING][6138] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0", GenerateName:"calico-apiserver-7578865df6-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5dc9ebd-a877-4fb0-ae45-842b4b9c23d6", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7578865df6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"1c7b706cb1e9d417cb9a2b46c8046cfa28b02b7794c96ea39a76de1688fbfe90", Pod:"calico-apiserver-7578865df6-bdb44", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.57.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8f62b04ad07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.184 [INFO][6138] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.184 [INFO][6138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" iface="eth0" netns="" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.184 [INFO][6138] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.184 [INFO][6138] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.222 [INFO][6144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.223 [INFO][6144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.223 [INFO][6144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.236 [WARNING][6144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.236 [INFO][6144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" HandleID="k8s-pod-network.e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Workload="ip--172--31--22--156-k8s-calico--apiserver--7578865df6--bdb44-eth0" Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.241 [INFO][6144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.248655 containerd[2155]: 2024-12-13 01:55:29.244 [INFO][6138] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283" Dec 13 01:55:29.249942 containerd[2155]: time="2024-12-13T01:55:29.248701553Z" level=info msg="TearDown network for sandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" successfully" Dec 13 01:55:29.254733 containerd[2155]: time="2024-12-13T01:55:29.254541977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:29.254733 containerd[2155]: time="2024-12-13T01:55:29.254655077Z" level=info msg="RemovePodSandbox \"e3e9eb47354632edc7b5b06e4223addc81d40ac618b1d55a84e504db8eec8283\" returns successfully" Dec 13 01:55:29.256793 containerd[2155]: time="2024-12-13T01:55:29.255767753Z" level=info msg="StopPodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.367 [WARNING][6162] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e1a0733-2392-49a6-b8a5-5725a39b39fb", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab", Pod:"coredns-76f75df574-v9nm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe26ed07dad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.368 [INFO][6162] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.368 [INFO][6162] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" iface="eth0" netns="" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.368 [INFO][6162] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.368 [INFO][6162] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.403 [INFO][6168] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.404 [INFO][6168] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.404 [INFO][6168] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.419 [WARNING][6168] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.421 [INFO][6168] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.427 [INFO][6168] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.437735 containerd[2155]: 2024-12-13 01:55:29.433 [INFO][6162] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.439718 containerd[2155]: time="2024-12-13T01:55:29.438320118Z" level=info msg="TearDown network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" successfully" Dec 13 01:55:29.439718 containerd[2155]: time="2024-12-13T01:55:29.438383406Z" level=info msg="StopPodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" returns successfully" Dec 13 01:55:29.439718 containerd[2155]: time="2024-12-13T01:55:29.439012062Z" level=info msg="RemovePodSandbox for \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" Dec 13 01:55:29.439718 containerd[2155]: time="2024-12-13T01:55:29.439058778Z" level=info msg="Forcibly stopping sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\"" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.501 [WARNING][6186] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"9e1a0733-2392-49a6-b8a5-5725a39b39fb", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"518e5060035d1f4b2ee17882ba8b1e5a814b1da9fae899993d14ea3c146a1eab", Pod:"coredns-76f75df574-v9nm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.57.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe26ed07dad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.502 [INFO][6186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.502 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" iface="eth0" netns="" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.502 [INFO][6186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.502 [INFO][6186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.543 [INFO][6192] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.543 [INFO][6192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.543 [INFO][6192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.556 [WARNING][6192] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.557 [INFO][6192] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" HandleID="k8s-pod-network.a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Workload="ip--172--31--22--156-k8s-coredns--76f75df574--v9nm9-eth0" Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.559 [INFO][6192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.564073 containerd[2155]: 2024-12-13 01:55:29.561 [INFO][6186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042" Dec 13 01:55:29.565414 containerd[2155]: time="2024-12-13T01:55:29.564121519Z" level=info msg="TearDown network for sandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" successfully" Dec 13 01:55:29.570420 containerd[2155]: time="2024-12-13T01:55:29.570346831Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:29.570570 containerd[2155]: time="2024-12-13T01:55:29.570461035Z" level=info msg="RemovePodSandbox \"a38286f30d536776596bed383d5c8c084642cbc2d35bc71718687943f3a78042\" returns successfully" Dec 13 01:55:29.571483 containerd[2155]: time="2024-12-13T01:55:29.571252315Z" level=info msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.660 [WARNING][6211] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"283caadd-1af1-4d62-bdf3-ec7850179f30", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca", Pod:"csi-node-driver-xsw8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ae7b28e213", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.661 [INFO][6211] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.661 [INFO][6211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" iface="eth0" netns="" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.661 [INFO][6211] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.661 [INFO][6211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.700 [INFO][6217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.700 [INFO][6217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.700 [INFO][6217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.715 [WARNING][6217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.715 [INFO][6217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.717 [INFO][6217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.722526 containerd[2155]: 2024-12-13 01:55:29.719 [INFO][6211] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.723643 containerd[2155]: time="2024-12-13T01:55:29.722796260Z" level=info msg="TearDown network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" successfully" Dec 13 01:55:29.723643 containerd[2155]: time="2024-12-13T01:55:29.722843672Z" level=info msg="StopPodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" returns successfully" Dec 13 01:55:29.725459 containerd[2155]: time="2024-12-13T01:55:29.724820204Z" level=info msg="RemovePodSandbox for \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" Dec 13 01:55:29.725459 containerd[2155]: time="2024-12-13T01:55:29.724879364Z" level=info msg="Forcibly stopping sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\"" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.793 [WARNING][6236] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"283caadd-1af1-4d62-bdf3-ec7850179f30", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"8595e2587a3b7ec40d17cb339daabcaeac5855a408e362e8b05b66ec447c7bca", Pod:"csi-node-driver-xsw8v", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.57.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6ae7b28e213", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.793 [INFO][6236] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.794 [INFO][6236] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" iface="eth0" netns="" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.794 [INFO][6236] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.794 [INFO][6236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.836 [INFO][6242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.838 [INFO][6242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.838 [INFO][6242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.855 [WARNING][6242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.855 [INFO][6242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" HandleID="k8s-pod-network.146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Workload="ip--172--31--22--156-k8s-csi--node--driver--xsw8v-eth0" Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.858 [INFO][6242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.862942 containerd[2155]: 2024-12-13 01:55:29.860 [INFO][6236] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be" Dec 13 01:55:29.864023 containerd[2155]: time="2024-12-13T01:55:29.862989836Z" level=info msg="TearDown network for sandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" successfully" Dec 13 01:55:29.869285 containerd[2155]: time="2024-12-13T01:55:29.869181584Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:29.869430 containerd[2155]: time="2024-12-13T01:55:29.869314124Z" level=info msg="RemovePodSandbox \"146d9db3d75d3561c3941e2a03194a564daec8c6140bad88c4c77d0230faf4be\" returns successfully" Dec 13 01:55:29.870312 containerd[2155]: time="2024-12-13T01:55:29.870050180Z" level=info msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.930 [WARNING][6260] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0", GenerateName:"calico-kube-controllers-5594565998-", Namespace:"calico-system", SelfLink:"", UID:"9a019854-49b1-4766-9d83-02d10b056c78", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5594565998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d", Pod:"calico-kube-controllers-5594565998-zqvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c3a75e853e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.932 [INFO][6260] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.932 [INFO][6260] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" iface="eth0" netns="" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.932 [INFO][6260] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.932 [INFO][6260] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.969 [INFO][6266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.970 [INFO][6266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.970 [INFO][6266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.983 [WARNING][6266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.983 [INFO][6266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.986 [INFO][6266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.991239 containerd[2155]: 2024-12-13 01:55:29.988 [INFO][6260] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:29.991239 containerd[2155]: time="2024-12-13T01:55:29.991150221Z" level=info msg="TearDown network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" successfully" Dec 13 01:55:29.993136 containerd[2155]: time="2024-12-13T01:55:29.991241889Z" level=info msg="StopPodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" returns successfully" Dec 13 01:55:29.993136 containerd[2155]: time="2024-12-13T01:55:29.993074577Z" level=info msg="RemovePodSandbox for \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" Dec 13 01:55:29.993136 containerd[2155]: time="2024-12-13T01:55:29.993126177Z" level=info msg="Forcibly stopping sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\"" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.057 [WARNING][6284] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0", GenerateName:"calico-kube-controllers-5594565998-", Namespace:"calico-system", SelfLink:"", UID:"9a019854-49b1-4766-9d83-02d10b056c78", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5594565998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-156", ContainerID:"de20f03b94ba40eee533b54cdaf6ed06fb6c7d061cf293f7786d47010e0b112d", Pod:"calico-kube-controllers-5594565998-zqvpl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.57.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c3a75e853e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.057 [INFO][6284] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.057 [INFO][6284] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" iface="eth0" netns="" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.057 [INFO][6284] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.057 [INFO][6284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.096 [INFO][6290] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.097 [INFO][6290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.097 [INFO][6290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.108 [WARNING][6290] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.108 [INFO][6290] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" HandleID="k8s-pod-network.95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Workload="ip--172--31--22--156-k8s-calico--kube--controllers--5594565998--zqvpl-eth0" Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.111 [INFO][6290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:30.116890 containerd[2155]: 2024-12-13 01:55:30.113 [INFO][6284] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d" Dec 13 01:55:30.118427 containerd[2155]: time="2024-12-13T01:55:30.116927478Z" level=info msg="TearDown network for sandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" successfully" Dec 13 01:55:30.123481 containerd[2155]: time="2024-12-13T01:55:30.123383226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:30.123824 containerd[2155]: time="2024-12-13T01:55:30.123485622Z" level=info msg="RemovePodSandbox \"95dc69858a22b42df757b58240d1ae4bed199971e228431b21ac1093126cbe3d\" returns successfully" Dec 13 01:55:31.486540 kubelet[3450]: I1213 01:55:31.486120 3450 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:31.563741 systemd[1]: Started sshd@12-172.31.22.156:22-139.178.68.195:55592.service - OpenSSH per-connection server daemon (139.178.68.195:55592). Dec 13 01:55:31.748482 sshd[6302]: Accepted publickey for core from 139.178.68.195 port 55592 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:31.751047 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:31.761638 systemd-logind[2111]: New session 13 of user core. Dec 13 01:55:31.768744 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:55:31.946416 systemd[1]: run-containerd-runc-k8s.io-7532743c977f741b54c718e2f2e8fb76cbeca14dc10b1af3546acb4ae32d61eb-runc.kAl796.mount: Deactivated successfully. Dec 13 01:55:32.168891 sshd[6302]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:32.186300 systemd-logind[2111]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:55:32.191081 systemd[1]: sshd@12-172.31.22.156:22-139.178.68.195:55592.service: Deactivated successfully. Dec 13 01:55:32.203484 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:55:32.210376 systemd-logind[2111]: Removed session 13. Dec 13 01:55:37.212710 systemd[1]: Started sshd@13-172.31.22.156:22-139.178.68.195:44348.service - OpenSSH per-connection server daemon (139.178.68.195:44348). 
Dec 13 01:55:37.402243 sshd[6369]: Accepted publickey for core from 139.178.68.195 port 44348 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:37.405191 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:37.413315 systemd-logind[2111]: New session 14 of user core. Dec 13 01:55:37.426314 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:55:37.681387 sshd[6369]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:37.686644 systemd[1]: sshd@13-172.31.22.156:22-139.178.68.195:44348.service: Deactivated successfully. Dec 13 01:55:37.694470 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:55:37.697812 systemd-logind[2111]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:55:37.700500 systemd-logind[2111]: Removed session 14. Dec 13 01:55:42.711737 systemd[1]: Started sshd@14-172.31.22.156:22-139.178.68.195:44360.service - OpenSSH per-connection server daemon (139.178.68.195:44360). Dec 13 01:55:42.885790 sshd[6387]: Accepted publickey for core from 139.178.68.195 port 44360 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:42.888790 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:42.898761 systemd-logind[2111]: New session 15 of user core. Dec 13 01:55:42.906196 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:55:43.161593 sshd[6387]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:43.168276 systemd[1]: sshd@14-172.31.22.156:22-139.178.68.195:44360.service: Deactivated successfully. Dec 13 01:55:43.169119 systemd-logind[2111]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:55:43.177079 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:55:43.178653 systemd-logind[2111]: Removed session 15. Dec 13 01:55:48.198566 systemd[1]: Started sshd@15-172.31.22.156:22-139.178.68.195:34236.service - OpenSSH per-connection server daemon (139.178.68.195:34236). Dec 13 01:55:48.395391 sshd[6401]: Accepted publickey for core from 139.178.68.195 port 34236 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:48.401895 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:48.423775 systemd-logind[2111]: New session 16 of user core. Dec 13 01:55:48.434789 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:55:48.839599 sshd[6401]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:48.853731 systemd[1]: sshd@15-172.31.22.156:22-139.178.68.195:34236.service: Deactivated successfully. Dec 13 01:55:48.865124 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:55:48.874310 systemd-logind[2111]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:55:48.892973 systemd[1]: Started sshd@16-172.31.22.156:22-139.178.68.195:34252.service - OpenSSH per-connection server daemon (139.178.68.195:34252). Dec 13 01:55:48.897750 systemd-logind[2111]: Removed session 16. Dec 13 01:55:49.092518 sshd[6415]: Accepted publickey for core from 139.178.68.195 port 34252 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:49.095270 sshd[6415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:49.106621 systemd-logind[2111]: New session 17 of user core. Dec 13 01:55:49.115551 systemd[1]: Started session-17.scope - Session 17 of User core. 
Dec 13 01:55:49.828877 sshd[6415]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:49.840617 systemd[1]: sshd@16-172.31.22.156:22-139.178.68.195:34252.service: Deactivated successfully. Dec 13 01:55:49.858525 systemd-logind[2111]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:55:49.862551 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:55:49.879764 systemd[1]: Started sshd@17-172.31.22.156:22-139.178.68.195:34256.service - OpenSSH per-connection server daemon (139.178.68.195:34256). Dec 13 01:55:49.885352 systemd-logind[2111]: Removed session 17. Dec 13 01:55:50.069140 sshd[6430]: Accepted publickey for core from 139.178.68.195 port 34256 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:50.071958 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:50.080537 systemd-logind[2111]: New session 18 of user core. Dec 13 01:55:50.088122 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:55:54.033538 sshd[6430]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:54.050827 systemd[1]: sshd@17-172.31.22.156:22-139.178.68.195:34256.service: Deactivated successfully. Dec 13 01:55:54.070777 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:55:54.093235 systemd-logind[2111]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:55:54.112705 systemd[1]: Started sshd@18-172.31.22.156:22-139.178.68.195:34262.service - OpenSSH per-connection server daemon (139.178.68.195:34262). Dec 13 01:55:54.119761 systemd-logind[2111]: Removed session 18. Dec 13 01:55:54.297983 sshd[6459]: Accepted publickey for core from 139.178.68.195 port 34262 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:54.302842 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:54.314300 systemd-logind[2111]: New session 19 of user core. Dec 13 01:55:54.322948 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:55:54.972685 sshd[6459]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:54.981718 systemd-logind[2111]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:55:54.983690 systemd[1]: sshd@18-172.31.22.156:22-139.178.68.195:34262.service: Deactivated successfully. Dec 13 01:55:54.991174 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:55:55.002580 systemd-logind[2111]: Removed session 19. Dec 13 01:55:55.007713 systemd[1]: Started sshd@19-172.31.22.156:22-139.178.68.195:34264.service - OpenSSH per-connection server daemon (139.178.68.195:34264). Dec 13 01:55:55.180974 sshd[6473]: Accepted publickey for core from 139.178.68.195 port 34264 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:55.183707 sshd[6473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:55.197370 systemd-logind[2111]: New session 20 of user core. Dec 13 01:55:55.200561 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:55:55.491526 sshd[6473]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:55.504645 systemd[1]: sshd@19-172.31.22.156:22-139.178.68.195:34264.service: Deactivated successfully. Dec 13 01:55:55.519647 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:55:55.520326 systemd-logind[2111]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:55:55.524404 systemd-logind[2111]: Removed session 20. 
Dec 13 01:56:00.524855 systemd[1]: Started sshd@20-172.31.22.156:22-139.178.68.195:40138.service - OpenSSH per-connection server daemon (139.178.68.195:40138).
Dec 13 01:56:00.713672 sshd[6489]: Accepted publickey for core from 139.178.68.195 port 40138 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:00.716970 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:00.727893 systemd-logind[2111]: New session 21 of user core.
Dec 13 01:56:00.736965 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:56:01.037979 sshd[6489]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:01.051251 systemd[1]: sshd@20-172.31.22.156:22-139.178.68.195:40138.service: Deactivated successfully.
Dec 13 01:56:01.060706 systemd-logind[2111]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:56:01.062633 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:56:01.065475 systemd-logind[2111]: Removed session 21.
Dec 13 01:56:06.069773 systemd[1]: Started sshd@21-172.31.22.156:22-139.178.68.195:49786.service - OpenSSH per-connection server daemon (139.178.68.195:49786).
Dec 13 01:56:06.250370 sshd[6525]: Accepted publickey for core from 139.178.68.195 port 49786 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:06.253131 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:06.261928 systemd-logind[2111]: New session 22 of user core.
Dec 13 01:56:06.267689 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:56:06.528432 sshd[6525]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:06.534955 systemd[1]: sshd@21-172.31.22.156:22-139.178.68.195:49786.service: Deactivated successfully.
Dec 13 01:56:06.543499 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:56:06.545125 systemd-logind[2111]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:56:06.547445 systemd-logind[2111]: Removed session 22.
Dec 13 01:56:11.560683 systemd[1]: Started sshd@22-172.31.22.156:22-139.178.68.195:49792.service - OpenSSH per-connection server daemon (139.178.68.195:49792).
Dec 13 01:56:11.744837 sshd[6579]: Accepted publickey for core from 139.178.68.195 port 49792 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:11.748505 sshd[6579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:11.760687 systemd-logind[2111]: New session 23 of user core.
Dec 13 01:56:11.767792 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:56:12.045567 sshd[6579]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:12.051861 systemd[1]: sshd@22-172.31.22.156:22-139.178.68.195:49792.service: Deactivated successfully.
Dec 13 01:56:12.060878 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:56:12.064085 systemd-logind[2111]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:56:12.067714 systemd-logind[2111]: Removed session 23.
Dec 13 01:56:17.079576 systemd[1]: Started sshd@23-172.31.22.156:22-139.178.68.195:48866.service - OpenSSH per-connection server daemon (139.178.68.195:48866).
Dec 13 01:56:17.281397 sshd[6595]: Accepted publickey for core from 139.178.68.195 port 48866 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:17.284993 sshd[6595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:17.293381 systemd-logind[2111]: New session 24 of user core.
Dec 13 01:56:17.302057 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:56:17.630853 sshd[6595]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:17.640922 systemd-logind[2111]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:56:17.644469 systemd[1]: sshd@23-172.31.22.156:22-139.178.68.195:48866.service: Deactivated successfully.
Dec 13 01:56:17.659317 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:56:17.663917 systemd-logind[2111]: Removed session 24.
Dec 13 01:56:22.659729 systemd[1]: Started sshd@24-172.31.22.156:22-139.178.68.195:48876.service - OpenSSH per-connection server daemon (139.178.68.195:48876).
Dec 13 01:56:22.840116 sshd[6609]: Accepted publickey for core from 139.178.68.195 port 48876 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:22.842932 sshd[6609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:22.851395 systemd-logind[2111]: New session 25 of user core.
Dec 13 01:56:22.861781 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:56:23.106018 sshd[6609]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:23.111826 systemd-logind[2111]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:56:23.112703 systemd[1]: sshd@24-172.31.22.156:22-139.178.68.195:48876.service: Deactivated successfully.
Dec 13 01:56:23.122172 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:56:23.124682 systemd-logind[2111]: Removed session 25.
Dec 13 01:56:28.140748 systemd[1]: Started sshd@25-172.31.22.156:22-139.178.68.195:50824.service - OpenSSH per-connection server daemon (139.178.68.195:50824).
Dec 13 01:56:28.318269 sshd[6622]: Accepted publickey for core from 139.178.68.195 port 50824 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0
Dec 13 01:56:28.321257 sshd[6622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:56:28.329577 systemd-logind[2111]: New session 26 of user core.
Dec 13 01:56:28.335875 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:56:28.586278 sshd[6622]: pam_unix(sshd:session): session closed for user core
Dec 13 01:56:28.593670 systemd[1]: sshd@25-172.31.22.156:22-139.178.68.195:50824.service: Deactivated successfully.
Dec 13 01:56:28.600489 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:56:28.602043 systemd-logind[2111]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:56:28.603992 systemd-logind[2111]: Removed session 26.
Dec 13 01:56:31.928766 systemd[1]: run-containerd-runc-k8s.io-7532743c977f741b54c718e2f2e8fb76cbeca14dc10b1af3546acb4ae32d61eb-runc.2D90vE.mount: Deactivated successfully.
Dec 13 01:56:43.664133 containerd[2155]: time="2024-12-13T01:56:43.663675367Z" level=info msg="shim disconnected" id=fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902 namespace=k8s.io
Dec 13 01:56:43.664133 containerd[2155]: time="2024-12-13T01:56:43.663762871Z" level=warning msg="cleaning up after shim disconnected" id=fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902 namespace=k8s.io
Dec 13 01:56:43.664133 containerd[2155]: time="2024-12-13T01:56:43.663783103Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:43.666297 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902-rootfs.mount: Deactivated successfully.
Dec 13 01:56:43.749444 containerd[2155]: time="2024-12-13T01:56:43.749319259Z" level=info msg="shim disconnected" id=ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e namespace=k8s.io
Dec 13 01:56:43.749444 containerd[2155]: time="2024-12-13T01:56:43.749431891Z" level=warning msg="cleaning up after shim disconnected" id=ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e namespace=k8s.io
Dec 13 01:56:43.751330 containerd[2155]: time="2024-12-13T01:56:43.749454247Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:43.754560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e-rootfs.mount: Deactivated successfully.
Dec 13 01:56:44.450233 kubelet[3450]: I1213 01:56:44.450117 3450 scope.go:117] "RemoveContainer" containerID="fb286871df7d009ba0c6aba2dfa05f9d245a7d101bced4a13c5630524045f902"
Dec 13 01:56:44.456022 kubelet[3450]: I1213 01:56:44.455965 3450 scope.go:117] "RemoveContainer" containerID="ca63ab311e894f7f7b17cfec482e6754ff46f33057e9762bc50e2d34b329528e"
Dec 13 01:56:44.457276 containerd[2155]: time="2024-12-13T01:56:44.457062523Z" level=info msg="CreateContainer within sandbox \"26cb3f20e74d5130e06af4d15acfa34e6fae58079e75df62df1aff4fbceb0b5a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 01:56:44.460491 containerd[2155]: time="2024-12-13T01:56:44.460396003Z" level=info msg="CreateContainer within sandbox \"e330fc0a64823e8237a5ea81f0c88752454cf693bd5738ceb654eb0e8a9eb056\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Dec 13 01:56:44.503998 containerd[2155]: time="2024-12-13T01:56:44.503831539Z" level=info msg="CreateContainer within sandbox \"26cb3f20e74d5130e06af4d15acfa34e6fae58079e75df62df1aff4fbceb0b5a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9a46a667230a0869bdfa57e90fd46983524100cb2bc96b4def1387458063f8a7\""
Dec 13 01:56:44.506957 containerd[2155]: time="2024-12-13T01:56:44.504776311Z" level=info msg="StartContainer for \"9a46a667230a0869bdfa57e90fd46983524100cb2bc96b4def1387458063f8a7\""
Dec 13 01:56:44.507331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2518874460.mount: Deactivated successfully.
Dec 13 01:56:44.507954 containerd[2155]: time="2024-12-13T01:56:44.507875227Z" level=info msg="CreateContainer within sandbox \"e330fc0a64823e8237a5ea81f0c88752454cf693bd5738ceb654eb0e8a9eb056\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"740a9a191877ec372ae1cf2ad9c583d62cf85a0a52283d92a1931ba66bb69b2f\""
Dec 13 01:56:44.512723 containerd[2155]: time="2024-12-13T01:56:44.512650723Z" level=info msg="StartContainer for \"740a9a191877ec372ae1cf2ad9c583d62cf85a0a52283d92a1931ba66bb69b2f\""
Dec 13 01:56:44.702699 containerd[2155]: time="2024-12-13T01:56:44.702475568Z" level=info msg="StartContainer for \"740a9a191877ec372ae1cf2ad9c583d62cf85a0a52283d92a1931ba66bb69b2f\" returns successfully"
Dec 13 01:56:44.705472 containerd[2155]: time="2024-12-13T01:56:44.704740736Z" level=info msg="StartContainer for \"9a46a667230a0869bdfa57e90fd46983524100cb2bc96b4def1387458063f8a7\" returns successfully"
Dec 13 01:56:47.138929 containerd[2155]: time="2024-12-13T01:56:47.138710156Z" level=info msg="shim disconnected" id=6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201 namespace=k8s.io
Dec 13 01:56:47.138929 containerd[2155]: time="2024-12-13T01:56:47.138876632Z" level=warning msg="cleaning up after shim disconnected" id=6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201 namespace=k8s.io
Dec 13 01:56:47.141365 containerd[2155]: time="2024-12-13T01:56:47.138899252Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:56:47.145770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201-rootfs.mount: Deactivated successfully.
Dec 13 01:56:47.475992 kubelet[3450]: I1213 01:56:47.475612 3450 scope.go:117] "RemoveContainer" containerID="6ee61ff23ecaba4182e866c0d244f85e3188dc09470cc1a11e2fc5abf7d4e201"
Dec 13 01:56:47.480136 containerd[2155]: time="2024-12-13T01:56:47.479935870Z" level=info msg="CreateContainer within sandbox \"61feb8663c6b8e6ff9c1d811ae4ef894139aa11e03be7f159520d6ce999937ed\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 01:56:47.507504 containerd[2155]: time="2024-12-13T01:56:47.507430702Z" level=info msg="CreateContainer within sandbox \"61feb8663c6b8e6ff9c1d811ae4ef894139aa11e03be7f159520d6ce999937ed\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"187d9aaee466b049e5e7c48d12c124a8a44e3fee8280c099dc5c6e36bc994045\""
Dec 13 01:56:47.508581 containerd[2155]: time="2024-12-13T01:56:47.508508410Z" level=info msg="StartContainer for \"187d9aaee466b049e5e7c48d12c124a8a44e3fee8280c099dc5c6e36bc994045\""
Dec 13 01:56:47.628795 containerd[2155]: time="2024-12-13T01:56:47.628599827Z" level=info msg="StartContainer for \"187d9aaee466b049e5e7c48d12c124a8a44e3fee8280c099dc5c6e36bc994045\" returns successfully"
Dec 13 01:56:50.988478 kubelet[3450]: E1213 01:56:50.987957 3450 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 01:57:00.989348 kubelet[3450]: E1213 01:57:00.989233 3450 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.156:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-156?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"