Dec 13 01:53:30.164311 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Dec 13 01:53:30.164358 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024
Dec 13 01:53:30.164382 kernel: KASLR disabled due to lack of seed
Dec 13 01:53:30.164399 kernel: efi: EFI v2.7 by EDK II
Dec 13 01:53:30.164415 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Dec 13 01:53:30.164430 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:53:30.164448 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Dec 13 01:53:30.164464 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Dec 13 01:53:30.164480 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 01:53:30.164495 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Dec 13 01:53:30.164515 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 01:53:30.164531 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Dec 13 01:53:30.164546 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Dec 13 01:53:30.164562 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Dec 13 01:53:30.164581 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 01:53:30.164601 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Dec 13 01:53:30.164619 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Dec 13 01:53:30.164635 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Dec 13 01:53:30.164653 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Dec 13 01:53:30.164670 kernel: printk: bootconsole [uart0] enabled
Dec 13 01:53:30.164687 kernel: NUMA: Failed to initialise from firmware
Dec 13 01:53:30.164705 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:30.164724 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Dec 13 01:53:30.164742 kernel: Zone ranges:
Dec 13 01:53:30.164760 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Dec 13 01:53:30.164776 kernel: DMA32 empty
Dec 13 01:53:30.164798 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Dec 13 01:53:30.164815 kernel: Movable zone start for each node
Dec 13 01:53:30.164832 kernel: Early memory node ranges
Dec 13 01:53:30.164849 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Dec 13 01:53:30.164866 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Dec 13 01:53:30.164883 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Dec 13 01:53:30.164900 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Dec 13 01:53:30.164917 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Dec 13 01:53:30.164934 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Dec 13 01:53:30.164950 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Dec 13 01:53:30.164967 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Dec 13 01:53:30.164984 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Dec 13 01:53:30.165005 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Dec 13 01:53:30.165022 kernel: psci: probing for conduit method from ACPI.
Dec 13 01:53:30.165046 kernel: psci: PSCIv1.0 detected in firmware.
Dec 13 01:53:30.165063 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 01:53:30.165081 kernel: psci: Trusted OS migration not required
Dec 13 01:53:30.165102 kernel: psci: SMC Calling Convention v1.1
Dec 13 01:53:30.165153 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Dec 13 01:53:30.165175 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Dec 13 01:53:30.165195 kernel: pcpu-alloc: [0] 0 [0] 1
Dec 13 01:53:30.165212 kernel: Detected PIPT I-cache on CPU0
Dec 13 01:53:30.165230 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 01:53:30.165248 kernel: CPU features: detected: Spectre-v2
Dec 13 01:53:30.165265 kernel: CPU features: detected: Spectre-v3a
Dec 13 01:53:30.165283 kernel: CPU features: detected: Spectre-BHB
Dec 13 01:53:30.165389 kernel: CPU features: detected: ARM erratum 1742098
Dec 13 01:53:30.165699 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Dec 13 01:53:30.165875 kernel: alternatives: applying boot alternatives
Dec 13 01:53:30.165896 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:30.165915 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:53:30.165932 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:53:30.165950 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:53:30.165968 kernel: Fallback order for Node 0: 0
Dec 13 01:53:30.165985 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Dec 13 01:53:30.166002 kernel: Policy zone: Normal
Dec 13 01:53:30.166019 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:53:30.166037 kernel: software IO TLB: area num 2.
Dec 13 01:53:30.166054 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Dec 13 01:53:30.166077 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Dec 13 01:53:30.166095 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:53:30.166288 kernel: trace event string verifier disabled
Dec 13 01:53:30.166595 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:53:30.166813 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:53:30.166920 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:53:30.167180 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:53:30.167201 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:53:30.167219 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:53:30.167236 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:53:30.167253 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 01:53:30.167277 kernel: GICv3: 96 SPIs implemented
Dec 13 01:53:30.167294 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 01:53:30.167312 kernel: Root IRQ handler: gic_handle_irq
Dec 13 01:53:30.167329 kernel: GICv3: GICv3 features: 16 PPIs
Dec 13 01:53:30.167346 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Dec 13 01:53:30.167363 kernel: ITS [mem 0x10080000-0x1009ffff]
Dec 13 01:53:30.167381 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 01:53:30.167399 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 01:53:30.167416 kernel: GICv3: using LPI property table @0x00000004000d0000
Dec 13 01:53:30.167433 kernel: ITS: Using hypervisor restricted LPI range [128]
Dec 13 01:53:30.167451 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Dec 13 01:53:30.167468 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:53:30.167490 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Dec 13 01:53:30.167508 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Dec 13 01:53:30.167526 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Dec 13 01:53:30.167543 kernel: Console: colour dummy device 80x25
Dec 13 01:53:30.167562 kernel: printk: console [tty1] enabled
Dec 13 01:53:30.167579 kernel: ACPI: Core revision 20230628
Dec 13 01:53:30.167598 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Dec 13 01:53:30.167615 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:53:30.167633 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:53:30.167651 kernel: landlock: Up and running.
Dec 13 01:53:30.167673 kernel: SELinux: Initializing.
Dec 13 01:53:30.167691 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:30.167708 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:53:30.167726 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:30.167744 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:53:30.167762 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:53:30.167780 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:53:30.167798 kernel: Platform MSI: ITS@0x10080000 domain created
Dec 13 01:53:30.167820 kernel: PCI/MSI: ITS@0x10080000 domain created
Dec 13 01:53:30.167837 kernel: Remapping and enabling EFI services.
Dec 13 01:53:30.167855 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:53:30.167873 kernel: Detected PIPT I-cache on CPU1
Dec 13 01:53:30.167891 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Dec 13 01:53:30.167909 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Dec 13 01:53:30.167926 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Dec 13 01:53:30.167944 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:53:30.167962 kernel: SMP: Total of 2 processors activated.
Dec 13 01:53:30.167979 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 01:53:30.168001 kernel: CPU features: detected: 32-bit EL1 Support
Dec 13 01:53:30.168019 kernel: CPU features: detected: CRC32 instructions
Dec 13 01:53:30.168047 kernel: CPU: All CPU(s) started at EL1
Dec 13 01:53:30.168070 kernel: alternatives: applying system-wide alternatives
Dec 13 01:53:30.168088 kernel: devtmpfs: initialized
Dec 13 01:53:30.168107 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:53:30.168171 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:53:30.168192 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:53:30.168212 kernel: SMBIOS 3.0.0 present.
Dec 13 01:53:30.168258 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Dec 13 01:53:30.168280 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:53:30.168299 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 01:53:30.168319 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 01:53:30.168338 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 01:53:30.168356 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:53:30.168375 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Dec 13 01:53:30.168399 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:53:30.168419 kernel: cpuidle: using governor menu
Dec 13 01:53:30.168437 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 01:53:30.168456 kernel: ASID allocator initialised with 65536 entries
Dec 13 01:53:30.168475 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:53:30.168493 kernel: Serial: AMBA PL011 UART driver
Dec 13 01:53:30.168512 kernel: Modules: 17520 pages in range for non-PLT usage
Dec 13 01:53:30.168531 kernel: Modules: 509040 pages in range for PLT usage
Dec 13 01:53:30.168549 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:53:30.168572 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:53:30.168592 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 01:53:30.168611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Dec 13 01:53:30.168629 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:53:30.168649 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:53:30.168667 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 01:53:30.168686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Dec 13 01:53:30.168705 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:53:30.168723 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:53:30.168746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:53:30.168764 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:53:30.168783 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:53:30.168801 kernel: ACPI: Interpreter enabled
Dec 13 01:53:30.168819 kernel: ACPI: Using GIC for interrupt routing
Dec 13 01:53:30.168838 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 01:53:30.168856 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Dec 13 01:53:30.169180 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:53:30.169405 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 01:53:30.169606 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 01:53:30.169811 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Dec 13 01:53:30.170013 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Dec 13 01:53:30.170039 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Dec 13 01:53:30.170058 kernel: acpiphp: Slot [1] registered
Dec 13 01:53:30.170077 kernel: acpiphp: Slot [2] registered
Dec 13 01:53:30.170095 kernel: acpiphp: Slot [3] registered
Dec 13 01:53:30.170138 kernel: acpiphp: Slot [4] registered
Dec 13 01:53:30.170160 kernel: acpiphp: Slot [5] registered
Dec 13 01:53:30.170179 kernel: acpiphp: Slot [6] registered
Dec 13 01:53:30.170197 kernel: acpiphp: Slot [7] registered
Dec 13 01:53:30.170215 kernel: acpiphp: Slot [8] registered
Dec 13 01:53:30.170233 kernel: acpiphp: Slot [9] registered
Dec 13 01:53:30.170252 kernel: acpiphp: Slot [10] registered
Dec 13 01:53:30.170271 kernel: acpiphp: Slot [11] registered
Dec 13 01:53:30.170289 kernel: acpiphp: Slot [12] registered
Dec 13 01:53:30.170308 kernel: acpiphp: Slot [13] registered
Dec 13 01:53:30.170333 kernel: acpiphp: Slot [14] registered
Dec 13 01:53:30.170351 kernel: acpiphp: Slot [15] registered
Dec 13 01:53:30.170370 kernel: acpiphp: Slot [16] registered
Dec 13 01:53:30.170388 kernel: acpiphp: Slot [17] registered
Dec 13 01:53:30.170408 kernel: acpiphp: Slot [18] registered
Dec 13 01:53:30.170426 kernel: acpiphp: Slot [19] registered
Dec 13 01:53:30.170444 kernel: acpiphp: Slot [20] registered
Dec 13 01:53:30.170463 kernel: acpiphp: Slot [21] registered
Dec 13 01:53:30.170481 kernel: acpiphp: Slot [22] registered
Dec 13 01:53:30.170503 kernel: acpiphp: Slot [23] registered
Dec 13 01:53:30.170522 kernel: acpiphp: Slot [24] registered
Dec 13 01:53:30.170540 kernel: acpiphp: Slot [25] registered
Dec 13 01:53:30.170559 kernel: acpiphp: Slot [26] registered
Dec 13 01:53:30.170577 kernel: acpiphp: Slot [27] registered
Dec 13 01:53:30.170595 kernel: acpiphp: Slot [28] registered
Dec 13 01:53:30.170614 kernel: acpiphp: Slot [29] registered
Dec 13 01:53:30.170632 kernel: acpiphp: Slot [30] registered
Dec 13 01:53:30.170650 kernel: acpiphp: Slot [31] registered
Dec 13 01:53:30.170668 kernel: PCI host bridge to bus 0000:00
Dec 13 01:53:30.170884 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Dec 13 01:53:30.171072 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 01:53:30.171288 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:30.171476 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Dec 13 01:53:30.171726 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Dec 13 01:53:30.171953 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Dec 13 01:53:30.172357 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Dec 13 01:53:30.173892 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 01:53:30.174104 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Dec 13 01:53:30.174411 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:30.174662 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 01:53:30.174881 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Dec 13 01:53:30.175095 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:30.175385 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Dec 13 01:53:30.175590 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 01:53:30.175790 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Dec 13 01:53:30.175990 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Dec 13 01:53:30.176318 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Dec 13 01:53:30.176530 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Dec 13 01:53:30.176740 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Dec 13 01:53:30.176941 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Dec 13 01:53:30.177208 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 01:53:30.177398 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Dec 13 01:53:30.177425 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 01:53:30.177444 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 01:53:30.177464 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 01:53:30.177482 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 01:53:30.177501 kernel: iommu: Default domain type: Translated
Dec 13 01:53:30.177531 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 01:53:30.177550 kernel: efivars: Registered efivars operations
Dec 13 01:53:30.177568 kernel: vgaarb: loaded
Dec 13 01:53:30.177587 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 01:53:30.177606 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:53:30.177625 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:53:30.177643 kernel: pnp: PnP ACPI init
Dec 13 01:53:30.177859 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Dec 13 01:53:30.177891 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 01:53:30.177911 kernel: NET: Registered PF_INET protocol family
Dec 13 01:53:30.177929 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:53:30.177948 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:53:30.177967 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:53:30.177986 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:53:30.178005 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:53:30.178024 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:53:30.178042 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:30.178065 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:53:30.178084 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:53:30.178102 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:53:30.178174 kernel: kvm [1]: HYP mode not available
Dec 13 01:53:30.178197 kernel: Initialise system trusted keyrings
Dec 13 01:53:30.178217 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:53:30.178236 kernel: Key type asymmetric registered
Dec 13 01:53:30.178254 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:53:30.178273 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Dec 13 01:53:30.178297 kernel: io scheduler mq-deadline registered
Dec 13 01:53:30.178317 kernel: io scheduler kyber registered
Dec 13 01:53:30.178336 kernel: io scheduler bfq registered
Dec 13 01:53:30.178564 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Dec 13 01:53:30.178593 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 01:53:30.178612 kernel: ACPI: button: Power Button [PWRB]
Dec 13 01:53:30.178632 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Dec 13 01:53:30.178651 kernel: ACPI: button: Sleep Button [SLPB]
Dec 13 01:53:30.178676 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:53:30.178697 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Dec 13 01:53:30.178915 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Dec 13 01:53:30.178944 kernel: printk: console [ttyS0] disabled
Dec 13 01:53:30.178963 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Dec 13 01:53:30.178983 kernel: printk: console [ttyS0] enabled
Dec 13 01:53:30.179002 kernel: printk: bootconsole [uart0] disabled
Dec 13 01:53:30.179020 kernel: thunder_xcv, ver 1.0
Dec 13 01:53:30.179039 kernel: thunder_bgx, ver 1.0
Dec 13 01:53:30.179057 kernel: nicpf, ver 1.0
Dec 13 01:53:30.179082 kernel: nicvf, ver 1.0
Dec 13 01:53:30.179342 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 01:53:30.179581 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:53:29 UTC (1734054809)
Dec 13 01:53:30.179609 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:53:30.179629 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Dec 13 01:53:30.179649 kernel: watchdog: Delayed init of the lockup detector failed: -19
Dec 13 01:53:30.179668 kernel: watchdog: Hard watchdog permanently disabled
Dec 13 01:53:30.179694 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:53:30.179713 kernel: Segment Routing with IPv6
Dec 13 01:53:30.179731 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:53:30.179750 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:53:30.179768 kernel: Key type dns_resolver registered
Dec 13 01:53:30.179786 kernel: registered taskstats version 1
Dec 13 01:53:30.179805 kernel: Loading compiled-in X.509 certificates
Dec 13 01:53:30.179823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb'
Dec 13 01:53:30.179842 kernel: Key type .fscrypt registered
Dec 13 01:53:30.179860 kernel: Key type fscrypt-provisioning registered
Dec 13 01:53:30.179882 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:53:30.179901 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:53:30.179919 kernel: ima: No architecture policies found
Dec 13 01:53:30.179938 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 01:53:30.179957 kernel: clk: Disabling unused clocks
Dec 13 01:53:30.179975 kernel: Freeing unused kernel memory: 39360K
Dec 13 01:53:30.179993 kernel: Run /init as init process
Dec 13 01:53:30.180011 kernel: with arguments:
Dec 13 01:53:30.180029 kernel: /init
Dec 13 01:53:30.180052 kernel: with environment:
Dec 13 01:53:30.180070 kernel: HOME=/
Dec 13 01:53:30.180089 kernel: TERM=linux
Dec 13 01:53:30.180107 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:53:30.180853 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:53:30.180879 systemd[1]: Detected virtualization amazon.
Dec 13 01:53:30.180900 systemd[1]: Detected architecture arm64.
Dec 13 01:53:30.180927 systemd[1]: Running in initrd.
Dec 13 01:53:30.180947 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:53:30.180966 systemd[1]: Hostname set to .
Dec 13 01:53:30.180987 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:53:30.181007 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:53:30.181028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:53:30.181048 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:53:30.181070 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:53:30.181094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:53:30.181554 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:53:30.181591 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:53:30.181615 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:53:30.181637 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:53:30.181657 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:53:30.181678 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:53:30.181706 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:53:30.181727 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:53:30.181747 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:53:30.181767 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:53:30.181787 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:53:30.181808 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:53:30.181828 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:53:30.181849 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:53:30.181869 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:30.181894 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:53:30.181915 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:53:30.181935 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:53:30.181955 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:53:30.181976 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:53:30.181996 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:53:30.182016 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:53:30.182036 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:53:30.182061 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:53:30.182081 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:30.182101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:53:30.182182 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:53:30.182250 systemd-journald[251]: Collecting audit messages is disabled.
Dec 13 01:53:30.182301 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:53:30.182322 systemd-journald[251]: Journal started
Dec 13 01:53:30.182364 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2578c1f38dfaebffabd5a156ed58ae) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:53:30.174651 systemd-modules-load[252]: Inserted module 'overlay'
Dec 13 01:53:30.206209 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:53:30.210161 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:53:30.216201 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:53:30.218719 systemd-modules-load[252]: Inserted module 'br_netfilter'
Dec 13 01:53:30.220540 kernel: Bridge firewalling registered
Dec 13 01:53:30.229421 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:53:30.232390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:53:30.241754 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:30.244602 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:53:30.256627 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:30.267309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:53:30.273408 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:53:30.280704 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:53:30.307065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:53:30.323200 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:30.332453 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:53:30.336842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:53:30.356447 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:53:30.378111 dracut-cmdline[288]: dracut-dracut-053
Dec 13 01:53:30.385279 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6
Dec 13 01:53:30.442868 systemd-resolved[290]: Positive Trust Anchors:
Dec 13 01:53:30.442902 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:53:30.442965 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:53:30.536202 kernel: SCSI subsystem initialized
Dec 13 01:53:30.546146 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:53:30.557161 kernel: iscsi: registered transport (tcp)
Dec 13 01:53:30.579157 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:53:30.579227 kernel: QLogic iSCSI HBA Driver
Dec 13 01:53:30.667145 kernel: random: crng init done
Dec 13 01:53:30.667334 systemd-resolved[290]: Defaulting to hostname 'linux'.
Dec 13 01:53:30.670721 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:53:30.675040 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:53:30.699199 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:53:30.715389 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:53:30.745167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:53:30.745241 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:53:30.746154 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:53:30.812164 kernel: raid6: neonx8 gen() 6676 MB/s
Dec 13 01:53:30.829151 kernel: raid6: neonx4 gen() 6495 MB/s
Dec 13 01:53:30.846148 kernel: raid6: neonx2 gen() 5424 MB/s
Dec 13 01:53:30.863147 kernel: raid6: neonx1 gen() 3947 MB/s
Dec 13 01:53:30.880147 kernel: raid6: int64x8 gen() 3799 MB/s
Dec 13 01:53:30.897148 kernel: raid6: int64x4 gen() 3722 MB/s
Dec 13 01:53:30.914147 kernel: raid6: int64x2 gen() 3590 MB/s
Dec 13 01:53:30.931904 kernel: raid6: int64x1 gen() 2767 MB/s
Dec 13 01:53:30.931940 kernel: raid6: using algorithm neonx8 gen() 6676 MB/s
Dec 13 01:53:30.949888 kernel: raid6: .... xor() 4929 MB/s, rmw enabled
Dec 13 01:53:30.949933 kernel: raid6: using neon recovery algorithm
Dec 13 01:53:30.958248 kernel: xor: measuring software checksum speed
Dec 13 01:53:30.958316 kernel: 8regs : 10945 MB/sec
Dec 13 01:53:30.959331 kernel: 32regs : 11937 MB/sec
Dec 13 01:53:30.960501 kernel: arm64_neon : 9510 MB/sec
Dec 13 01:53:30.960544 kernel: xor: using function: 32regs (11937 MB/sec)
Dec 13 01:53:31.045160 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:53:31.064219 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:53:31.076454 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:53:31.114963 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Dec 13 01:53:31.124188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:53:31.142203 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:53:31.182300 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation
Dec 13 01:53:31.240074 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:53:31.250417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:53:31.367827 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:53:31.377217 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:53:31.431081 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:53:31.437271 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:53:31.441943 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:53:31.446816 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:53:31.459423 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:53:31.504058 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:53:31.565750 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 01:53:31.565815 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Dec 13 01:53:31.593592 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 01:53:31.593860 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 01:53:31.594138 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Dec 13 01:53:31.594170 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 01:53:31.594751 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ed:81:04:6e:6b
Dec 13 01:53:31.586781 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:53:31.587015 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:31.605276 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 01:53:31.591547 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:31.593916 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:53:31.594218 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:31.615885 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:53:31.615923 kernel: GPT:9289727 != 16777215
Dec 13 01:53:31.615949 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:53:31.596602 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:31.620568 kernel: GPT:9289727 != 16777215
Dec 13 01:53:31.620601 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:53:31.620635 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:31.620652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:31.631816 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:53:31.661221 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:31.673522 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:53:31.716806 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:31.777221 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Dec 13 01:53:31.789470 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Dec 13 01:53:31.798165 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/nvme0n1p3 scanned by (udev-worker) (524)
Dec 13 01:53:31.867474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Dec 13 01:53:31.897311 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:31.903393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Dec 13 01:53:31.918672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:53:31.937468 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:53:31.951294 disk-uuid[663]: Primary Header is updated.
Dec 13 01:53:31.951294 disk-uuid[663]: Secondary Entries is updated.
Dec 13 01:53:31.951294 disk-uuid[663]: Secondary Header is updated.
Dec 13 01:53:31.962147 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:31.972177 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:31.981159 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:32.982241 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 01:53:32.984704 disk-uuid[664]: The operation has completed successfully.
Dec 13 01:53:33.163810 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:53:33.164010 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:53:33.207385 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:53:33.215969 sh[1007]: Success
Dec 13 01:53:33.235222 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Dec 13 01:53:33.335732 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:53:33.350351 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:53:33.358191 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:53:33.397342 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881
Dec 13 01:53:33.397417 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:33.397445 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:53:33.400268 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:53:33.400302 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:53:33.498162 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Dec 13 01:53:33.515136 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:53:33.519065 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:53:33.527529 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:53:33.530282 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:53:33.565486 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:33.565574 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:33.566967 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:33.586166 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:33.607027 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:53:33.612177 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:33.635401 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:53:33.650535 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:53:33.701363 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:53:33.715521 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:53:33.758868 systemd-networkd[1199]: lo: Link UP
Dec 13 01:53:33.758892 systemd-networkd[1199]: lo: Gained carrier
Dec 13 01:53:33.762468 systemd-networkd[1199]: Enumeration completed
Dec 13 01:53:33.762624 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:53:33.764769 systemd[1]: Reached target network.target - Network.
Dec 13 01:53:33.764966 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:33.764973 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:53:33.770439 systemd-networkd[1199]: eth0: Link UP
Dec 13 01:53:33.770447 systemd-networkd[1199]: eth0: Gained carrier
Dec 13 01:53:33.770463 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:33.807206 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.19.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:53:34.103888 ignition[1158]: Ignition 2.19.0
Dec 13 01:53:34.103923 ignition[1158]: Stage: fetch-offline
Dec 13 01:53:34.107360 ignition[1158]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:34.107402 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:34.111368 ignition[1158]: Ignition finished successfully
Dec 13 01:53:34.115151 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:53:34.128511 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:53:34.152381 ignition[1219]: Ignition 2.19.0
Dec 13 01:53:34.152410 ignition[1219]: Stage: fetch
Dec 13 01:53:34.153880 ignition[1219]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:34.153906 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:34.154055 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:34.214955 ignition[1219]: PUT result: OK
Dec 13 01:53:34.224775 ignition[1219]: parsed url from cmdline: ""
Dec 13 01:53:34.224792 ignition[1219]: no config URL provided
Dec 13 01:53:34.224807 ignition[1219]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:53:34.224833 ignition[1219]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:53:34.224864 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:34.228631 ignition[1219]: PUT result: OK
Dec 13 01:53:34.228715 ignition[1219]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 01:53:34.232812 ignition[1219]: GET result: OK
Dec 13 01:53:34.232967 ignition[1219]: parsing config with SHA512: 803648a786e9b231f675fe05a6688407a9f3d7b2fa73e91b7784b7fda1830d9f6b31de1e557fd1deedff332dcceb5017a16d2bb31f23f81b22cea5457e0c927c
Dec 13 01:53:34.248785 unknown[1219]: fetched base config from "system"
Dec 13 01:53:34.248818 unknown[1219]: fetched base config from "system"
Dec 13 01:53:34.248833 unknown[1219]: fetched user config from "aws"
Dec 13 01:53:34.253613 ignition[1219]: fetch: fetch complete
Dec 13 01:53:34.253626 ignition[1219]: fetch: fetch passed
Dec 13 01:53:34.253724 ignition[1219]: Ignition finished successfully
Dec 13 01:53:34.261300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:53:34.283491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:53:34.307624 ignition[1226]: Ignition 2.19.0
Dec 13 01:53:34.308149 ignition[1226]: Stage: kargs
Dec 13 01:53:34.308795 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:34.308819 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:34.309004 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:34.312943 ignition[1226]: PUT result: OK
Dec 13 01:53:34.321568 ignition[1226]: kargs: kargs passed
Dec 13 01:53:34.322431 ignition[1226]: Ignition finished successfully
Dec 13 01:53:34.327223 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:53:34.337411 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:53:34.367584 ignition[1232]: Ignition 2.19.0
Dec 13 01:53:34.367613 ignition[1232]: Stage: disks
Dec 13 01:53:34.368581 ignition[1232]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:34.368607 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:34.368755 ignition[1232]: PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:34.370534 ignition[1232]: PUT result: OK
Dec 13 01:53:34.379087 ignition[1232]: disks: disks passed
Dec 13 01:53:34.379449 ignition[1232]: Ignition finished successfully
Dec 13 01:53:34.387202 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:53:34.389541 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:53:34.392169 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:53:34.400726 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:53:34.402589 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:53:34.402971 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:53:34.417491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:53:34.455355 systemd-fsck[1240]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:53:34.465055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:53:34.476321 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:53:34.562158 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none.
Dec 13 01:53:34.563760 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:53:34.565713 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:53:34.593405 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:34.597287 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:53:34.609595 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:53:34.609863 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:53:34.609913 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:53:34.619406 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:53:34.637558 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:53:34.651065 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1259)
Dec 13 01:53:34.651155 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:34.652707 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:34.653895 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:34.659187 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:34.661338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:34.847306 systemd-networkd[1199]: eth0: Gained IPv6LL
Dec 13 01:53:34.919469 initrd-setup-root[1283]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:53:34.928192 initrd-setup-root[1290]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:53:34.936226 initrd-setup-root[1297]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:53:34.958553 initrd-setup-root[1304]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:53:35.297239 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:53:35.315479 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:53:35.321172 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:53:35.337331 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:53:35.341068 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:35.381783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:53:35.387581 ignition[1371]: INFO : Ignition 2.19.0
Dec 13 01:53:35.387581 ignition[1371]: INFO : Stage: mount
Dec 13 01:53:35.390762 ignition[1371]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.390762 ignition[1371]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.394876 ignition[1371]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.397991 ignition[1371]: INFO : PUT result: OK
Dec 13 01:53:35.403257 ignition[1371]: INFO : mount: mount passed
Dec 13 01:53:35.405096 ignition[1371]: INFO : Ignition finished successfully
Dec 13 01:53:35.409204 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:53:35.423296 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:53:35.572499 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:53:35.601166 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1384)
Dec 13 01:53:35.605170 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd
Dec 13 01:53:35.605223 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Dec 13 01:53:35.605250 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 01:53:35.611139 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 01:53:35.613588 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:53:35.649182 ignition[1402]: INFO : Ignition 2.19.0
Dec 13 01:53:35.649182 ignition[1402]: INFO : Stage: files
Dec 13 01:53:35.649182 ignition[1402]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:35.649182 ignition[1402]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:35.649182 ignition[1402]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:35.658708 ignition[1402]: INFO : PUT result: OK
Dec 13 01:53:35.663429 ignition[1402]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:53:35.684844 ignition[1402]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:53:35.690163 ignition[1402]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:53:35.694219 ignition[1402]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:53:35.697070 ignition[1402]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:53:35.699846 unknown[1402]: wrote ssh authorized keys file for user: core
Dec 13 01:53:35.702230 ignition[1402]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:53:35.706642 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:53:35.706642 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Dec 13 01:53:35.809421 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:53:36.407316 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Dec 13 01:53:36.411038 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:36.414402 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:53:36.417565 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:36.421378 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:53:36.421378 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:36.421378 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:53:36.421378 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:36.421378 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:36.437352 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 01:53:36.891781 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:53:37.265664 ignition[1402]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 01:53:37.265664 ignition[1402]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:53:37.272627 ignition[1402]: INFO : files: files passed Dec 13 01:53:37.272627 ignition[1402]: INFO : Ignition finished successfully Dec 13 01:53:37.289202 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:53:37.305589 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:53:37.314109 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:53:37.326521 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:53:37.327309 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:53:37.353892 initrd-setup-root-after-ignition[1429]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:37.353892 initrd-setup-root-after-ignition[1429]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:37.360247 initrd-setup-root-after-ignition[1433]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:53:37.365891 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:53:37.374052 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:53:37.383469 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Dec 13 01:53:37.437551 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:53:37.437930 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:53:37.445944 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:53:37.448174 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:53:37.453941 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:53:37.461462 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:53:37.492235 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:53:37.507562 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:53:37.531529 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:53:37.536376 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:53:37.538683 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:53:37.540780 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:53:37.541088 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:53:37.550829 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:53:37.553053 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:53:37.558388 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:53:37.560722 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:53:37.564861 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:53:37.570739 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:53:37.574337 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:53:37.579848 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:53:37.583550 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:53:37.586645 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:53:37.591101 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:53:37.591368 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:53:37.595295 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:53:37.599712 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:53:37.605513 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:53:37.605741 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:53:37.610334 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:53:37.610591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:53:37.616370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:53:37.616693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:53:37.625330 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:53:37.625541 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:53:37.635463 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:53:37.662000 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:53:37.665296 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:53:37.665578 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:53:37.670165 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:53:37.670453 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:53:37.698760 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:53:37.711042 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:53:37.713949 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:53:37.727783 ignition[1453]: INFO : Ignition 2.19.0
Dec 13 01:53:37.730887 ignition[1453]: INFO : Stage: umount
Dec 13 01:53:37.730887 ignition[1453]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:53:37.730887 ignition[1453]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 01:53:37.730887 ignition[1453]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 01:53:37.728998 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:53:37.731294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:53:37.744260 ignition[1453]: INFO : PUT result: OK
Dec 13 01:53:37.759213 ignition[1453]: INFO : umount: umount passed
Dec 13 01:53:37.762512 ignition[1453]: INFO : Ignition finished successfully
Dec 13 01:53:37.764907 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:53:37.767218 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:53:37.769727 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:53:37.769817 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:53:37.776287 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:53:37.776384 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:53:37.778267 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:53:37.778344 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:53:37.780562 systemd[1]: Stopped target network.target - Network.
Dec 13 01:53:37.782202 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:53:37.782285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:53:37.784554 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:53:37.786220 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:53:37.789623 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:53:37.792696 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:53:37.794356 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:53:37.796581 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:53:37.796971 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:53:37.799620 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:53:37.799692 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:53:37.801562 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:53:37.801643 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:53:37.803576 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:53:37.803651 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:53:37.805637 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:53:37.805711 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:53:37.807953 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:53:37.810402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:53:37.815224 systemd-networkd[1199]: eth0: DHCPv6 lease lost
Dec 13 01:53:37.851198 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:53:37.851419 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:53:37.862655 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:53:37.862751 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:37.879296 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:53:37.891504 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:53:37.891628 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:53:37.894477 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:53:37.898560 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:53:37.898796 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:53:37.921995 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:53:37.923063 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:53:37.928488 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:53:37.928597 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:53:37.930739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:53:37.930823 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:53:37.933579 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:53:37.934240 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:53:37.957696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:53:37.957805 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:53:37.960687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:53:37.960760 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:53:37.966876 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:53:37.966982 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:53:37.974251 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:53:37.974357 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:53:37.975691 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:53:37.975778 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:53:38.006726 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:53:38.010887 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:53:38.011007 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:53:38.013365 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:53:38.013447 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:53:38.015752 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:53:38.015828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:53:38.018126 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:53:38.019068 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:38.027514 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:53:38.029503 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:53:38.066839 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:53:38.067272 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:53:38.075250 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:53:38.091483 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:53:38.107361 systemd[1]: Switching root.
Dec 13 01:53:38.155963 systemd-journald[251]: Journal stopped
Dec 13 01:53:40.883870 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:53:40.884000 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:53:40.884053 kernel: SELinux: policy capability open_perms=1
Dec 13 01:53:40.884098 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:53:40.891169 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:53:40.891215 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:53:40.891246 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:53:40.891278 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:53:40.891317 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:53:40.891347 kernel: audit: type=1403 audit(1734054819.025:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:53:40.891389 systemd[1]: Successfully loaded SELinux policy in 73.414ms.
Dec 13 01:53:40.891439 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.080ms.
Dec 13 01:53:40.891472 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:53:40.891505 systemd[1]: Detected virtualization amazon.
Dec 13 01:53:40.891537 systemd[1]: Detected architecture arm64.
Dec 13 01:53:40.891568 systemd[1]: Detected first boot.
Dec 13 01:53:40.891604 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:53:40.891636 zram_generator::config[1496]: No configuration found.
Dec 13 01:53:40.891680 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:53:40.891711 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 01:53:40.891742 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 01:53:40.891774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:53:40.891806 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:53:40.891839 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:53:40.891872 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:53:40.891907 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:53:40.891947 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:53:40.891980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:53:40.892014 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:53:40.892045 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:53:40.892077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:53:40.892107 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:53:40.892177 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:53:40.892216 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:53:40.892250 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:53:40.892282 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:53:40.892313 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:53:40.892343 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:53:40.892376 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 01:53:40.892405 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 01:53:40.892437 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:53:40.892471 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:53:40.892504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:53:40.892536 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:53:40.892567 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:53:40.892600 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:53:40.892629 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:53:40.892662 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:53:40.892694 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:53:40.892724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:53:40.892758 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:53:40.892788 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:53:40.892819 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:53:40.892851 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:53:40.892880 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:53:40.892911 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:53:40.892942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:53:40.892975 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:53:40.893006 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:53:40.893055 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:53:40.893088 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:53:40.893145 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:53:40.893182 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:53:40.893212 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:53:40.893241 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:53:40.893273 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:53:40.893303 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:53:40.893338 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:53:40.893373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:53:40.893406 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:53:40.893436 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 01:53:40.893465 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 01:53:40.893494 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 01:53:40.893526 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 01:53:40.893556 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:53:40.893587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:53:40.893623 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:53:40.893653 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:53:40.893682 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:53:40.893715 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 01:53:40.893747 systemd[1]: Stopped verity-setup.service.
Dec 13 01:53:40.893780 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:53:40.893810 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:53:40.893839 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:53:40.893870 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:53:40.893904 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:53:40.893934 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:53:40.893963 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:53:40.893995 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:53:40.894028 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:53:40.894061 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:53:40.894091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:53:40.894139 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:53:40.894172 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:53:40.894202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:53:40.894230 kernel: loop: module loaded
Dec 13 01:53:40.894259 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:53:40.894289 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:53:40.894324 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:53:40.894358 kernel: fuse: init (API version 7.39)
Dec 13 01:53:40.894387 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:53:40.894421 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:53:40.894452 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:53:40.894486 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:53:40.894520 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:53:40.894592 systemd-journald[1592]: Collecting audit messages is disabled.
Dec 13 01:53:40.894651 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:53:40.894681 kernel: ACPI: bus type drm_connector registered
Dec 13 01:53:40.894710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:53:40.894744 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:53:40.894778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:53:40.894811 systemd-journald[1592]: Journal started
Dec 13 01:53:40.894858 systemd-journald[1592]: Runtime Journal (/run/log/journal/ec2578c1f38dfaebffabd5a156ed58ae) is 8.0M, max 75.3M, 67.3M free.
Dec 13 01:53:40.209706 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:53:40.263404 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Dec 13 01:53:40.264214 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 01:53:40.903196 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:53:40.916837 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:53:40.932562 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:53:40.941076 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:53:40.945319 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:53:40.945719 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:53:40.949561 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:53:40.951196 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:53:40.954825 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:53:40.955202 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:53:40.963793 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:53:40.966356 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:53:40.969106 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:53:41.043459 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:53:41.050473 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:53:41.052783 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:53:41.055448 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:53:41.058950 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:53:41.062640 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:53:41.078621 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:53:41.088581 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:53:41.099178 kernel: loop0: detected capacity change from 0 to 114432
Dec 13 01:53:41.153139 systemd-journald[1592]: Time spent on flushing to /var/log/journal/ec2578c1f38dfaebffabd5a156ed58ae is 103.229ms for 913 entries.
Dec 13 01:53:41.153139 systemd-journald[1592]: System Journal (/var/log/journal/ec2578c1f38dfaebffabd5a156ed58ae) is 8.0M, max 195.6M, 187.6M free.
Dec 13 01:53:41.347151 systemd-journald[1592]: Received client request to flush runtime journal.
Dec 13 01:53:41.347267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:53:41.347323 kernel: loop1: detected capacity change from 0 to 114328
Dec 13 01:53:41.168543 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Dec 13 01:53:41.168567 systemd-tmpfiles[1606]: ACLs are not supported, ignoring.
Dec 13 01:53:41.181700 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:53:41.192539 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:53:41.253053 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:53:41.258218 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:53:41.286699 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:53:41.301414 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:53:41.322407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:53:41.339495 udevadm[1643]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:53:41.353947 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:53:41.378462 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:53:41.394583 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:53:41.429134 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Dec 13 01:53:41.429810 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Dec 13 01:53:41.443714 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:53:41.461193 kernel: loop2: detected capacity change from 0 to 52536
Dec 13 01:53:41.623287 kernel: loop3: detected capacity change from 0 to 194512
Dec 13 01:53:41.679178 kernel: loop4: detected capacity change from 0 to 114432
Dec 13 01:53:41.699417 kernel: loop5: detected capacity change from 0 to 114328
Dec 13 01:53:41.712192 kernel: loop6: detected capacity change from 0 to 52536
Dec 13 01:53:41.724190 kernel: loop7: detected capacity change from 0 to 194512
Dec 13 01:53:41.748499 (sd-merge)[1654]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Dec 13 01:53:41.749506 (sd-merge)[1654]: Merged extensions into '/usr'.
Dec 13 01:53:41.760029 systemd[1]: Reloading requested from client PID 1605 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:53:41.760061 systemd[1]: Reloading...
Dec 13 01:53:41.918181 zram_generator::config[1677]: No configuration found.
Dec 13 01:53:42.264647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:53:42.376383 systemd[1]: Reloading finished in 615 ms.
Dec 13 01:53:42.418661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:53:42.421774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:53:42.435439 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:53:42.448529 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:53:42.456437 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:53:42.477966 systemd[1]: Reloading requested from client PID 1732 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:53:42.478003 systemd[1]: Reloading...
Dec 13 01:53:42.522059 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:53:42.522780 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:53:42.526758 systemd-tmpfiles[1733]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:53:42.531546 systemd-tmpfiles[1733]: ACLs are not supported, ignoring.
Dec 13 01:53:42.531704 systemd-tmpfiles[1733]: ACLs are not supported, ignoring.
Dec 13 01:53:42.545973 systemd-udevd[1734]: Using default interface naming scheme 'v255'.
Dec 13 01:53:42.546382 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:53:42.546395 systemd-tmpfiles[1733]: Skipping /boot
Dec 13 01:53:42.574435 systemd-tmpfiles[1733]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:53:42.574456 systemd-tmpfiles[1733]: Skipping /boot
Dec 13 01:53:42.718161 zram_generator::config[1767]: No configuration found.
Dec 13 01:53:42.825166 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1785)
Dec 13 01:53:42.849910 (udev-worker)[1774]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 01:53:42.857208 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1785)
Dec 13 01:53:43.000075 ldconfig[1602]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:53:43.115938 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:53:43.136192 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1768)
Dec 13 01:53:43.270791 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:53:43.271009 systemd[1]: Reloading finished in 792 ms.
Dec 13 01:53:43.298565 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:53:43.303764 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:53:43.321935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:53:43.409250 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:53:43.427714 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:53:43.436302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Dec 13 01:53:43.447657 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:53:43.453472 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:53:43.455943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:53:43.462476 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:53:43.472513 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:53:43.478650 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:53:43.487506 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:53:43.492454 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:53:43.494676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:53:43.500506 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:53:43.514476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:53:43.525558 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:53:43.541460 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:53:43.543714 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:53:43.553361 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:53:43.567981 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:53:43.574458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:53:43.577991 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:53:43.580246 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:53:43.621423 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:53:43.652648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:53:43.653004 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:53:43.656023 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:53:43.661385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:53:43.662239 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:53:43.676979 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:53:43.677326 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:53:43.687147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:53:43.695689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:53:43.707852 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:53:43.712229 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:53:43.712558 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:53:43.723185 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:53:43.728266 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:53:43.744544 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:53:43.759674 lvm[1962]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:53:43.781778 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:53:43.789992 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:53:43.806481 augenrules[1973]: No rules
Dec 13 01:53:43.812981 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:53:43.819362 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:53:43.834262 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:53:43.843570 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:53:43.939231 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:53:43.952313 systemd-networkd[1946]: lo: Link UP
Dec 13 01:53:43.952333 systemd-networkd[1946]: lo: Gained carrier
Dec 13 01:53:43.954898 systemd-networkd[1946]: Enumeration completed
Dec 13 01:53:43.955084 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:53:43.959974 systemd-networkd[1946]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:43.959997 systemd-networkd[1946]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:53:43.962518 systemd-networkd[1946]: eth0: Link UP
Dec 13 01:53:43.962879 systemd-networkd[1946]: eth0: Gained carrier
Dec 13 01:53:43.962914 systemd-networkd[1946]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:53:43.965449 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:53:43.976272 systemd-networkd[1946]: eth0: DHCPv4 address 172.31.19.153/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 01:53:43.986910 systemd-resolved[1947]: Positive Trust Anchors:
Dec 13 01:53:43.986952 systemd-resolved[1947]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:53:43.987016 systemd-resolved[1947]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:53:43.995845 systemd-resolved[1947]: Defaulting to hostname 'linux'.
Dec 13 01:53:43.999382 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:53:44.001810 systemd[1]: Reached target network.target - Network.
Dec 13 01:53:44.003527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:53:44.005757 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:53:44.007861 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:53:44.010187 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:53:44.012832 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:53:44.014908 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:53:44.017193 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:53:44.019467 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:53:44.019518 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:53:44.021203 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:53:44.024537 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:53:44.029201 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:53:44.044419 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:53:44.047456 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:53:44.049704 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:53:44.051503 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:53:44.053410 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:53:44.053474 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:53:44.063305 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:53:44.068473 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:53:44.074533 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:53:44.084548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:53:44.092229 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:53:44.095289 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:53:44.098489 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:53:44.112657 systemd[1]: Started ntpd.service - Network Time Service.
Dec 13 01:53:44.120948 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:53:44.129393 systemd[1]: Starting setup-oem.service - Setup OEM...
Dec 13 01:53:44.138522 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:53:44.146442 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:53:44.160431 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:53:44.163415 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:53:44.165456 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:53:44.176022 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:53:44.184841 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:53:44.191823 jq[1997]: false
Dec 13 01:53:44.199005 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:53:44.201208 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:53:44.216791 dbus-daemon[1996]: [system] SELinux support is enabled
Dec 13 01:53:44.217414 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:53:44.224798 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:53:44.224862 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:53:44.227459 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:53:44.227514 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:53:44.272276 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: ntpd 4.2.8p17@1.4004-o Thu Dec 12 22:42:18 UTC 2024 (1): Starting
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: ----------------------------------------------------
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: corporation. Support and training for ntp-4 are
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: available at https://www.nwtime.org/support
Dec 13 01:53:44.272951 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: ----------------------------------------------------
Dec 13 01:53:44.272342 ntpd[2000]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Dec 13 01:53:44.272363 ntpd[2000]: ----------------------------------------------------
Dec 13 01:53:44.272383 ntpd[2000]: ntp-4 is maintained by Network Time Foundation,
Dec 13 01:53:44.272402 ntpd[2000]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Dec 13 01:53:44.272420 ntpd[2000]: corporation. Support and training for ntp-4 are
Dec 13 01:53:44.272439 ntpd[2000]: available at https://www.nwtime.org/support
Dec 13 01:53:44.272458 ntpd[2000]: ----------------------------------------------------
Dec 13 01:53:44.289509 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: proto: precision = 0.096 usec (-23)
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: basedate set to 2024-11-30
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listen normally on 3 eth0 172.31.19.153:123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listen normally on 4 lo [::1]:123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: bind(21) AF_INET6 fe80::4ed:81ff:fe04:6e6b%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: unable to create socket on eth0 (5) for fe80::4ed:81ff:fe04:6e6b%2#123
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: failed to init interface for address fe80::4ed:81ff:fe04:6e6b%2
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:53:44.300682 ntpd[2000]: 13 Dec 01:53:44 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:53:44.274330 dbus-daemon[1996]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1946 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 01:53:44.277334 ntpd[2000]: proto: precision = 0.096 usec (-23)
Dec 13 01:53:44.277752 ntpd[2000]: basedate set to 2024-11-30
Dec 13 01:53:44.277775 ntpd[2000]: gps base set to 2024-12-01 (week 2343)
Dec 13 01:53:44.287637 ntpd[2000]: Listen and drop on 0 v6wildcard [::]:123
Dec 13 01:53:44.287718 ntpd[2000]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Dec 13 01:53:44.288009 ntpd[2000]: Listen normally on 2 lo 127.0.0.1:123
Dec 13 01:53:44.288073 ntpd[2000]: Listen normally on 3 eth0 172.31.19.153:123
Dec 13 01:53:44.288183 ntpd[2000]: Listen normally on 4 lo [::1]:123
Dec 13 01:53:44.314698 systemd[1]: Finished setup-oem.service - Setup OEM.
Dec 13 01:53:44.288261 ntpd[2000]: bind(21) AF_INET6 fe80::4ed:81ff:fe04:6e6b%2#123 flags 0x11 failed: Cannot assign requested address
Dec 13 01:53:44.288306 ntpd[2000]: unable to create socket on eth0 (5) for fe80::4ed:81ff:fe04:6e6b%2#123
Dec 13 01:53:44.288334 ntpd[2000]: failed to init interface for address fe80::4ed:81ff:fe04:6e6b%2
Dec 13 01:53:44.288387 ntpd[2000]: Listening on routing socket on fd #21 for interface updates
Dec 13 01:53:44.294439 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:53:44.294492 ntpd[2000]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Dec 13 01:53:44.338376 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:53:44.338736 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:53:44.344765 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:53:44.347686 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:53:44.383219 jq[2009]: true
Dec 13 01:53:44.399141 update_engine[2008]: I20241213 01:53:44.392876 2008 main.cc:92] Flatcar Update Engine starting
Dec 13 01:53:44.405664 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:53:44.411737 update_engine[2008]: I20241213 01:53:44.411644 2008 update_check_scheduler.cc:74] Next update check in 8m53s
Dec 13 01:53:44.413063 extend-filesystems[1998]: Found loop4
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found loop5
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found loop6
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found loop7
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p1
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p2
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p3
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found usr
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p4
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p6
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p7
Dec 13 01:53:44.429381 extend-filesystems[1998]: Found nvme0n1p9
Dec 13 01:53:44.429381 extend-filesystems[1998]: Checking size of /dev/nvme0n1p9
Dec 13 01:53:44.424956 (ntainerd)[2034]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:53:44.517104 tar[2020]: linux-arm64/helm
Dec 13 01:53:44.519235 extend-filesystems[1998]: Resized partition /dev/nvme0n1p9
Dec 13 01:53:44.526255 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Dec 13 01:53:44.428425 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:53:44.526733 extend-filesystems[2048]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:53:44.444948 systemd-logind[2005]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 01:53:44.560784 jq[2039]: true
Dec 13 01:53:44.444984 systemd-logind[2005]: Watching system buttons on /dev/input/event1 (Sleep Button)
Dec 13 01:53:44.445438 systemd-logind[2005]: New seat seat0.
Dec 13 01:53:44.448875 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:53:44.550695 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Dec 13 01:53:44.607164 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 01:53:44.675208 extend-filesystems[2048]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 01:53:44.675208 extend-filesystems[2048]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:53:44.675208 extend-filesystems[2048]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 01:53:44.671990 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:53:44.689401 extend-filesystems[1998]: Resized filesystem in /dev/nvme0n1p9
Dec 13 01:53:44.677685 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:53:44.717574 coreos-metadata[1995]: Dec 13 01:53:44.717 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 01:53:44.730571 bash[2072]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:53:44.729675 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:53:44.737760 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (1774)
Dec 13 01:53:44.737868 coreos-metadata[1995]: Dec 13 01:53:44.731 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Dec 13 01:53:44.737868 coreos-metadata[1995]: Dec 13 01:53:44.737 INFO Fetch successful
Dec 13 01:53:44.737868 coreos-metadata[1995]: Dec 13 01:53:44.737 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Dec 13 01:53:44.760501 coreos-metadata[1995]: Dec 13 01:53:44.751 INFO Fetch successful
Dec 13 01:53:44.760501 coreos-metadata[1995]: Dec 13 01:53:44.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Dec 13 01:53:44.763503 coreos-metadata[1995]: Dec 13 01:53:44.763 INFO Fetch successful
Dec 13 01:53:44.763503 coreos-metadata[1995]: Dec 13 01:53:44.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Dec 13 01:53:44.766811 coreos-metadata[1995]: Dec 13 01:53:44.766 INFO Fetch successful
Dec 13 01:53:44.766811 coreos-metadata[1995]: Dec 13 01:53:44.766 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Dec 13 01:53:44.768903 systemd[1]: Starting sshkeys.service...
Dec 13 01:53:44.771185 coreos-metadata[1995]: Dec 13 01:53:44.769 INFO Fetch failed with 404: resource not found
Dec 13 01:53:44.771185 coreos-metadata[1995]: Dec 13 01:53:44.769 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Dec 13 01:53:44.776297 coreos-metadata[1995]: Dec 13 01:53:44.772 INFO Fetch successful
Dec 13 01:53:44.776297 coreos-metadata[1995]: Dec 13 01:53:44.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Dec 13 01:53:44.779505 coreos-metadata[1995]: Dec 13 01:53:44.777 INFO Fetch successful
Dec 13 01:53:44.779505 coreos-metadata[1995]: Dec 13 01:53:44.777 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Dec 13 01:53:44.781682 coreos-metadata[1995]: Dec 13 01:53:44.781 INFO Fetch successful
Dec 13 01:53:44.781682 coreos-metadata[1995]: Dec 13 01:53:44.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Dec 13 01:53:44.790165 coreos-metadata[1995]: Dec 13 01:53:44.787 INFO Fetch successful
Dec 13 01:53:44.790165 coreos-metadata[1995]: Dec 13 01:53:44.787 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Dec 13 01:53:44.794145 coreos-metadata[1995]: Dec 13 01:53:44.791 INFO Fetch successful
Dec 13 01:53:44.858733 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Dec 13 01:53:44.875873 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Dec 13 01:53:44.908576 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:53:44.916326 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:53:45.013079 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 01:53:45.013389 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Dec 13 01:53:45.017260 dbus-daemon[1996]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2028 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 01:53:45.024395 systemd-networkd[1946]: eth0: Gained IPv6LL
Dec 13 01:53:45.067267 systemd[1]: Starting polkit.service - Authorization Manager...
Dec 13 01:53:45.071295 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:53:45.110608 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:53:45.120886 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Dec 13 01:53:45.134741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:53:45.140910 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:53:45.219368 polkitd[2154]: Started polkitd version 121 Dec 13 01:53:45.268228 containerd[2034]: time="2024-12-13T01:53:45.266036938Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:53:45.313438 polkitd[2154]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 01:53:45.313574 polkitd[2154]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 01:53:45.316671 polkitd[2154]: Finished loading, compiling and executing 2 rules Dec 13 01:53:45.321640 dbus-daemon[1996]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 01:53:45.321942 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 01:53:45.329222 polkitd[2154]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 01:53:45.338547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:53:45.374439 coreos-metadata[2127]: Dec 13 01:53:45.373 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 01:53:45.390066 coreos-metadata[2127]: Dec 13 01:53:45.375 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Dec 13 01:53:45.390066 coreos-metadata[2127]: Dec 13 01:53:45.388 INFO Fetch successful Dec 13 01:53:45.390066 coreos-metadata[2127]: Dec 13 01:53:45.388 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 01:53:45.392215 coreos-metadata[2127]: Dec 13 01:53:45.391 INFO Fetch successful Dec 13 01:53:45.404083 unknown[2127]: wrote ssh authorized keys file for user: core Dec 13 01:53:45.471545 locksmithd[2043]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:53:45.482714 systemd-hostnamed[2028]: Hostname set to (transient) Dec 13 01:53:45.485217 systemd-resolved[1947]: System hostname changed to 'ip-172-31-19-153'. Dec 13 01:53:45.498487 update-ssh-keys[2199]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:53:45.496957 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 01:53:45.508256 systemd[1]: Finished sshkeys.service. Dec 13 01:53:45.517165 amazon-ssm-agent[2166]: Initializing new seelog logger Dec 13 01:53:45.517165 amazon-ssm-agent[2166]: New Seelog Logger Creation Complete Dec 13 01:53:45.517165 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.517165 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.517740 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 processing appconfig overrides Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 processing appconfig overrides Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
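
polkitd's "Finished loading, compiling and executing 2 rules" above covers both directories it names: *.rules files from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d are merged and processed in lexical filename order. A quick way to preview what it will pick up (a sketch; the combined lexical ordering is documented polkit behavior, though tie-breaking between the two trees is my reading of it):

    from pathlib import Path

    dirs = [Path("/etc/polkit-1/rules.d"), Path("/usr/share/polkit-1/rules.d")]
    # polkitd sorts rule files from both trees together by name, so a
    # 10-*.rules file in either tree runs before a 50-*.rules file.
    rules = sorted((p for d in dirs if d.is_dir() for p in d.glob("*.rules")),
                   key=lambda p: p.name)
    for p in rules:
        print(p)
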
Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 processing appconfig overrides Dec 13 01:53:45.521359 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO Proxy environment variables: Dec 13 01:53:45.524394 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.524394 amazon-ssm-agent[2166]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 01:53:45.524568 amazon-ssm-agent[2166]: 2024/12/13 01:53:45 processing appconfig overrides Dec 13 01:53:45.551432 containerd[2034]: time="2024-12-13T01:53:45.550560419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.562954 containerd[2034]: time="2024-12-13T01:53:45.562858535Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564299915Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564373535Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564682127Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564717515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564837203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565167 containerd[2034]: time="2024-12-13T01:53:45.564868823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565493 containerd[2034]: time="2024-12-13T01:53:45.565217327Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565493 containerd[2034]: time="2024-12-13T01:53:45.565251899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565493 containerd[2034]: time="2024-12-13T01:53:45.565300355Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565493 containerd[2034]: time="2024-12-13T01:53:45.565327067Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.565716 containerd[2034]: time="2024-12-13T01:53:45.565492955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:53:45.566933 containerd[2034]: time="2024-12-13T01:53:45.565920515Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:53:45.570051 containerd[2034]: time="2024-12-13T01:53:45.567702623Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:53:45.570051 containerd[2034]: time="2024-12-13T01:53:45.567760271Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:53:45.570051 containerd[2034]: time="2024-12-13T01:53:45.567971939Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:53:45.570051 containerd[2034]: time="2024-12-13T01:53:45.568068815Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.585637427Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.586096619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.586418615Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.586583543Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.586736303Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:53:45.588205 containerd[2034]: time="2024-12-13T01:53:45.587912363Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.589951331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590248595Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590283791Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590314343Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590346743Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590380811Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590412467Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590463095Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590500571Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590533415Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590563655Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590591555Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590634059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592151 containerd[2034]: time="2024-12-13T01:53:45.590665799Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590697191Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590729183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590759723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590793551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590822927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590854007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590885603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590923547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590952083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.590984687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.591027767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.592889 containerd[2034]: time="2024-12-13T01:53:45.591063767Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.591106427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.593520719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.593589323Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.593888771Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.593957483Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.593986835Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.594046463Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.594073031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.595184903Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.595251503Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:53:45.599995 containerd[2034]: time="2024-12-13T01:53:45.595291163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:53:45.600603 containerd[2034]: time="2024-12-13T01:53:45.596388971Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:53:45.600603 containerd[2034]: time="2024-12-13T01:53:45.596734259Z" level=info msg="Connect containerd service" Dec 13 01:53:45.600603 containerd[2034]: time="2024-12-13T01:53:45.597187847Z" level=info msg="using legacy CRI server" Dec 13 01:53:45.600603 containerd[2034]: time="2024-12-13T01:53:45.597216923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:53:45.600603 containerd[2034]: time="2024-12-13T01:53:45.599503907Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601321811Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:53:45.605595 
containerd[2034]: time="2024-12-13T01:53:45.601607111Z" level=info msg="Start subscribing containerd event" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601678991Z" level=info msg="Start recovering state" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601806527Z" level=info msg="Start event monitor" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601831691Z" level=info msg="Start snapshots syncer" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601854851Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.601875551Z" level=info msg="Start streaming server" Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.604563131Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:53:45.605595 containerd[2034]: time="2024-12-13T01:53:45.604684199Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:53:45.606264 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:53:45.609289 containerd[2034]: time="2024-12-13T01:53:45.608873388Z" level=info msg="containerd successfully booted in 0.348672s" Dec 13 01:53:45.619213 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO http_proxy: Dec 13 01:53:45.721135 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO no_proxy: Dec 13 01:53:45.817707 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO https_proxy: Dec 13 01:53:45.915891 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO Checking if agent identity type OnPrem can be assumed Dec 13 01:53:46.014424 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO Checking if agent identity type EC2 can be assumed Dec 13 01:53:46.114894 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO Agent will take identity from EC2 Dec 13 01:53:46.214091 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:46.312779 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] using named pipe channel for IPC Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] Starting Core Agent Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [amazon-ssm-agent] registrar detected. Attempting registration Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [Registrar] Starting registrar module Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:45 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:46 INFO [EC2Identity] EC2 registration was successful. 
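
At this point containerd is fully up: the CRI plugin loaded with runc as the default runtime (note SystemdCgroup:true in the config dump above, matching the systemd cgroup driver the kubelet reports later), and the daemon is serving on /run/containerd/containerd.sock plus a companion ttrpc socket. A trivial liveness probe, assuming the ctr client that ships with containerd is installed:

    import subprocess

    SOCK = "/run/containerd/containerd.sock"

    # `ctr version` round-trips through the daemon, so success confirms the
    # GRPC endpoint logged as "serving..." above is actually answering.
    subprocess.run(["ctr", "--address", SOCK, "version"], check=True)
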
Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:46 INFO [CredentialRefresher] credentialRefresher has started Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:46 INFO [CredentialRefresher] Starting credentials refresher loop Dec 13 01:53:46.366892 amazon-ssm-agent[2166]: 2024-12-13 01:53:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Dec 13 01:53:46.412337 amazon-ssm-agent[2166]: 2024-12-13 01:53:46 INFO [CredentialRefresher] Next credential rotation will be in 30.816656804166666 minutes Dec 13 01:53:46.637406 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:46.659683 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:46.685337 tar[2020]: linux-arm64/LICENSE Dec 13 01:53:46.686328 tar[2020]: linux-arm64/README.md Dec 13 01:53:46.720222 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:53:47.273073 ntpd[2000]: Listen normally on 6 eth0 [fe80::4ed:81ff:fe04:6e6b%2]:123 Dec 13 01:53:47.274072 ntpd[2000]: 13 Dec 01:53:47 ntpd[2000]: Listen normally on 6 eth0 [fe80::4ed:81ff:fe04:6e6b%2]:123 Dec 13 01:53:47.421517 amazon-ssm-agent[2166]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Dec 13 01:53:47.461437 kubelet[2224]: E1213 01:53:47.461178 2224 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:47.472510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:47.472832 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:53:47.473341 systemd[1]: kubelet.service: Consumed 1.290s CPU time. Dec 13 01:53:47.524477 amazon-ssm-agent[2166]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2236) started Dec 13 01:53:47.625223 amazon-ssm-agent[2166]: 2024-12-13 01:53:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Dec 13 01:53:47.935958 sshd_keygen[2038]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:53:47.974741 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:53:47.987656 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:53:47.994647 systemd[1]: Started sshd@0-172.31.19.153:22-139.178.68.195:44488.service - OpenSSH per-connection server daemon (139.178.68.195:44488). Dec 13 01:53:48.005368 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:53:48.007221 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:53:48.020630 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:53:48.054057 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:53:48.065707 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:53:48.074692 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:53:48.078530 systemd[1]: Reached target getty.target - Login Prompts. 
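
The kubelet failure above (run.go:74, /var/lib/kubelet/config.yaml missing) is the expected pre-bootstrap state on a node like this: that file is normally written by kubeadm during init/join, so until then the unit exits 1 and systemd keeps rescheduling it, as the restart counters later in the log show. For orientation, a hedged sketch of the kind of KubeletConfiguration the file carries (illustrative values, not the file this node eventually used; writing it requires root):

    from pathlib import Path

    # Minimal KubeletConfiguration; kubeadm normally writes a much fuller
    # version during init/join. Field names are real kubelet config fields;
    # the values are illustrative only. cgroupDriver: systemd matches the
    # runc SystemdCgroup:true setting in containerd's CRI config above.
    CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    """

    path = Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(CONFIG)
    print("wrote", path)
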
Dec 13 01:53:48.080835 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:53:48.084081 systemd[1]: Startup finished in 1.141s (kernel) + 9.203s (initrd) + 9.130s (userspace) = 19.475s. Dec 13 01:53:48.344991 sshd[2256]: Accepted publickey for core from 139.178.68.195 port 44488 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:48.348364 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:48.368209 systemd-logind[2005]: New session 1 of user core. Dec 13 01:53:48.369699 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:53:48.376633 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:53:48.410790 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:53:48.421689 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:53:48.442091 (systemd)[2271]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:53:48.689867 systemd[2271]: Queued start job for default target default.target. Dec 13 01:53:48.698008 systemd[2271]: Created slice app.slice - User Application Slice. Dec 13 01:53:48.698102 systemd[2271]: Reached target paths.target - Paths. Dec 13 01:53:48.698201 systemd[2271]: Reached target timers.target - Timers. Dec 13 01:53:48.701215 systemd[2271]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:53:48.730975 systemd[2271]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:53:48.731607 systemd[2271]: Reached target sockets.target - Sockets. Dec 13 01:53:48.731815 systemd[2271]: Reached target basic.target - Basic System. Dec 13 01:53:48.732165 systemd[2271]: Reached target default.target - Main User Target. Dec 13 01:53:48.732500 systemd[2271]: Startup finished in 277ms. Dec 13 01:53:48.732728 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:53:48.745529 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:53:48.909749 systemd[1]: Started sshd@1-172.31.19.153:22-139.178.68.195:39124.service - OpenSSH per-connection server daemon (139.178.68.195:39124). Dec 13 01:53:49.100435 sshd[2282]: Accepted publickey for core from 139.178.68.195 port 39124 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:49.103538 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:49.122611 systemd-logind[2005]: New session 2 of user core. Dec 13 01:53:49.124446 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:53:49.256444 sshd[2282]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:49.260551 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 01:53:49.263378 systemd[1]: sshd@1-172.31.19.153:22-139.178.68.195:39124.service: Deactivated successfully. Dec 13 01:53:49.267439 systemd-logind[2005]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:53:49.269261 systemd-logind[2005]: Removed session 2. Dec 13 01:53:49.295605 systemd[1]: Started sshd@2-172.31.19.153:22-139.178.68.195:39138.service - OpenSSH per-connection server daemon (139.178.68.195:39138). 
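
The SHA256:3zfVqstn... token in each "Accepted publickey" line is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing padding stripped. It can be recomputed from the authorized_keys file the earlier update-ssh-keys step wrote (a sketch; plain "type blob comment" entries assumed, no option prefixes):

    import base64, hashlib

    for line in open("/home/core/.ssh/authorized_keys"):
        parts = line.split()
        if line.lstrip().startswith("#") or len(parts) < 2:
            continue  # skip comments and blank lines
        blob = base64.b64decode(parts[1])
        fp = base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")
        # Same format sshd logs: "Accepted publickey ... SHA256:<fp>"
        print(f"SHA256:{fp}", parts[0])
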
Dec 13 01:53:49.464728 sshd[2289]: Accepted publickey for core from 139.178.68.195 port 39138 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:49.466771 sshd[2289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:49.474309 systemd-logind[2005]: New session 3 of user core. Dec 13 01:53:49.485429 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:53:49.607422 sshd[2289]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:49.612678 systemd[1]: sshd@2-172.31.19.153:22-139.178.68.195:39138.service: Deactivated successfully. Dec 13 01:53:49.612679 systemd-logind[2005]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:53:49.616263 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:53:49.620958 systemd-logind[2005]: Removed session 3. Dec 13 01:53:49.650318 systemd[1]: Started sshd@3-172.31.19.153:22-139.178.68.195:39150.service - OpenSSH per-connection server daemon (139.178.68.195:39150). Dec 13 01:53:49.815244 sshd[2296]: Accepted publickey for core from 139.178.68.195 port 39150 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:49.818349 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:49.827019 systemd-logind[2005]: New session 4 of user core. Dec 13 01:53:49.833384 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:53:49.957875 sshd[2296]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:49.963261 systemd-logind[2005]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:53:49.963821 systemd[1]: sshd@3-172.31.19.153:22-139.178.68.195:39150.service: Deactivated successfully. Dec 13 01:53:49.967929 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:53:49.972656 systemd-logind[2005]: Removed session 4. Dec 13 01:53:49.999634 systemd[1]: Started sshd@4-172.31.19.153:22-139.178.68.195:39164.service - OpenSSH per-connection server daemon (139.178.68.195:39164). Dec 13 01:53:50.167248 sshd[2303]: Accepted publickey for core from 139.178.68.195 port 39164 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:50.169772 sshd[2303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:50.178482 systemd-logind[2005]: New session 5 of user core. Dec 13 01:53:50.188358 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:53:50.303726 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:53:50.304425 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:50.323285 sudo[2306]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:50.346467 sshd[2303]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:50.353637 systemd[1]: sshd@4-172.31.19.153:22-139.178.68.195:39164.service: Deactivated successfully. Dec 13 01:53:50.356913 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:53:50.358656 systemd-logind[2005]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:53:50.360579 systemd-logind[2005]: Removed session 5. Dec 13 01:53:50.389579 systemd[1]: Started sshd@5-172.31.19.153:22-139.178.68.195:39166.service - OpenSSH per-connection server daemon (139.178.68.195:39166). 
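
The first sudo entry above runs setenforce 1, switching SELinux from permissive to enforcing at runtime rather than via a boot parameter. The current mode is visible through selinuxfs (a sketch; assumes an SELinux-enabled kernel):

    from pathlib import Path

    enforce = Path("/sys/fs/selinux/enforce")
    if enforce.exists():
        # "1" = enforcing, "0" = permissive; setenforce writes this file.
        print("enforcing" if enforce.read_text().strip() == "1" else "permissive")
    else:
        print("SELinux not enabled on this kernel")
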
Dec 13 01:53:50.556283 sshd[2311]: Accepted publickey for core from 139.178.68.195 port 39166 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:50.558879 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:50.568179 systemd-logind[2005]: New session 6 of user core. Dec 13 01:53:50.575387 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:53:50.681941 sudo[2315]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:53:50.683205 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:50.689783 sudo[2315]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:50.699886 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:53:50.700663 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:50.724629 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:50.738811 auditctl[2318]: No rules Dec 13 01:53:50.739609 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:53:50.739965 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:50.759904 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:53:50.801913 augenrules[2336]: No rules Dec 13 01:53:50.805201 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:53:50.807277 sudo[2314]: pam_unix(sudo:session): session closed for user root Dec 13 01:53:50.831454 sshd[2311]: pam_unix(sshd:session): session closed for user core Dec 13 01:53:50.836426 systemd[1]: sshd@5-172.31.19.153:22-139.178.68.195:39166.service: Deactivated successfully. Dec 13 01:53:50.839519 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:53:50.841987 systemd-logind[2005]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:53:50.843794 systemd-logind[2005]: Removed session 6. Dec 13 01:53:50.875841 systemd[1]: Started sshd@6-172.31.19.153:22-139.178.68.195:39178.service - OpenSSH per-connection server daemon (139.178.68.195:39178). Dec 13 01:53:51.042458 sshd[2344]: Accepted publickey for core from 139.178.68.195 port 39178 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:53:51.045095 sshd[2344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:53:51.054479 systemd-logind[2005]: New session 7 of user core. Dec 13 01:53:51.061400 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:53:51.164782 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:53:51.165895 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:53:51.070712 systemd-resolved[1947]: Clock change detected. Flushing caches. Dec 13 01:53:51.079871 systemd-journald[1592]: Time jumped backwards, rotating. Dec 13 01:53:51.477031 systemd[1]: Starting docker.service - Docker Application Container Engine... 
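
The audit-rules restart above ends with auditctl and augenrules both reporting "No rules", which follows directly from the preceding sudo rm: augenrules concatenates every *.rules file under /etc/audit/rules.d into the kernel ruleset, and that directory was just emptied of its two files. To verify the live state (a sketch; auditctl installed, run as root):

    import subprocess
    from pathlib import Path

    # What augenrules would compile: every *.rules file under rules.d.
    for p in sorted(Path("/etc/audit/rules.d").glob("*.rules")):
        print("source:", p)

    # What the kernel actually has loaded; prints "No rules" here.
    subprocess.run(["auditctl", "-l"], check=True)
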
Dec 13 01:53:51.477339 (dockerd)[2364]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:53:51.871788 dockerd[2364]: time="2024-12-13T01:53:51.871084999Z" level=info msg="Starting up" Dec 13 01:53:52.011141 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport746692566-merged.mount: Deactivated successfully. Dec 13 01:53:52.283414 dockerd[2364]: time="2024-12-13T01:53:52.282983742Z" level=info msg="Loading containers: start." Dec 13 01:53:52.439690 kernel: Initializing XFRM netlink socket Dec 13 01:53:52.471311 (udev-worker)[2387]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:53:52.551371 systemd-networkd[1946]: docker0: Link UP Dec 13 01:53:52.575979 dockerd[2364]: time="2024-12-13T01:53:52.575827159Z" level=info msg="Loading containers: done." Dec 13 01:53:52.599632 dockerd[2364]: time="2024-12-13T01:53:52.599469559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:53:52.600451 dockerd[2364]: time="2024-12-13T01:53:52.599850427Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:53:52.600451 dockerd[2364]: time="2024-12-13T01:53:52.600071815Z" level=info msg="Daemon has completed initialization" Dec 13 01:53:52.666099 dockerd[2364]: time="2024-12-13T01:53:52.665993719Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:53:52.667447 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:53:53.783545 containerd[2034]: time="2024-12-13T01:53:53.783182661Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:53:54.518058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662684984.mount: Deactivated successfully. 
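
dockerd's warning about not using native diff is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in this kernel, docker avoids overlayfs's native diff path on overlay2, which mainly slows image builds. Once the "API listen on /run/docker.sock" line appears, the chosen driver can be confirmed (a sketch; docker CLI assumed present):

    import json, subprocess

    out = subprocess.run(["docker", "info", "--format", "{{json .}}"],
                         capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    # Matches the storage-driver=overlay2 field in the daemon log above.
    print("driver:", info["Driver"])
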
Dec 13 01:53:56.287269 containerd[2034]: time="2024-12-13T01:53:56.287210709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:56.290798 containerd[2034]: time="2024-12-13T01:53:56.290722845Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=32201250" Dec 13 01:53:56.292528 containerd[2034]: time="2024-12-13T01:53:56.292444521Z" level=info msg="ImageCreate event name:\"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:56.298200 containerd[2034]: time="2024-12-13T01:53:56.298146369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:56.301096 containerd[2034]: time="2024-12-13T01:53:56.300495909Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"32198050\" in 2.5172547s" Dec 13 01:53:56.301096 containerd[2034]: time="2024-12-13T01:53:56.300553521Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 01:53:56.339219 containerd[2034]: time="2024-12-13T01:53:56.339155110Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:53:57.464118 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:53:57.472936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:53:57.928891 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:53:57.932805 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:53:58.040098 kubelet[2580]: E1213 01:53:58.040031 2580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:53:58.051773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:53:58.052099 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
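
Each "Pulled image ... repo digest ..." line above records the content digest the tag resolved to at pull time; re-pulling by that digest instead of the tag pins the exact manifest. With the ctr client against the containerd socket (a sketch; digest copied from the log, root typically required):

    import subprocess

    REF = ("registry.k8s.io/kube-apiserver@"
           "sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26")
    # Pull by digest: immune to the tag being re-pointed later.
    subprocess.run(["ctr", "--address", "/run/containerd/containerd.sock",
                    "images", "pull", REF], check=True)
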
Dec 13 01:53:58.461431 containerd[2034]: time="2024-12-13T01:53:58.459367980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:58.462383 containerd[2034]: time="2024-12-13T01:53:58.462323580Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=29381297" Dec 13 01:53:58.464349 containerd[2034]: time="2024-12-13T01:53:58.464274096Z" level=info msg="ImageCreate event name:\"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:58.476561 containerd[2034]: time="2024-12-13T01:53:58.474694740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:58.478450 containerd[2034]: time="2024-12-13T01:53:58.477751584Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"30783618\" in 2.138530834s" Dec 13 01:53:58.478450 containerd[2034]: time="2024-12-13T01:53:58.477817728Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 01:53:58.516790 containerd[2034]: time="2024-12-13T01:53:58.516742500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:53:59.804147 containerd[2034]: time="2024-12-13T01:53:59.804090891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.806676 containerd[2034]: time="2024-12-13T01:53:59.806546007Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=15765640" Dec 13 01:53:59.807534 containerd[2034]: time="2024-12-13T01:53:59.807436431Z" level=info msg="ImageCreate event name:\"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.813384 containerd[2034]: time="2024-12-13T01:53:59.813263895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:53:59.815917 containerd[2034]: time="2024-12-13T01:53:59.815741559Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"17167979\" in 1.298752159s" Dec 13 01:53:59.815917 containerd[2034]: time="2024-12-13T01:53:59.815795859Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 01:53:59.853042 
containerd[2034]: time="2024-12-13T01:53:59.852692187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:54:01.298824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4269029420.mount: Deactivated successfully. Dec 13 01:54:01.802845 containerd[2034]: time="2024-12-13T01:54:01.802781369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.805145 containerd[2034]: time="2024-12-13T01:54:01.805082945Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=25273977" Dec 13 01:54:01.806953 containerd[2034]: time="2024-12-13T01:54:01.806872301Z" level=info msg="ImageCreate event name:\"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.811770 containerd[2034]: time="2024-12-13T01:54:01.811636565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:01.813308 containerd[2034]: time="2024-12-13T01:54:01.813118649Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"25272996\" in 1.96033861s" Dec 13 01:54:01.813308 containerd[2034]: time="2024-12-13T01:54:01.813172313Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 01:54:01.850231 containerd[2034]: time="2024-12-13T01:54:01.850172393Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:54:02.492322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1792362528.mount: Deactivated successfully. 
Dec 13 01:54:03.651640 containerd[2034]: time="2024-12-13T01:54:03.651021498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.653176 containerd[2034]: time="2024-12-13T01:54:03.653104362Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:54:03.655673 containerd[2034]: time="2024-12-13T01:54:03.655556610Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.661790 containerd[2034]: time="2024-12-13T01:54:03.661681338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:03.665226 containerd[2034]: time="2024-12-13T01:54:03.664990362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.814754333s" Dec 13 01:54:03.665226 containerd[2034]: time="2024-12-13T01:54:03.665050926Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:54:03.703621 containerd[2034]: time="2024-12-13T01:54:03.703253982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:54:04.253338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730863740.mount: Deactivated successfully. 
Dec 13 01:54:04.264559 containerd[2034]: time="2024-12-13T01:54:04.264482117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.266611 containerd[2034]: time="2024-12-13T01:54:04.266327561Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:54:04.268090 containerd[2034]: time="2024-12-13T01:54:04.268019969Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.272939 containerd[2034]: time="2024-12-13T01:54:04.272844821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:04.274721 containerd[2034]: time="2024-12-13T01:54:04.274465505Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 571.156047ms" Dec 13 01:54:04.274721 containerd[2034]: time="2024-12-13T01:54:04.274518653Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:54:04.311712 containerd[2034]: time="2024-12-13T01:54:04.311642885Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:54:04.891987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376052991.mount: Deactivated successfully. Dec 13 01:54:08.214133 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:54:08.222983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
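
One detail worth noticing above: pause:3.9 is pulled even though containerd's CRI config earlier pinned SandboxImage:registry.k8s.io/pause:3.8; the client pre-pulling control-plane images here requests its own pause tag, so both can coexist in the image store. Listing what actually landed via CRI (a sketch; crictl assumed installed):

    import subprocess

    # crictl speaks CRI to containerd over the same socket kubelet uses.
    subprocess.run(["crictl", "--runtime-endpoint",
                    "unix:///run/containerd/containerd.sock", "images"],
                   check=True)
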
Dec 13 01:54:08.538350 containerd[2034]: time="2024-12-13T01:54:08.538105126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:08.571125 containerd[2034]: time="2024-12-13T01:54:08.571051774Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Dec 13 01:54:08.598940 containerd[2034]: time="2024-12-13T01:54:08.598849655Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:08.645633 containerd[2034]: time="2024-12-13T01:54:08.643107311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:08.648086 containerd[2034]: time="2024-12-13T01:54:08.647984651Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.33627651s" Dec 13 01:54:08.648086 containerd[2034]: time="2024-12-13T01:54:08.648077531Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 01:54:08.801931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:08.807978 (kubelet)[2732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:54:08.891589 kubelet[2732]: E1213 01:54:08.891452 2732 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:54:08.896846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:54:08.897193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:54:14.272115 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:14.284101 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:14.330239 systemd[1]: Reloading requested from client PID 2797 ('systemctl') (unit session-7.scope)... Dec 13 01:54:14.330467 systemd[1]: Reloading... Dec 13 01:54:14.560615 zram_generator::config[2841]: No configuration found. Dec 13 01:54:14.789148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:14.959881 systemd[1]: Reloading finished in 628 ms. Dec 13 01:54:15.056798 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:54:15.056969 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:54:15.057462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:15.066253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
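
The stop/start cycle above is driven by the kubelet unit's restart policy: each failed exit produces a "Scheduled restart job, restart counter is at N" line, and the counter feeds systemd's start-rate limiting. The relevant properties can be read back directly (a sketch):

    import subprocess

    # Restart policy, delay between attempts, and the live restart count.
    subprocess.run(["systemctl", "show", "kubelet.service",
                    "-p", "Restart,RestartUSec,NRestarts"], check=True)
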
Dec 13 01:54:15.315149 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 01:54:15.495892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:15.503135 (kubelet)[2906]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:15.588982 kubelet[2906]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:15.588982 kubelet[2906]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:15.588982 kubelet[2906]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:15.589538 kubelet[2906]: I1213 01:54:15.589375 2906 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:16.581198 kubelet[2906]: I1213 01:54:16.581134 2906 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:16.581198 kubelet[2906]: I1213 01:54:16.581187 2906 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:16.581685 kubelet[2906]: I1213 01:54:16.581541 2906 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:16.617208 kubelet[2906]: I1213 01:54:16.615845 2906 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:16.622513 kubelet[2906]: E1213 01:54:16.622283 2906 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.19.153:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.635211 kubelet[2906]: I1213 01:54:16.635128 2906 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:54:16.635616 kubelet[2906]: I1213 01:54:16.635585 2906 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:16.635930 kubelet[2906]: I1213 01:54:16.635893 2906 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:16.636097 kubelet[2906]: I1213 01:54:16.635932 2906 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:16.636097 kubelet[2906]: I1213 01:54:16.635954 2906 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:54:16.638377 kubelet[2906]: I1213 01:54:16.638324 2906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:16.642759 kubelet[2906]: I1213 01:54:16.642704 2906 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:16.642759 kubelet[2906]: I1213 01:54:16.642759 2906 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:16.644634 kubelet[2906]: I1213 01:54:16.642807 2906 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:16.644634 kubelet[2906]: I1213 01:54:16.642841 2906 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:16.646005 kubelet[2906]: W1213 01:54:16.645918 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.646005 kubelet[2906]: E1213 01:54:16.646012 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.646589 kubelet[2906]: W1213 01:54:16.646503 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-153&limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 
01:54:16.646732 kubelet[2906]: E1213 01:54:16.646625 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-153&limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.647777 kubelet[2906]: I1213 01:54:16.647745 2906 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:16.648493 kubelet[2906]: I1213 01:54:16.648459 2906 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:16.649716 kubelet[2906]: W1213 01:54:16.649684 2906 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:54:16.650987 kubelet[2906]: I1213 01:54:16.650945 2906 server.go:1256] "Started kubelet" Dec 13 01:54:16.654194 kubelet[2906]: I1213 01:54:16.654148 2906 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:16.655586 kubelet[2906]: I1213 01:54:16.655508 2906 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:16.657631 kubelet[2906]: I1213 01:54:16.656855 2906 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:16.657631 kubelet[2906]: I1213 01:54:16.657270 2906 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:16.661126 kubelet[2906]: E1213 01:54:16.661085 2906 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.153:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.153:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-153.181099b467b7a6d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-153,UID:ip-172-31-19-153,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-153,},FirstTimestamp:2024-12-13 01:54:16.650893015 +0000 UTC m=+1.140757603,LastTimestamp:2024-12-13 01:54:16.650893015 +0000 UTC m=+1.140757603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-153,}" Dec 13 01:54:16.661398 kubelet[2906]: I1213 01:54:16.661326 2906 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:16.671496 kubelet[2906]: I1213 01:54:16.671434 2906 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:16.672053 kubelet[2906]: I1213 01:54:16.672004 2906 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:16.672175 kubelet[2906]: I1213 01:54:16.672129 2906 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:16.672323 kubelet[2906]: E1213 01:54:16.672286 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-153?timeout=10s\": dial tcp 172.31.19.153:6443: connect: connection refused" interval="200ms" Dec 13 01:54:16.673900 kubelet[2906]: W1213 01:54:16.673190 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://172.31.19.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.674124 kubelet[2906]: E1213 01:54:16.674099 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.674709 kubelet[2906]: E1213 01:54:16.674389 2906 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:16.677218 kubelet[2906]: I1213 01:54:16.677166 2906 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:16.677468 kubelet[2906]: I1213 01:54:16.677391 2906 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:16.677824 kubelet[2906]: I1213 01:54:16.677785 2906 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:16.693627 kubelet[2906]: I1213 01:54:16.692699 2906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:16.694985 kubelet[2906]: I1213 01:54:16.694933 2906 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:54:16.694985 kubelet[2906]: I1213 01:54:16.694980 2906 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:16.695164 kubelet[2906]: I1213 01:54:16.695012 2906 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:16.695164 kubelet[2906]: E1213 01:54:16.695087 2906 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:16.708284 kubelet[2906]: W1213 01:54:16.708146 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.708284 kubelet[2906]: E1213 01:54:16.708231 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.19.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:16.731352 kubelet[2906]: I1213 01:54:16.731303 2906 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:16.731935 kubelet[2906]: I1213 01:54:16.731842 2906 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:16.732066 kubelet[2906]: I1213 01:54:16.731937 2906 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:16.735275 kubelet[2906]: I1213 01:54:16.735220 2906 policy_none.go:49] "None policy: Start" Dec 13 01:54:16.736472 kubelet[2906]: I1213 01:54:16.736408 2906 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:16.736605 kubelet[2906]: I1213 01:54:16.736486 2906 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:16.749033 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:54:16.763164 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:54:16.770766 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:54:16.774696 kubelet[2906]: I1213 01:54:16.774661 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:16.775384 kubelet[2906]: E1213 01:54:16.775331 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.153:6443/api/v1/nodes\": dial tcp 172.31.19.153:6443: connect: connection refused" node="ip-172-31-19-153" Dec 13 01:54:16.781405 kubelet[2906]: I1213 01:54:16.781336 2906 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:16.781918 kubelet[2906]: I1213 01:54:16.781795 2906 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:16.785730 kubelet[2906]: E1213 01:54:16.785609 2906 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-153\" not found" Dec 13 01:54:16.795749 kubelet[2906]: I1213 01:54:16.795693 2906 topology_manager.go:215] "Topology Admit Handler" podUID="ea7af7f4593e3214273eef1e05528015" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-153" Dec 13 01:54:16.798785 kubelet[2906]: I1213 01:54:16.798402 2906 topology_manager.go:215] "Topology Admit Handler" podUID="4fb4c9d28ec1403b469f2c612e1371cb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.800633 kubelet[2906]: I1213 01:54:16.800598 2906 topology_manager.go:215] "Topology Admit Handler" podUID="6f1b371e1db191ddecd735ce67fc3bfb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-153" Dec 13 01:54:16.814467 systemd[1]: Created slice kubepods-burstable-podea7af7f4593e3214273eef1e05528015.slice - libcontainer container kubepods-burstable-podea7af7f4593e3214273eef1e05528015.slice. Dec 13 01:54:16.834873 systemd[1]: Created slice kubepods-burstable-pod4fb4c9d28ec1403b469f2c612e1371cb.slice - libcontainer container kubepods-burstable-pod4fb4c9d28ec1403b469f2c612e1371cb.slice. Dec 13 01:54:16.850491 systemd[1]: Created slice kubepods-burstable-pod6f1b371e1db191ddecd735ce67fc3bfb.slice - libcontainer container kubepods-burstable-pod6f1b371e1db191ddecd735ce67fc3bfb.slice. 
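The slice names systemd reports here follow from the nodeConfig logged earlier (CgroupDriver "systemd", CgroupRoot "/"): one parent slice per QoS class under kubepods.slice, then one nested slice per pod whose name embeds the pod UID, with dashes mapped to underscores so the UID survives systemd unit-name escaping (visible later with kube-proxy's UID e3c84a35-b47a-4e4c-af78-8d40db3c61d0). The helper below is inferred from the unit names in this log, not copied from kubelet; the guaranteed-QoS case (no QoS segment) is an assumption, since no guaranteed pod appears in this section.

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming pattern in the systemd entries above.
// Inferred from the log, not from kubelet source.
func podSliceName(qos, uid string) string {
	uid = strings.ReplaceAll(uid, "-", "_") // systemd-safe UID
	if qos == "" {
		// Assumption: guaranteed pods sit directly under kubepods.slice.
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, uid)
}

func main() {
	// UIDs taken verbatim from the log.
	fmt.Println(podSliceName("burstable", "ea7af7f4593e3214273eef1e05528015"))
	fmt.Println(podSliceName("besteffort", "e3c84a35-b47a-4e4c-af78-8d40db3c61d0"))
}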
Dec 13 01:54:16.873446 kubelet[2906]: E1213 01:54:16.873382 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-153?timeout=10s\": dial tcp 172.31.19.153:6443: connect: connection refused" interval="400ms" Dec 13 01:54:16.873646 kubelet[2906]: I1213 01:54:16.873602 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f1b371e1db191ddecd735ce67fc3bfb-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-153\" (UID: \"6f1b371e1db191ddecd735ce67fc3bfb\") " pod="kube-system/kube-scheduler-ip-172-31-19-153" Dec 13 01:54:16.873787 kubelet[2906]: I1213 01:54:16.873668 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.873787 kubelet[2906]: I1213 01:54:16.873715 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.873787 kubelet[2906]: I1213 01:54:16.873766 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:16.874194 kubelet[2906]: I1213 01:54:16.873809 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.874194 kubelet[2906]: I1213 01:54:16.873852 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.874194 kubelet[2906]: I1213 01:54:16.873921 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:16.874194 kubelet[2906]: I1213 01:54:16.873970 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-ca-certs\") pod 
\"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:16.874194 kubelet[2906]: I1213 01:54:16.874014 2906 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:16.977689 kubelet[2906]: I1213 01:54:16.977557 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:16.978108 kubelet[2906]: E1213 01:54:16.978077 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.153:6443/api/v1/nodes\": dial tcp 172.31.19.153:6443: connect: connection refused" node="ip-172-31-19-153" Dec 13 01:54:17.130968 containerd[2034]: time="2024-12-13T01:54:17.129962117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-153,Uid:ea7af7f4593e3214273eef1e05528015,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:17.142527 containerd[2034]: time="2024-12-13T01:54:17.141975617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-153,Uid:4fb4c9d28ec1403b469f2c612e1371cb,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:17.155910 containerd[2034]: time="2024-12-13T01:54:17.155836985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-153,Uid:6f1b371e1db191ddecd735ce67fc3bfb,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:17.275185 kubelet[2906]: E1213 01:54:17.274761 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-153?timeout=10s\": dial tcp 172.31.19.153:6443: connect: connection refused" interval="800ms" Dec 13 01:54:17.380229 kubelet[2906]: I1213 01:54:17.380174 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:17.380771 kubelet[2906]: E1213 01:54:17.380726 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.153:6443/api/v1/nodes\": dial tcp 172.31.19.153:6443: connect: connection refused" node="ip-172-31-19-153" Dec 13 01:54:17.527033 kubelet[2906]: W1213 01:54:17.526934 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.19.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.527033 kubelet[2906]: E1213 01:54:17.527035 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.19.153:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.563898 kubelet[2906]: W1213 01:54:17.563818 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.19.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.564048 kubelet[2906]: E1213 01:54:17.563913 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list 
*v1.RuntimeClass: Get "https://172.31.19.153:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.591885 kubelet[2906]: W1213 01:54:17.591801 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.19.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-153&limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.592039 kubelet[2906]: E1213 01:54:17.591896 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.19.153:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-153&limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.707914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679377154.mount: Deactivated successfully. Dec 13 01:54:17.716561 containerd[2034]: time="2024-12-13T01:54:17.716475284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:17.722316 containerd[2034]: time="2024-12-13T01:54:17.722206832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:54:17.723462 containerd[2034]: time="2024-12-13T01:54:17.723420476Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:17.726512 containerd[2034]: time="2024-12-13T01:54:17.726456716Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:17.729603 containerd[2034]: time="2024-12-13T01:54:17.728827184Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:17.731447 containerd[2034]: time="2024-12-13T01:54:17.731379032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:54:17.731555 containerd[2034]: time="2024-12-13T01:54:17.731508212Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:17.737718 containerd[2034]: time="2024-12-13T01:54:17.737665316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:54:17.741419 containerd[2034]: time="2024-12-13T01:54:17.741372872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 599.291127ms" Dec 13 01:54:17.746358 containerd[2034]: time="2024-12-13T01:54:17.746280800Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 616.208655ms" Dec 13 01:54:17.760766 containerd[2034]: time="2024-12-13T01:54:17.760482464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.531203ms" Dec 13 01:54:17.774264 kubelet[2906]: W1213 01:54:17.774151 2906 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.19.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.774264 kubelet[2906]: E1213 01:54:17.774269 2906 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.19.153:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.153:6443: connect: connection refused Dec 13 01:54:17.963183 containerd[2034]: time="2024-12-13T01:54:17.962764641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:17.964361 containerd[2034]: time="2024-12-13T01:54:17.963937581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:17.964361 containerd[2034]: time="2024-12-13T01:54:17.962868573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:17.964361 containerd[2034]: time="2024-12-13T01:54:17.963787737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:17.965172 containerd[2034]: time="2024-12-13T01:54:17.964897905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:17.966014 containerd[2034]: time="2024-12-13T01:54:17.965846061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:17.967006 containerd[2034]: time="2024-12-13T01:54:17.966908733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:17.968017 containerd[2034]: time="2024-12-13T01:54:17.967882833Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:17.968183 containerd[2034]: time="2024-12-13T01:54:17.968036361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:17.968676 containerd[2034]: time="2024-12-13T01:54:17.968193465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:17.968676 containerd[2034]: time="2024-12-13T01:54:17.968409741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:17.970026 containerd[2034]: time="2024-12-13T01:54:17.969119685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:18.020413 systemd[1]: Started cri-containerd-67eeec6f0c7db2ad9ef0ee9328ae1fc77db210c706149c2ef65dd0d3a0229b39.scope - libcontainer container 67eeec6f0c7db2ad9ef0ee9328ae1fc77db210c706149c2ef65dd0d3a0229b39. Dec 13 01:54:18.029830 systemd[1]: Started cri-containerd-894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e.scope - libcontainer container 894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e. Dec 13 01:54:18.043925 systemd[1]: Started cri-containerd-e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284.scope - libcontainer container e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284. Dec 13 01:54:18.077235 kubelet[2906]: E1213 01:54:18.077171 2906 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-153?timeout=10s\": dial tcp 172.31.19.153:6443: connect: connection refused" interval="1.6s" Dec 13 01:54:18.122905 containerd[2034]: time="2024-12-13T01:54:18.122852346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-153,Uid:ea7af7f4593e3214273eef1e05528015,Namespace:kube-system,Attempt:0,} returns sandbox id \"67eeec6f0c7db2ad9ef0ee9328ae1fc77db210c706149c2ef65dd0d3a0229b39\"" Dec 13 01:54:18.141281 containerd[2034]: time="2024-12-13T01:54:18.141108114Z" level=info msg="CreateContainer within sandbox \"67eeec6f0c7db2ad9ef0ee9328ae1fc77db210c706149c2ef65dd0d3a0229b39\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:54:18.169121 containerd[2034]: time="2024-12-13T01:54:18.168422574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-153,Uid:6f1b371e1db191ddecd735ce67fc3bfb,Namespace:kube-system,Attempt:0,} returns sandbox id \"e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284\"" Dec 13 01:54:18.175498 containerd[2034]: time="2024-12-13T01:54:18.175187958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-153,Uid:4fb4c9d28ec1403b469f2c612e1371cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e\"" Dec 13 01:54:18.177610 containerd[2034]: time="2024-12-13T01:54:18.177397302Z" level=info msg="CreateContainer within sandbox \"e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:54:18.184385 kubelet[2906]: I1213 01:54:18.183825 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:18.184385 kubelet[2906]: E1213 01:54:18.184324 2906 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.19.153:6443/api/v1/nodes\": dial tcp 172.31.19.153:6443: connect: connection refused" node="ip-172-31-19-153" Dec 13 01:54:18.187566 containerd[2034]: time="2024-12-13T01:54:18.187513482Z" level=info msg="CreateContainer within sandbox 
\"894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:54:18.195100 containerd[2034]: time="2024-12-13T01:54:18.195042354Z" level=info msg="CreateContainer within sandbox \"67eeec6f0c7db2ad9ef0ee9328ae1fc77db210c706149c2ef65dd0d3a0229b39\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a05eec66a7f2563348f4a436b4483af1eecb28d84a02f308735898c099a1e384\"" Dec 13 01:54:18.198612 containerd[2034]: time="2024-12-13T01:54:18.196646334Z" level=info msg="StartContainer for \"a05eec66a7f2563348f4a436b4483af1eecb28d84a02f308735898c099a1e384\"" Dec 13 01:54:18.217363 containerd[2034]: time="2024-12-13T01:54:18.217084854Z" level=info msg="CreateContainer within sandbox \"e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80\"" Dec 13 01:54:18.221690 containerd[2034]: time="2024-12-13T01:54:18.220667694Z" level=info msg="StartContainer for \"d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80\"" Dec 13 01:54:18.244622 containerd[2034]: time="2024-12-13T01:54:18.244353870Z" level=info msg="CreateContainer within sandbox \"894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c\"" Dec 13 01:54:18.249255 containerd[2034]: time="2024-12-13T01:54:18.248381178Z" level=info msg="StartContainer for \"231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c\"" Dec 13 01:54:18.275486 systemd[1]: Started cri-containerd-a05eec66a7f2563348f4a436b4483af1eecb28d84a02f308735898c099a1e384.scope - libcontainer container a05eec66a7f2563348f4a436b4483af1eecb28d84a02f308735898c099a1e384. Dec 13 01:54:18.298947 systemd[1]: Started cri-containerd-d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80.scope - libcontainer container d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80. Dec 13 01:54:18.338868 systemd[1]: Started cri-containerd-231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c.scope - libcontainer container 231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c. 
Dec 13 01:54:18.416312 containerd[2034]: time="2024-12-13T01:54:18.416160895Z" level=info msg="StartContainer for \"a05eec66a7f2563348f4a436b4483af1eecb28d84a02f308735898c099a1e384\" returns successfully" Dec 13 01:54:18.447737 containerd[2034]: time="2024-12-13T01:54:18.445772323Z" level=info msg="StartContainer for \"d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80\" returns successfully" Dec 13 01:54:18.467048 containerd[2034]: time="2024-12-13T01:54:18.466961540Z" level=info msg="StartContainer for \"231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c\" returns successfully" Dec 13 01:54:19.787248 kubelet[2906]: I1213 01:54:19.787184 2906 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:22.642626 kubelet[2906]: E1213 01:54:22.642536 2906 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-19-153\" not found" node="ip-172-31-19-153" Dec 13 01:54:22.645022 kubelet[2906]: I1213 01:54:22.644739 2906 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-153" Dec 13 01:54:22.650735 kubelet[2906]: I1213 01:54:22.650438 2906 apiserver.go:52] "Watching apiserver" Dec 13 01:54:22.672801 kubelet[2906]: I1213 01:54:22.672754 2906 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:26.224258 systemd[1]: Reloading requested from client PID 3178 ('systemctl') (unit session-7.scope)... Dec 13 01:54:26.224285 systemd[1]: Reloading... Dec 13 01:54:26.430620 zram_generator::config[3227]: No configuration found. Dec 13 01:54:26.663670 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:54:26.884234 systemd[1]: Reloading finished in 659 ms. Dec 13 01:54:26.971086 kubelet[2906]: I1213 01:54:26.971031 2906 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:26.972039 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:26.987172 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:54:26.987619 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:26.987695 systemd[1]: kubelet.service: Consumed 1.884s CPU time, 112.0M memory peak, 0B memory swap peak. Dec 13 01:54:27.001279 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:54:27.410917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:54:27.423139 (kubelet)[3281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:54:27.533006 kubelet[3281]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:27.533523 kubelet[3281]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:54:27.533962 kubelet[3281]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:54:27.534381 kubelet[3281]: I1213 01:54:27.534282 3281 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:54:27.544708 kubelet[3281]: I1213 01:54:27.544542 3281 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:54:27.545056 kubelet[3281]: I1213 01:54:27.544927 3281 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:54:27.546618 kubelet[3281]: I1213 01:54:27.545469 3281 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:54:27.549202 kubelet[3281]: I1213 01:54:27.549165 3281 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:54:27.553771 kubelet[3281]: I1213 01:54:27.553725 3281 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:54:27.567854 kubelet[3281]: I1213 01:54:27.567805 3281 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:54:27.569208 kubelet[3281]: I1213 01:54:27.569177 3281 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:54:27.569682 kubelet[3281]: I1213 01:54:27.569648 3281 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:54:27.569915 kubelet[3281]: I1213 01:54:27.569893 3281 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:54:27.570024 kubelet[3281]: I1213 01:54:27.570004 3281 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:54:27.570192 kubelet[3281]: I1213 01:54:27.570169 3281 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:27.570515 kubelet[3281]: I1213 01:54:27.570465 3281 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:54:27.571387 kubelet[3281]: I1213 01:54:27.571216 3281 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:54:27.571387 
kubelet[3281]: I1213 01:54:27.571280 3281 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:54:27.571387 kubelet[3281]: I1213 01:54:27.571304 3281 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:54:27.574653 kubelet[3281]: I1213 01:54:27.574000 3281 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:54:27.574653 kubelet[3281]: I1213 01:54:27.574332 3281 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:54:27.578518 kubelet[3281]: I1213 01:54:27.578475 3281 server.go:1256] "Started kubelet" Dec 13 01:54:27.594712 kubelet[3281]: I1213 01:54:27.594445 3281 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:54:27.617846 kubelet[3281]: I1213 01:54:27.617807 3281 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:54:27.622609 kubelet[3281]: I1213 01:54:27.620889 3281 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:54:27.629686 kubelet[3281]: I1213 01:54:27.626705 3281 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:54:27.629686 kubelet[3281]: I1213 01:54:27.627053 3281 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:54:27.630152 kubelet[3281]: I1213 01:54:27.630108 3281 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:54:27.635058 kubelet[3281]: I1213 01:54:27.635019 3281 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:54:27.635493 kubelet[3281]: I1213 01:54:27.635472 3281 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:54:27.647036 kubelet[3281]: I1213 01:54:27.646979 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:54:27.654857 kubelet[3281]: I1213 01:54:27.654818 3281 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:54:27.655060 kubelet[3281]: I1213 01:54:27.655041 3281 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:54:27.655176 kubelet[3281]: I1213 01:54:27.655158 3281 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:54:27.655343 kubelet[3281]: E1213 01:54:27.655322 3281 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:54:27.695520 kubelet[3281]: I1213 01:54:27.695397 3281 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:54:27.695707 kubelet[3281]: I1213 01:54:27.695687 3281 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:54:27.695948 kubelet[3281]: I1213 01:54:27.695909 3281 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:54:27.712056 kubelet[3281]: E1213 01:54:27.711972 3281 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:54:27.742996 kubelet[3281]: I1213 01:54:27.742944 3281 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-19-153" Dec 13 01:54:27.756531 kubelet[3281]: E1213 01:54:27.756002 3281 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:54:27.763867 kubelet[3281]: I1213 01:54:27.761897 3281 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-19-153" Dec 13 01:54:27.764516 kubelet[3281]: I1213 01:54:27.764148 3281 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-19-153" Dec 13 01:54:27.839729 kubelet[3281]: I1213 01:54:27.839673 3281 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:54:27.839729 kubelet[3281]: I1213 01:54:27.839714 3281 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:54:27.840836 kubelet[3281]: I1213 01:54:27.839748 3281 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:54:27.840836 kubelet[3281]: I1213 01:54:27.840002 3281 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:54:27.840836 kubelet[3281]: I1213 01:54:27.840043 3281 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:54:27.840836 kubelet[3281]: I1213 01:54:27.840062 3281 policy_none.go:49] "None policy: Start" Dec 13 01:54:27.843699 kubelet[3281]: I1213 01:54:27.843347 3281 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:54:27.843699 kubelet[3281]: I1213 01:54:27.843402 3281 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:54:27.844600 kubelet[3281]: I1213 01:54:27.843943 3281 state_mem.go:75] "Updated machine memory state" Dec 13 01:54:27.854629 kubelet[3281]: I1213 01:54:27.854508 3281 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:54:27.859702 kubelet[3281]: I1213 01:54:27.858913 3281 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:54:27.957085 kubelet[3281]: I1213 01:54:27.956948 3281 topology_manager.go:215] "Topology Admit Handler" podUID="ea7af7f4593e3214273eef1e05528015" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-19-153" Dec 13 01:54:27.958614 kubelet[3281]: I1213 01:54:27.957956 3281 topology_manager.go:215] "Topology Admit Handler" podUID="4fb4c9d28ec1403b469f2c612e1371cb" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:27.958750 kubelet[3281]: I1213 01:54:27.958725 3281 topology_manager.go:215] "Topology Admit Handler" podUID="6f1b371e1db191ddecd735ce67fc3bfb" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-19-153" Dec 13 01:54:28.039024 kubelet[3281]: I1213 01:54:28.038970 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:28.039186 kubelet[3281]: I1213 01:54:28.039064 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " 
pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:28.039186 kubelet[3281]: I1213 01:54:28.039118 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:28.039186 kubelet[3281]: I1213 01:54:28.039162 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f1b371e1db191ddecd735ce67fc3bfb-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-153\" (UID: \"6f1b371e1db191ddecd735ce67fc3bfb\") " pod="kube-system/kube-scheduler-ip-172-31-19-153" Dec 13 01:54:28.039353 kubelet[3281]: I1213 01:54:28.039205 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-ca-certs\") pod \"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:28.039353 kubelet[3281]: I1213 01:54:28.039249 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ea7af7f4593e3214273eef1e05528015-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-153\" (UID: \"ea7af7f4593e3214273eef1e05528015\") " pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:28.039353 kubelet[3281]: I1213 01:54:28.039291 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:28.039353 kubelet[3281]: I1213 01:54:28.039334 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:28.039604 kubelet[3281]: I1213 01:54:28.039380 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4fb4c9d28ec1403b469f2c612e1371cb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-153\" (UID: \"4fb4c9d28ec1403b469f2c612e1371cb\") " pod="kube-system/kube-controller-manager-ip-172-31-19-153" Dec 13 01:54:28.594945 kubelet[3281]: I1213 01:54:28.594604 3281 apiserver.go:52] "Watching apiserver" Dec 13 01:54:28.635904 kubelet[3281]: I1213 01:54:28.635824 3281 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:54:28.807824 kubelet[3281]: E1213 01:54:28.805352 3281 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-19-153\" already exists" pod="kube-system/kube-apiserver-ip-172-31-19-153" Dec 13 01:54:28.955647 kubelet[3281]: I1213 01:54:28.955470 3281 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-153" podStartSLOduration=1.955374032 podStartE2EDuration="1.955374032s" podCreationTimestamp="2024-12-13 01:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:28.896874991 +0000 UTC m=+1.461464384" watchObservedRunningTime="2024-12-13 01:54:28.955374032 +0000 UTC m=+1.519963449" Dec 13 01:54:28.988671 kubelet[3281]: I1213 01:54:28.987828 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-153" podStartSLOduration=1.9877688359999999 podStartE2EDuration="1.987768836s" podCreationTimestamp="2024-12-13 01:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:28.957841904 +0000 UTC m=+1.522431297" watchObservedRunningTime="2024-12-13 01:54:28.987768836 +0000 UTC m=+1.552358229" Dec 13 01:54:29.016780 kubelet[3281]: I1213 01:54:29.016709 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-153" podStartSLOduration=2.016629292 podStartE2EDuration="2.016629292s" podCreationTimestamp="2024-12-13 01:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:28.98848214 +0000 UTC m=+1.553071533" watchObservedRunningTime="2024-12-13 01:54:29.016629292 +0000 UTC m=+1.581218709" Dec 13 01:54:29.407733 update_engine[2008]: I20241213 01:54:29.407632 2008 update_attempter.cc:509] Updating boot flags... Dec 13 01:54:29.597008 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3332) Dec 13 01:54:30.083909 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3334) Dec 13 01:54:30.639631 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 46 scanned by (udev-worker) (3334) Dec 13 01:54:33.419015 sudo[2347]: pam_unix(sudo:session): session closed for user root Dec 13 01:54:33.442730 sshd[2344]: pam_unix(sshd:session): session closed for user core Dec 13 01:54:33.449744 systemd[1]: sshd@6-172.31.19.153:22-139.178.68.195:39178.service: Deactivated successfully. Dec 13 01:54:33.453413 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:54:33.453795 systemd[1]: session-7.scope: Consumed 9.071s CPU time, 186.4M memory peak, 0B memory swap peak. Dec 13 01:54:33.456387 systemd-logind[2005]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:54:33.458754 systemd-logind[2005]: Removed session 7. Dec 13 01:54:38.953302 kubelet[3281]: I1213 01:54:38.952162 3281 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:54:38.956841 kubelet[3281]: I1213 01:54:38.955480 3281 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:54:38.956917 containerd[2034]: time="2024-12-13T01:54:38.952820669Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
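The m=+ suffixes in these timestamps are monotonic offsets since the kubelet process started, and each podStartSLOduration is the difference between observedRunningTime and podCreationTimestamp; the zero-valued firstStartedPulling/lastFinishedPulling fields indicate no image pull was recorded for these static pods. The arithmetic can be checked directly from the kube-apiserver entry above; the snippet below redoes it with values copied from the log.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied verbatim from the kube-apiserver startup entry.
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2024-12-13 01:54:27 +0000 UTC")
	observed, _ := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2024-12-13 01:54:28.955374032 +0000 UTC")
	// Prints 1.955374032s, matching podStartSLOduration in the log.
	fmt.Println(observed.Sub(created))
}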
Dec 13 01:54:39.941636 kubelet[3281]: I1213 01:54:39.941514 3281 topology_manager.go:215] "Topology Admit Handler" podUID="f20eecb5-2c5b-4575-9deb-1cf55630fa6f" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-s7rx6" Dec 13 01:54:39.950085 kubelet[3281]: I1213 01:54:39.949816 3281 topology_manager.go:215] "Topology Admit Handler" podUID="e3c84a35-b47a-4e4c-af78-8d40db3c61d0" podNamespace="kube-system" podName="kube-proxy-f7nx4" Dec 13 01:54:39.961708 kubelet[3281]: W1213 01:54:39.961649 3281 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:39.962217 kubelet[3281]: E1213 01:54:39.961715 3281 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:39.962217 kubelet[3281]: W1213 01:54:39.961795 3281 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:39.962217 kubelet[3281]: E1213 01:54:39.961820 3281 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:39.966721 systemd[1]: Created slice kubepods-besteffort-podf20eecb5_2c5b_4575_9deb_1cf55630fa6f.slice - libcontainer container kubepods-besteffort-podf20eecb5_2c5b_4575_9deb_1cf55630fa6f.slice. Dec 13 01:54:39.989461 systemd[1]: Created slice kubepods-besteffort-pode3c84a35_b47a_4e4c_af78_8d40db3c61d0.slice - libcontainer container kubepods-besteffort-pode3c84a35_b47a_4e4c_af78_8d40db3c61d0.slice. 
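The "forbidden ... no relationship found between node 'ip-172-31-19-153' and this object" failures are the node authorizer at work: a credential like system:node:ip-172-31-19-153 may read a configmap only once the authorizer can trace a path from the node to a pod bound to it that references the object, and kube-proxy-f7nx4 has only just been admitted. The toy model below illustrates that relationship check; the types are invented for illustration, and this is a conceptual sketch, not the real authorizer.

package main

import "fmt"

type pod struct {
	node       string
	configMaps []string
}

// nodeCanGetConfigMap models the relationship check behind the errors
// above: the node may read a configmap only if some pod bound to that
// node references it.
func nodeCanGetConfigMap(pods []pod, node, cm string) bool {
	for _, p := range pods {
		if p.node != node {
			continue
		}
		for _, ref := range p.configMaps {
			if ref == cm {
				return true
			}
		}
	}
	return false
}

func main() {
	// Before kube-proxy-f7nx4 is visible as bound to the node, the
	// lookup fails; once it is, the same request succeeds.
	var pods []pod
	fmt.Println(nodeCanGetConfigMap(pods, "ip-172-31-19-153", "kube-proxy")) // false
	pods = append(pods, pod{
		node:       "ip-172-31-19-153",
		configMaps: []string{"kube-proxy", "kube-root-ca.crt"},
	})
	fmt.Println(nodeCanGetConfigMap(pods, "ip-172-31-19-153", "kube-proxy")) // true
}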
Dec 13 01:54:40.033450 kubelet[3281]: I1213 01:54:40.033325 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkvnl\" (UniqueName: \"kubernetes.io/projected/f20eecb5-2c5b-4575-9deb-1cf55630fa6f-kube-api-access-kkvnl\") pod \"tigera-operator-c7ccbd65-s7rx6\" (UID: \"f20eecb5-2c5b-4575-9deb-1cf55630fa6f\") " pod="tigera-operator/tigera-operator-c7ccbd65-s7rx6" Dec 13 01:54:40.033450 kubelet[3281]: I1213 01:54:40.033411 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-xtables-lock\") pod \"kube-proxy-f7nx4\" (UID: \"e3c84a35-b47a-4e4c-af78-8d40db3c61d0\") " pod="kube-system/kube-proxy-f7nx4" Dec 13 01:54:40.033450 kubelet[3281]: I1213 01:54:40.033457 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-lib-modules\") pod \"kube-proxy-f7nx4\" (UID: \"e3c84a35-b47a-4e4c-af78-8d40db3c61d0\") " pod="kube-system/kube-proxy-f7nx4" Dec 13 01:54:40.033834 kubelet[3281]: I1213 01:54:40.033504 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtlrd\" (UniqueName: \"kubernetes.io/projected/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-api-access-gtlrd\") pod \"kube-proxy-f7nx4\" (UID: \"e3c84a35-b47a-4e4c-af78-8d40db3c61d0\") " pod="kube-system/kube-proxy-f7nx4" Dec 13 01:54:40.033834 kubelet[3281]: I1213 01:54:40.033555 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f20eecb5-2c5b-4575-9deb-1cf55630fa6f-var-lib-calico\") pod \"tigera-operator-c7ccbd65-s7rx6\" (UID: \"f20eecb5-2c5b-4575-9deb-1cf55630fa6f\") " pod="tigera-operator/tigera-operator-c7ccbd65-s7rx6" Dec 13 01:54:40.034189 kubelet[3281]: I1213 01:54:40.034069 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-proxy\") pod \"kube-proxy-f7nx4\" (UID: \"e3c84a35-b47a-4e4c-af78-8d40db3c61d0\") " pod="kube-system/kube-proxy-f7nx4" Dec 13 01:54:40.285210 containerd[2034]: time="2024-12-13T01:54:40.285056416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-s7rx6,Uid:f20eecb5-2c5b-4575-9deb-1cf55630fa6f,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:54:40.344681 containerd[2034]: time="2024-12-13T01:54:40.344365780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:40.344681 containerd[2034]: time="2024-12-13T01:54:40.344469868Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:40.345129 containerd[2034]: time="2024-12-13T01:54:40.344551840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:40.345930 containerd[2034]: time="2024-12-13T01:54:40.345731404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:40.384888 systemd[1]: Started cri-containerd-dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9.scope - libcontainer container dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9. Dec 13 01:54:40.443968 containerd[2034]: time="2024-12-13T01:54:40.443830781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-s7rx6,Uid:f20eecb5-2c5b-4575-9deb-1cf55630fa6f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9\"" Dec 13 01:54:40.448392 containerd[2034]: time="2024-12-13T01:54:40.448095089Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:54:41.137621 kubelet[3281]: E1213 01:54:41.137541 3281 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:41.138184 kubelet[3281]: E1213 01:54:41.137702 3281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-proxy podName:e3c84a35-b47a-4e4c-af78-8d40db3c61d0 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:41.637666368 +0000 UTC m=+14.202255761 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-proxy") pod "kube-proxy-f7nx4" (UID: "e3c84a35-b47a-4e4c-af78-8d40db3c61d0") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:41.151987 kubelet[3281]: E1213 01:54:41.151928 3281 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:41.151987 kubelet[3281]: E1213 01:54:41.151985 3281 projected.go:200] Error preparing data for projected volume kube-api-access-gtlrd for pod kube-system/kube-proxy-f7nx4: failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:41.152227 kubelet[3281]: E1213 01:54:41.152086 3281 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-api-access-gtlrd podName:e3c84a35-b47a-4e4c-af78-8d40db3c61d0 nodeName:}" failed. No retries permitted until 2024-12-13 01:54:41.652056228 +0000 UTC m=+14.216645609 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gtlrd" (UniqueName: "kubernetes.io/projected/e3c84a35-b47a-4e4c-af78-8d40db3c61d0-kube-api-access-gtlrd") pod "kube-proxy-f7nx4" (UID: "e3c84a35-b47a-4e4c-af78-8d40db3c61d0") : failed to sync configmap cache: timed out waiting for the condition Dec 13 01:54:41.796703 containerd[2034]: time="2024-12-13T01:54:41.796410487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f7nx4,Uid:e3c84a35-b47a-4e4c-af78-8d40db3c61d0,Namespace:kube-system,Attempt:0,}" Dec 13 01:54:41.837011 containerd[2034]: time="2024-12-13T01:54:41.836507288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:41.837951 containerd[2034]: time="2024-12-13T01:54:41.837540092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:41.837951 containerd[2034]: time="2024-12-13T01:54:41.837612776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:41.837951 containerd[2034]: time="2024-12-13T01:54:41.837787772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:41.878918 systemd[1]: Started cri-containerd-f83f8048e99753869a63a11064a79f56885893648c4d944368fbf79cad2b7ce7.scope - libcontainer container f83f8048e99753869a63a11064a79f56885893648c4d944368fbf79cad2b7ce7. Dec 13 01:54:41.923130 containerd[2034]: time="2024-12-13T01:54:41.923072480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f7nx4,Uid:e3c84a35-b47a-4e4c-af78-8d40db3c61d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f83f8048e99753869a63a11064a79f56885893648c4d944368fbf79cad2b7ce7\"" Dec 13 01:54:41.929606 containerd[2034]: time="2024-12-13T01:54:41.929301440Z" level=info msg="CreateContainer within sandbox \"f83f8048e99753869a63a11064a79f56885893648c4d944368fbf79cad2b7ce7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:54:41.967340 containerd[2034]: time="2024-12-13T01:54:41.967207328Z" level=info msg="CreateContainer within sandbox \"f83f8048e99753869a63a11064a79f56885893648c4d944368fbf79cad2b7ce7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"150f4f0da8ae9464c8d1e1f88564a3f9ac5321cb555a7dd070727a8cb3d865c0\"" Dec 13 01:54:41.968956 containerd[2034]: time="2024-12-13T01:54:41.968893940Z" level=info msg="StartContainer for \"150f4f0da8ae9464c8d1e1f88564a3f9ac5321cb555a7dd070727a8cb3d865c0\"" Dec 13 01:54:42.029903 systemd[1]: Started cri-containerd-150f4f0da8ae9464c8d1e1f88564a3f9ac5321cb555a7dd070727a8cb3d865c0.scope - libcontainer container 150f4f0da8ae9464c8d1e1f88564a3f9ac5321cb555a7dd070727a8cb3d865c0. Dec 13 01:54:42.101675 containerd[2034]: time="2024-12-13T01:54:42.100554677Z" level=info msg="StartContainer for \"150f4f0da8ae9464c8d1e1f88564a3f9ac5321cb555a7dd070727a8cb3d865c0\" returns successfully" Dec 13 01:54:42.755424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount871900887.mount: Deactivated successfully. 
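[Editor's note: the MountVolume.SetUp failures at 01:54:41.137 above are re-queued rather than retried immediately; the kubelet records "No retries permitted until 2024-12-13 01:54:41.637666368 ... (durationBeforeRetry 500ms)", i.e. exactly failure time + 500ms, and the mount succeeds once the configmap cache has synced. A minimal sketch of that pacing, assuming exponential doubling with a cap — the 500ms initial delay is from the log, the doubling factor and ~2m cap are assumptions, not values read from this log:]

```go
// Retry pacing sketch for the MountVolume.SetUp failures above.
package main

import (
	"fmt"
	"time"
)

const (
	initialDurationBeforeRetry = 500 * time.Millisecond        // as logged: "durationBeforeRetry 500ms"
	maxDurationBeforeRetry     = 2*time.Minute + 2*time.Second // assumed cap, for illustration only
)

// nextRetryDelay doubles the previous backoff, saturating at the cap.
func nextRetryDelay(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialDurationBeforeRetry
	}
	if next := 2 * prev; next < maxDurationBeforeRetry {
		return next
	}
	return maxDurationBeforeRetry
}

func main() {
	// Failure timestamp taken from the log line above.
	failedAt := time.Date(2024, time.December, 13, 1, 54, 41, 137666368, time.UTC)
	d := nextRetryDelay(0)
	// Prints 2024-12-13 01:54:41.637666368 +0000 UTC, matching the logged retry deadline.
	fmt.Println("no retries permitted until", failedAt.Add(d))
}
```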
Dec 13 01:54:43.325014 containerd[2034]: time="2024-12-13T01:54:43.324869119Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:43.326988 containerd[2034]: time="2024-12-13T01:54:43.326912215Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125976" Dec 13 01:54:43.328989 containerd[2034]: time="2024-12-13T01:54:43.328910107Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:43.335009 containerd[2034]: time="2024-12-13T01:54:43.334901275Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:43.336646 containerd[2034]: time="2024-12-13T01:54:43.336559459Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.888401634s" Dec 13 01:54:43.336646 containerd[2034]: time="2024-12-13T01:54:43.336642343Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:54:43.344004 containerd[2034]: time="2024-12-13T01:54:43.343925563Z" level=info msg="CreateContainer within sandbox \"dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:54:43.369759 containerd[2034]: time="2024-12-13T01:54:43.369704287Z" level=info msg="CreateContainer within sandbox \"dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979\"" Dec 13 01:54:43.370760 containerd[2034]: time="2024-12-13T01:54:43.370526719Z" level=info msg="StartContainer for \"2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979\"" Dec 13 01:54:43.433895 systemd[1]: Started cri-containerd-2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979.scope - libcontainer container 2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979. 
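[Editor's note: for scale, the pull above moved 19125976 bytes of quay.io/tigera/operator:v1.36.2 in 2.888401634s; a back-of-envelope check of the effective rate, pure arithmetic on the logged numbers, no containerd APIs involved:]

```go
package main

import "fmt"

func main() {
	const bytesRead = 19125976.0    // "active requests=0, bytes read=19125976"
	const pullSeconds = 2.888401634 // "in 2.888401634s"
	// ~6.31 MiB/s effective over the whole pull
	fmt.Printf("effective pull rate: %.2f MiB/s\n", bytesRead/pullSeconds/(1<<20))
}
```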
Dec 13 01:54:43.482835 containerd[2034]: time="2024-12-13T01:54:43.482435444Z" level=info msg="StartContainer for \"2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979\" returns successfully" Dec 13 01:54:43.810475 kubelet[3281]: I1213 01:54:43.810405 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f7nx4" podStartSLOduration=4.810344601 podStartE2EDuration="4.810344601s" podCreationTimestamp="2024-12-13 01:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:54:42.817860117 +0000 UTC m=+15.382449606" watchObservedRunningTime="2024-12-13 01:54:43.810344601 +0000 UTC m=+16.374933994" Dec 13 01:54:47.683769 kubelet[3281]: I1213 01:54:47.683714 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-s7rx6" podStartSLOduration=5.792833003 podStartE2EDuration="8.683640949s" podCreationTimestamp="2024-12-13 01:54:39 +0000 UTC" firstStartedPulling="2024-12-13 01:54:40.446856077 +0000 UTC m=+13.011445470" lastFinishedPulling="2024-12-13 01:54:43.337664035 +0000 UTC m=+15.902253416" observedRunningTime="2024-12-13 01:54:43.811340493 +0000 UTC m=+16.375929910" watchObservedRunningTime="2024-12-13 01:54:47.683640949 +0000 UTC m=+20.248230354" Dec 13 01:54:48.450202 kubelet[3281]: I1213 01:54:48.450143 3281 topology_manager.go:215] "Topology Admit Handler" podUID="de91cbcb-224c-4ab6-9b11-afafcf08b79a" podNamespace="calico-system" podName="calico-typha-64b6cfddbc-xw5bx" Dec 13 01:54:48.467806 systemd[1]: Created slice kubepods-besteffort-podde91cbcb_224c_4ab6_9b11_afafcf08b79a.slice - libcontainer container kubepods-besteffort-podde91cbcb_224c_4ab6_9b11_afafcf08b79a.slice. 
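[Editor's note: the two pod_startup_latency_tracker entries above are consistent with podStartSLOduration excluding image-pull time. For kube-proxy both pull timestamps are the zero time and SLO equals E2E (4.810344601s); for tigera-operator the gap between E2E (8.683640949s) and SLO (5.792833003s) is exactly lastFinishedPulling minus firstStartedPulling on the monotonic (m=+) clock. That relation is inferred from the logged fields; a quick check:]

```go
package main

import "fmt"

func main() {
	const e2e = 8.683640949        // podStartE2EDuration, seconds
	const firstPull = 13.011445470 // firstStartedPulling, m=+ seconds
	const lastPull = 15.902253416  // lastFinishedPulling, m=+ seconds
	// Prints 5.792833003s, matching the logged podStartSLOduration exactly.
	fmt.Printf("podStartSLOduration = %.9fs\n", e2e-(lastPull-firstPull))
}
```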
Dec 13 01:54:48.490840 kubelet[3281]: I1213 01:54:48.490778 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/de91cbcb-224c-4ab6-9b11-afafcf08b79a-typha-certs\") pod \"calico-typha-64b6cfddbc-xw5bx\" (UID: \"de91cbcb-224c-4ab6-9b11-afafcf08b79a\") " pod="calico-system/calico-typha-64b6cfddbc-xw5bx" Dec 13 01:54:48.491050 kubelet[3281]: I1213 01:54:48.491028 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/de91cbcb-224c-4ab6-9b11-afafcf08b79a-tigera-ca-bundle\") pod \"calico-typha-64b6cfddbc-xw5bx\" (UID: \"de91cbcb-224c-4ab6-9b11-afafcf08b79a\") " pod="calico-system/calico-typha-64b6cfddbc-xw5bx" Dec 13 01:54:48.491397 kubelet[3281]: I1213 01:54:48.491374 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrzx\" (UniqueName: \"kubernetes.io/projected/de91cbcb-224c-4ab6-9b11-afafcf08b79a-kube-api-access-dsrzx\") pod \"calico-typha-64b6cfddbc-xw5bx\" (UID: \"de91cbcb-224c-4ab6-9b11-afafcf08b79a\") " pod="calico-system/calico-typha-64b6cfddbc-xw5bx" Dec 13 01:54:48.765701 kubelet[3281]: I1213 01:54:48.765633 3281 topology_manager.go:215] "Topology Admit Handler" podUID="74533a73-514a-4f35-8d6a-c83bd99bd4e5" podNamespace="calico-system" podName="calico-node-j64x9" Dec 13 01:54:48.773922 kubelet[3281]: W1213 01:54:48.773356 3281 reflector.go:539] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:48.773922 kubelet[3281]: E1213 01:54:48.773411 3281 reflector.go:147] object-"calico-system"/"node-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:48.773922 kubelet[3281]: W1213 01:54:48.773554 3281 reflector.go:539] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:48.773922 kubelet[3281]: E1213 01:54:48.773625 3281 reflector.go:147] object-"calico-system"/"cni-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ip-172-31-19-153" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-153' and this object Dec 13 01:54:48.777399 containerd[2034]: time="2024-12-13T01:54:48.776847998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b6cfddbc-xw5bx,Uid:de91cbcb-224c-4ab6-9b11-afafcf08b79a,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:48.793829 kubelet[3281]: I1213 01:54:48.793760 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-lib-modules\") pod \"calico-node-j64x9\" 
(UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794007 kubelet[3281]: I1213 01:54:48.793847 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ptq9\" (UniqueName: \"kubernetes.io/projected/74533a73-514a-4f35-8d6a-c83bd99bd4e5-kube-api-access-2ptq9\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794007 kubelet[3281]: I1213 01:54:48.793898 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-var-lib-calico\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794007 kubelet[3281]: I1213 01:54:48.793941 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-cni-bin-dir\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794007 kubelet[3281]: I1213 01:54:48.793999 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-cni-log-dir\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794209 kubelet[3281]: I1213 01:54:48.794045 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/74533a73-514a-4f35-8d6a-c83bd99bd4e5-node-certs\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794209 kubelet[3281]: I1213 01:54:48.794092 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74533a73-514a-4f35-8d6a-c83bd99bd4e5-tigera-ca-bundle\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794209 kubelet[3281]: I1213 01:54:48.794137 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-var-run-calico\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794209 kubelet[3281]: I1213 01:54:48.794184 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-policysync\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794409 kubelet[3281]: I1213 01:54:48.794232 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-flexvol-driver-host\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " 
pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794409 kubelet[3281]: I1213 01:54:48.794276 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-xtables-lock\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.794409 kubelet[3281]: I1213 01:54:48.794320 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/74533a73-514a-4f35-8d6a-c83bd99bd4e5-cni-net-dir\") pod \"calico-node-j64x9\" (UID: \"74533a73-514a-4f35-8d6a-c83bd99bd4e5\") " pod="calico-system/calico-node-j64x9" Dec 13 01:54:48.795997 systemd[1]: Created slice kubepods-besteffort-pod74533a73_514a_4f35_8d6a_c83bd99bd4e5.slice - libcontainer container kubepods-besteffort-pod74533a73_514a_4f35_8d6a_c83bd99bd4e5.slice. Dec 13 01:54:48.854530 containerd[2034]: time="2024-12-13T01:54:48.854249079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:48.855069 containerd[2034]: time="2024-12-13T01:54:48.854636175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:48.855069 containerd[2034]: time="2024-12-13T01:54:48.854732319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:48.859219 containerd[2034]: time="2024-12-13T01:54:48.855706755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:48.935317 systemd[1]: Started cri-containerd-3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a.scope - libcontainer container 3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a. Dec 13 01:54:48.958956 kubelet[3281]: E1213 01:54:48.958903 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:48.958956 kubelet[3281]: W1213 01:54:48.958945 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:48.959128 kubelet[3281]: E1213 01:54:48.958986 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.001265 kubelet[3281]: E1213 01:54:49.001216 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.001600 kubelet[3281]: W1213 01:54:49.001442 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.001600 kubelet[3281]: E1213 01:54:49.001487 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.004664 kubelet[3281]: I1213 01:54:49.004465 3281 topology_manager.go:215] "Topology Admit Handler" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" podNamespace="calico-system" podName="csi-node-driver-d2z4r" Dec 13 01:54:49.010997 kubelet[3281]: E1213 01:54:49.006870 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:49.078243 kubelet[3281]: E1213 01:54:49.078082 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.078243 kubelet[3281]: W1213 01:54:49.078126 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.078243 kubelet[3281]: E1213 01:54:49.078167 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.080163 kubelet[3281]: E1213 01:54:49.080104 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.080163 kubelet[3281]: W1213 01:54:49.080149 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.081942 kubelet[3281]: E1213 01:54:49.080189 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.082474 kubelet[3281]: E1213 01:54:49.082429 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.082474 kubelet[3281]: W1213 01:54:49.082466 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.083023 kubelet[3281]: E1213 01:54:49.082505 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.083768 kubelet[3281]: E1213 01:54:49.083705 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.083768 kubelet[3281]: W1213 01:54:49.083745 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.083923 kubelet[3281]: E1213 01:54:49.083785 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.085176 kubelet[3281]: E1213 01:54:49.085124 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.085176 kubelet[3281]: W1213 01:54:49.085163 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.085620 kubelet[3281]: E1213 01:54:49.085201 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.085713 kubelet[3281]: E1213 01:54:49.085648 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.085713 kubelet[3281]: W1213 01:54:49.085669 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.085713 kubelet[3281]: E1213 01:54:49.085697 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.086809 kubelet[3281]: E1213 01:54:49.086756 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.086809 kubelet[3281]: W1213 01:54:49.086796 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.087029 kubelet[3281]: E1213 01:54:49.086833 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.088195 kubelet[3281]: E1213 01:54:49.088142 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.088195 kubelet[3281]: W1213 01:54:49.088182 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.088417 kubelet[3281]: E1213 01:54:49.088227 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.090366 kubelet[3281]: E1213 01:54:49.090232 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.090968 kubelet[3281]: W1213 01:54:49.090899 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.090968 kubelet[3281]: E1213 01:54:49.090970 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.091848 kubelet[3281]: E1213 01:54:49.091796 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.091848 kubelet[3281]: W1213 01:54:49.091834 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.091848 kubelet[3281]: E1213 01:54:49.091871 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.092293 kubelet[3281]: E1213 01:54:49.092253 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.092293 kubelet[3281]: W1213 01:54:49.092281 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.092427 kubelet[3281]: E1213 01:54:49.092310 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.092662 kubelet[3281]: E1213 01:54:49.092627 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.092662 kubelet[3281]: W1213 01:54:49.092654 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.092784 kubelet[3281]: E1213 01:54:49.092681 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.093716 kubelet[3281]: E1213 01:54:49.093006 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.093716 kubelet[3281]: W1213 01:54:49.093035 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.093716 kubelet[3281]: E1213 01:54:49.093063 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.093716 kubelet[3281]: E1213 01:54:49.093384 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.093716 kubelet[3281]: W1213 01:54:49.093410 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.093716 kubelet[3281]: E1213 01:54:49.093455 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.094084 kubelet[3281]: E1213 01:54:49.094042 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.094084 kubelet[3281]: W1213 01:54:49.094066 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.094180 kubelet[3281]: E1213 01:54:49.094098 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.094547 kubelet[3281]: E1213 01:54:49.094508 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.094547 kubelet[3281]: W1213 01:54:49.094537 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.094723 kubelet[3281]: E1213 01:54:49.094566 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.095207 kubelet[3281]: E1213 01:54:49.094919 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.095207 kubelet[3281]: W1213 01:54:49.094948 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.095207 kubelet[3281]: E1213 01:54:49.094977 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.096605 kubelet[3281]: E1213 01:54:49.095270 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.096605 kubelet[3281]: W1213 01:54:49.095287 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.096605 kubelet[3281]: E1213 01:54:49.095311 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.096605 kubelet[3281]: E1213 01:54:49.095626 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.096909 kubelet[3281]: W1213 01:54:49.096605 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.096909 kubelet[3281]: E1213 01:54:49.096659 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.097131 kubelet[3281]: E1213 01:54:49.097085 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.097131 kubelet[3281]: W1213 01:54:49.097126 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.097234 kubelet[3281]: E1213 01:54:49.097156 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.103083 kubelet[3281]: E1213 01:54:49.102741 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.103083 kubelet[3281]: W1213 01:54:49.102776 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.103083 kubelet[3281]: E1213 01:54:49.102811 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.103083 kubelet[3281]: I1213 01:54:49.102877 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9f62b98e-3864-4ed3-b68c-e8d11f28b312-registration-dir\") pod \"csi-node-driver-d2z4r\" (UID: \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\") " pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:54:49.103709 kubelet[3281]: E1213 01:54:49.103657 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.103881 kubelet[3281]: W1213 01:54:49.103829 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.104106 kubelet[3281]: E1213 01:54:49.104017 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.104335 kubelet[3281]: I1213 01:54:49.104272 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqmfv\" (UniqueName: \"kubernetes.io/projected/9f62b98e-3864-4ed3-b68c-e8d11f28b312-kube-api-access-nqmfv\") pod \"csi-node-driver-d2z4r\" (UID: \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\") " pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:54:49.104922 kubelet[3281]: E1213 01:54:49.104882 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.104922 kubelet[3281]: W1213 01:54:49.104918 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.105134 kubelet[3281]: E1213 01:54:49.104962 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.105915 kubelet[3281]: E1213 01:54:49.105872 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.105915 kubelet[3281]: W1213 01:54:49.105907 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.106387 kubelet[3281]: E1213 01:54:49.106216 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.106387 kubelet[3281]: E1213 01:54:49.106235 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.106387 kubelet[3281]: W1213 01:54:49.106250 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.107845 kubelet[3281]: E1213 01:54:49.106967 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.107845 kubelet[3281]: I1213 01:54:49.107045 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9f62b98e-3864-4ed3-b68c-e8d11f28b312-varrun\") pod \"csi-node-driver-d2z4r\" (UID: \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\") " pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:54:49.108074 kubelet[3281]: E1213 01:54:49.107928 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.108074 kubelet[3281]: W1213 01:54:49.107958 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.108074 kubelet[3281]: E1213 01:54:49.107997 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.108637 kubelet[3281]: E1213 01:54:49.108398 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.108637 kubelet[3281]: W1213 01:54:49.108431 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.108637 kubelet[3281]: E1213 01:54:49.108464 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.109287 kubelet[3281]: E1213 01:54:49.108889 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.109287 kubelet[3281]: W1213 01:54:49.108916 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.111003 kubelet[3281]: E1213 01:54:49.109648 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.111499 kubelet[3281]: E1213 01:54:49.111184 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.111499 kubelet[3281]: W1213 01:54:49.111207 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.111499 kubelet[3281]: E1213 01:54:49.111253 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.111869 kubelet[3281]: E1213 01:54:49.111844 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.112102 kubelet[3281]: W1213 01:54:49.111948 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.112102 kubelet[3281]: E1213 01:54:49.112003 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.113250 kubelet[3281]: E1213 01:54:49.113207 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.113968 kubelet[3281]: W1213 01:54:49.113657 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.113968 kubelet[3281]: E1213 01:54:49.113722 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.113968 kubelet[3281]: I1213 01:54:49.113814 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9f62b98e-3864-4ed3-b68c-e8d11f28b312-kubelet-dir\") pod \"csi-node-driver-d2z4r\" (UID: \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\") " pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:54:49.116271 kubelet[3281]: E1213 01:54:49.115649 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.116599 kubelet[3281]: W1213 01:54:49.116446 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.116599 kubelet[3281]: E1213 01:54:49.116529 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.116939 kubelet[3281]: I1213 01:54:49.116822 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9f62b98e-3864-4ed3-b68c-e8d11f28b312-socket-dir\") pod \"csi-node-driver-d2z4r\" (UID: \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\") " pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:54:49.118008 kubelet[3281]: E1213 01:54:49.116999 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.118008 kubelet[3281]: W1213 01:54:49.117021 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.118008 kubelet[3281]: E1213 01:54:49.117747 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.118274 kubelet[3281]: E1213 01:54:49.118234 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.118274 kubelet[3281]: W1213 01:54:49.118269 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.118675 kubelet[3281]: E1213 01:54:49.118403 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.118755 kubelet[3281]: E1213 01:54:49.118733 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.118807 kubelet[3281]: W1213 01:54:49.118753 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.118807 kubelet[3281]: E1213 01:54:49.118784 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.119707 kubelet[3281]: E1213 01:54:49.119083 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.119707 kubelet[3281]: W1213 01:54:49.119703 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.119880 kubelet[3281]: E1213 01:54:49.119753 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.194902 containerd[2034]: time="2024-12-13T01:54:49.194820252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64b6cfddbc-xw5bx,Uid:de91cbcb-224c-4ab6-9b11-afafcf08b79a,Namespace:calico-system,Attempt:0,} returns sandbox id \"3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a\"" Dec 13 01:54:49.198625 containerd[2034]: time="2024-12-13T01:54:49.198529272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:54:49.218071 kubelet[3281]: E1213 01:54:49.218011 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.218071 kubelet[3281]: W1213 01:54:49.218053 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.218972 kubelet[3281]: E1213 01:54:49.218093 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.219805 kubelet[3281]: E1213 01:54:49.219539 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.219805 kubelet[3281]: W1213 01:54:49.219609 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.219805 kubelet[3281]: E1213 01:54:49.219672 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.221055 kubelet[3281]: E1213 01:54:49.220490 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.221055 kubelet[3281]: W1213 01:54:49.220520 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.221055 kubelet[3281]: E1213 01:54:49.220563 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.221549 kubelet[3281]: E1213 01:54:49.221381 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.221549 kubelet[3281]: W1213 01:54:49.221406 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.221549 kubelet[3281]: E1213 01:54:49.221469 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.223286 kubelet[3281]: E1213 01:54:49.223061 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.223286 kubelet[3281]: W1213 01:54:49.223093 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.223286 kubelet[3281]: E1213 01:54:49.223154 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.223882 kubelet[3281]: E1213 01:54:49.223708 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.223882 kubelet[3281]: W1213 01:54:49.223732 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.223882 kubelet[3281]: E1213 01:54:49.223828 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.225192 kubelet[3281]: E1213 01:54:49.224750 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.225192 kubelet[3281]: W1213 01:54:49.224779 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.225192 kubelet[3281]: E1213 01:54:49.224874 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.226504 kubelet[3281]: E1213 01:54:49.225541 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.226504 kubelet[3281]: W1213 01:54:49.225566 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.226504 kubelet[3281]: E1213 01:54:49.225734 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.227914 kubelet[3281]: E1213 01:54:49.227354 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.227914 kubelet[3281]: W1213 01:54:49.227692 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.227914 kubelet[3281]: E1213 01:54:49.227775 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.230249 kubelet[3281]: E1213 01:54:49.229356 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.230249 kubelet[3281]: W1213 01:54:49.229393 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.230249 kubelet[3281]: E1213 01:54:49.229443 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.230249 kubelet[3281]: E1213 01:54:49.229957 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.230249 kubelet[3281]: W1213 01:54:49.229985 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.230249 kubelet[3281]: E1213 01:54:49.230048 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.232366 kubelet[3281]: E1213 01:54:49.231726 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.232366 kubelet[3281]: W1213 01:54:49.232198 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.232366 kubelet[3281]: E1213 01:54:49.232272 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.235010 kubelet[3281]: E1213 01:54:49.234729 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.235010 kubelet[3281]: W1213 01:54:49.234765 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.235010 kubelet[3281]: E1213 01:54:49.234833 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.235905 kubelet[3281]: E1213 01:54:49.235468 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.235905 kubelet[3281]: W1213 01:54:49.235500 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.235905 kubelet[3281]: E1213 01:54:49.235629 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.237944 kubelet[3281]: E1213 01:54:49.236347 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.237944 kubelet[3281]: W1213 01:54:49.236378 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.237944 kubelet[3281]: E1213 01:54:49.236511 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.237944 kubelet[3281]: E1213 01:54:49.237864 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.237944 kubelet[3281]: W1213 01:54:49.237889 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.238366 kubelet[3281]: E1213 01:54:49.238309 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.239213 kubelet[3281]: E1213 01:54:49.239167 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.239811 kubelet[3281]: W1213 01:54:49.239421 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.240096 kubelet[3281]: E1213 01:54:49.240042 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.240463 kubelet[3281]: E1213 01:54:49.240419 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.241250 kubelet[3281]: W1213 01:54:49.240456 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.241250 kubelet[3281]: E1213 01:54:49.240955 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.242412 kubelet[3281]: E1213 01:54:49.242366 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.242412 kubelet[3281]: W1213 01:54:49.242403 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.243158 kubelet[3281]: E1213 01:54:49.243033 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.244604 kubelet[3281]: E1213 01:54:49.243821 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.244604 kubelet[3281]: W1213 01:54:49.244060 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.244919 kubelet[3281]: E1213 01:54:49.244827 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.245566 kubelet[3281]: E1213 01:54:49.245496 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.245566 kubelet[3281]: W1213 01:54:49.245535 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.246942 kubelet[3281]: E1213 01:54:49.246467 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.246942 kubelet[3281]: E1213 01:54:49.246760 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.246942 kubelet[3281]: W1213 01:54:49.246780 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.247623 kubelet[3281]: E1213 01:54:49.247507 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.248956 kubelet[3281]: E1213 01:54:49.248913 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.248956 kubelet[3281]: W1213 01:54:49.248950 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.249482 kubelet[3281]: E1213 01:54:49.249453 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.250565 kubelet[3281]: E1213 01:54:49.250523 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.250565 kubelet[3281]: W1213 01:54:49.250560 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.250763 kubelet[3281]: E1213 01:54:49.250721 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.252095 kubelet[3281]: E1213 01:54:49.251303 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.252095 kubelet[3281]: W1213 01:54:49.251338 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.252095 kubelet[3281]: E1213 01:54:49.251381 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.253923 kubelet[3281]: E1213 01:54:49.253851 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.253923 kubelet[3281]: W1213 01:54:49.253895 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.254198 kubelet[3281]: E1213 01:54:49.253943 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.272497 kubelet[3281]: E1213 01:54:49.272447 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.272497 kubelet[3281]: W1213 01:54:49.272483 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.272742 kubelet[3281]: E1213 01:54:49.272520 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.338393 kubelet[3281]: E1213 01:54:49.338246 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.338393 kubelet[3281]: W1213 01:54:49.338304 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.338393 kubelet[3281]: E1213 01:54:49.338345 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.439925 kubelet[3281]: E1213 01:54:49.439791 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.439925 kubelet[3281]: W1213 01:54:49.439853 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.440277 kubelet[3281]: E1213 01:54:49.440021 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.541736 kubelet[3281]: E1213 01:54:49.541685 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.541736 kubelet[3281]: W1213 01:54:49.541723 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.542058 kubelet[3281]: E1213 01:54:49.541762 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.607504 systemd[1]: run-containerd-runc-k8s.io-3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a-runc.oNGp17.mount: Deactivated successfully. Dec 13 01:54:49.643689 kubelet[3281]: E1213 01:54:49.643517 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.643689 kubelet[3281]: W1213 01:54:49.643549 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.643689 kubelet[3281]: E1213 01:54:49.643616 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.745745 kubelet[3281]: E1213 01:54:49.745689 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.745745 kubelet[3281]: W1213 01:54:49.745732 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.745947 kubelet[3281]: E1213 01:54:49.745773 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:49.846998 kubelet[3281]: E1213 01:54:49.846876 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.846998 kubelet[3281]: W1213 01:54:49.846914 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.846998 kubelet[3281]: E1213 01:54:49.846952 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:49.902415 kubelet[3281]: E1213 01:54:49.901689 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:49.902415 kubelet[3281]: W1213 01:54:49.901722 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:49.902415 kubelet[3281]: E1213 01:54:49.902353 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:50.015542 containerd[2034]: time="2024-12-13T01:54:50.015423024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j64x9,Uid:74533a73-514a-4f35-8d6a-c83bd99bd4e5,Namespace:calico-system,Attempt:0,}" Dec 13 01:54:50.059055 containerd[2034]: time="2024-12-13T01:54:50.058878480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:54:50.059055 containerd[2034]: time="2024-12-13T01:54:50.058988844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:54:50.059337 containerd[2034]: time="2024-12-13T01:54:50.059027988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:50.059337 containerd[2034]: time="2024-12-13T01:54:50.059199336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:54:50.104155 systemd[1]: Started cri-containerd-23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06.scope - libcontainer container 23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06. Dec 13 01:54:50.175218 containerd[2034]: time="2024-12-13T01:54:50.174941305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j64x9,Uid:74533a73-514a-4f35-8d6a-c83bd99bd4e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\"" Dec 13 01:54:50.655898 kubelet[3281]: E1213 01:54:50.655814 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:50.666875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3718613184.mount: Deactivated successfully. 
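The three-message pattern that repeats above comes from the kubelet's FlexVolume prober: it finds the plugin directory `nodeagent~uds`, tries to execute its `uds` driver with the `init` argument, the binary does not exist yet, so the call produces no output, and the empty string then fails JSON decoding. A minimal Go sketch (not kubelet source; the binary name below is deliberately one that is missing) reproduces both error strings seen in the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver is expected to print
// for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// A bare name that is not on $PATH stands in for the absent uds driver.
	out, err := exec.Command("uds-driver-not-installed", "init").CombinedOutput()
	if err != nil {
		// -> exec: "uds-driver-not-installed": executable file not found in $PATH
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// -> unexpected end of JSON input (empty output is not valid JSON)
		fmt.Printf("failed to unmarshal output: %v\n", err)
	}
}
```

Once a real driver binary appears under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, the same probe parses a proper status object and the messages stop.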
Dec 13 01:54:51.468690 containerd[2034]: time="2024-12-13T01:54:51.468632392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:51.471226 containerd[2034]: time="2024-12-13T01:54:51.471149656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:54:51.472949 containerd[2034]: time="2024-12-13T01:54:51.472903720Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:51.479189 containerd[2034]: time="2024-12-13T01:54:51.479085724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:51.481655 containerd[2034]: time="2024-12-13T01:54:51.480904024Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.281208916s" Dec 13 01:54:51.481655 containerd[2034]: time="2024-12-13T01:54:51.480960424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:54:51.482813 containerd[2034]: time="2024-12-13T01:54:51.482457616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:54:51.521385 containerd[2034]: time="2024-12-13T01:54:51.521039872Z" level=info msg="CreateContainer within sandbox \"3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:54:51.553243 containerd[2034]: time="2024-12-13T01:54:51.552771676Z" level=info msg="CreateContainer within sandbox \"3931e3f0ab749e80f83229611d19ba13ac6a095460af63c54431811790668c2a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3d5f915ae3a8e3d3d10bfadd23a9e1f886c5bef950e74455aae58b97c7c551d2\"" Dec 13 01:54:51.555322 containerd[2034]: time="2024-12-13T01:54:51.555149944Z" level=info msg="StartContainer for \"3d5f915ae3a8e3d3d10bfadd23a9e1f886c5bef950e74455aae58b97c7c551d2\"" Dec 13 01:54:51.606280 systemd[1]: Started cri-containerd-3d5f915ae3a8e3d3d10bfadd23a9e1f886c5bef950e74455aae58b97c7c551d2.scope - libcontainer container 3d5f915ae3a8e3d3d10bfadd23a9e1f886c5bef950e74455aae58b97c7c551d2. 
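The "Pulled image" record above carries enough data to sanity-check registry bandwidth: 29231162 bytes over 2.281208916s works out to roughly 12 MiB/s. A small Go check of that arithmetic, with the constants copied from the record:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const pulledBytes = 29231162 // size reported for calico/typha:v3.29.1
	d, err := time.ParseDuration("2.281208916s")
	if err != nil {
		panic(err)
	}
	rate := float64(pulledBytes) / d.Seconds() / (1 << 20)
	fmt.Printf("effective pull rate: %.1f MiB/s\n", rate) // ~12.2 MiB/s
}
```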
Dec 13 01:54:51.711306 containerd[2034]: time="2024-12-13T01:54:51.710987357Z" level=info msg="StartContainer for \"3d5f915ae3a8e3d3d10bfadd23a9e1f886c5bef950e74455aae58b97c7c551d2\" returns successfully" Dec 13 01:54:51.923930 kubelet[3281]: E1213 01:54:51.923635 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.923930 kubelet[3281]: W1213 01:54:51.923697 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.923930 kubelet[3281]: E1213 01:54:51.923735 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.926722 kubelet[3281]: E1213 01:54:51.925916 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.926722 kubelet[3281]: W1213 01:54:51.925974 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.926722 kubelet[3281]: E1213 01:54:51.926013 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.927797 kubelet[3281]: E1213 01:54:51.927238 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.927797 kubelet[3281]: W1213 01:54:51.927262 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.927797 kubelet[3281]: E1213 01:54:51.927295 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.928513 kubelet[3281]: E1213 01:54:51.928343 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.928513 kubelet[3281]: W1213 01:54:51.928491 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.928513 kubelet[3281]: E1213 01:54:51.928524 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.929605 kubelet[3281]: E1213 01:54:51.929417 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.929605 kubelet[3281]: W1213 01:54:51.929469 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.929605 kubelet[3281]: E1213 01:54:51.929503 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.930252 kubelet[3281]: E1213 01:54:51.929997 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.930252 kubelet[3281]: W1213 01:54:51.930026 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.930252 kubelet[3281]: E1213 01:54:51.930055 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.930476 kubelet[3281]: E1213 01:54:51.930423 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.930476 kubelet[3281]: W1213 01:54:51.930443 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.930476 kubelet[3281]: E1213 01:54:51.930468 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.930838 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.931724 kubelet[3281]: W1213 01:54:51.930868 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.930895 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.931249 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.931724 kubelet[3281]: W1213 01:54:51.931269 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.931294 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.931649 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.931724 kubelet[3281]: W1213 01:54:51.931666 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.931724 kubelet[3281]: E1213 01:54:51.931690 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.932284 kubelet[3281]: E1213 01:54:51.932019 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.932284 kubelet[3281]: W1213 01:54:51.932036 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.932284 kubelet[3281]: E1213 01:54:51.932062 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.932440 kubelet[3281]: E1213 01:54:51.932341 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.932440 kubelet[3281]: W1213 01:54:51.932355 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.932440 kubelet[3281]: E1213 01:54:51.932381 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.933331 kubelet[3281]: E1213 01:54:51.932688 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.933331 kubelet[3281]: W1213 01:54:51.932716 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.933331 kubelet[3281]: E1213 01:54:51.932743 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.933331 kubelet[3281]: E1213 01:54:51.933083 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.933331 kubelet[3281]: W1213 01:54:51.933123 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.933331 kubelet[3281]: E1213 01:54:51.933150 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.934413 kubelet[3281]: E1213 01:54:51.933634 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.934413 kubelet[3281]: W1213 01:54:51.933659 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.934413 kubelet[3281]: E1213 01:54:51.933779 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.965176 kubelet[3281]: E1213 01:54:51.965125 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.965176 kubelet[3281]: W1213 01:54:51.965162 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.965407 kubelet[3281]: E1213 01:54:51.965220 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.965807 kubelet[3281]: E1213 01:54:51.965767 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.965807 kubelet[3281]: W1213 01:54:51.965797 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.965982 kubelet[3281]: E1213 01:54:51.965844 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.966523 kubelet[3281]: E1213 01:54:51.966316 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.966523 kubelet[3281]: W1213 01:54:51.966371 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.967899 kubelet[3281]: E1213 01:54:51.966767 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.967899 kubelet[3281]: E1213 01:54:51.966919 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.967899 kubelet[3281]: W1213 01:54:51.967054 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.967899 kubelet[3281]: E1213 01:54:51.967097 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.969941 kubelet[3281]: E1213 01:54:51.969680 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.969941 kubelet[3281]: W1213 01:54:51.969718 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.969941 kubelet[3281]: E1213 01:54:51.969801 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.971077 kubelet[3281]: E1213 01:54:51.970969 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.971538 kubelet[3281]: W1213 01:54:51.971446 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.972341 kubelet[3281]: E1213 01:54:51.971776 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.973652 kubelet[3281]: E1213 01:54:51.973594 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.973851 kubelet[3281]: W1213 01:54:51.973824 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.974485 kubelet[3281]: E1213 01:54:51.973990 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.975778 kubelet[3281]: E1213 01:54:51.975030 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.975778 kubelet[3281]: W1213 01:54:51.975063 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.975778 kubelet[3281]: E1213 01:54:51.975211 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.976933 kubelet[3281]: E1213 01:54:51.976876 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.978341 kubelet[3281]: W1213 01:54:51.978048 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.978341 kubelet[3281]: E1213 01:54:51.978266 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.979067 kubelet[3281]: E1213 01:54:51.979019 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.979067 kubelet[3281]: W1213 01:54:51.979055 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.979415 kubelet[3281]: E1213 01:54:51.979250 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.979556 kubelet[3281]: E1213 01:54:51.979509 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.979556 kubelet[3281]: W1213 01:54:51.979526 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.979749 kubelet[3281]: E1213 01:54:51.979643 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.980298 kubelet[3281]: E1213 01:54:51.980268 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.980298 kubelet[3281]: W1213 01:54:51.980296 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.980565 kubelet[3281]: E1213 01:54:51.980441 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.980699 kubelet[3281]: E1213 01:54:51.980657 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.980699 kubelet[3281]: W1213 01:54:51.980673 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.980815 kubelet[3281]: E1213 01:54:51.980782 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.981193 kubelet[3281]: E1213 01:54:51.981164 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.981193 kubelet[3281]: W1213 01:54:51.981190 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.981320 kubelet[3281]: E1213 01:54:51.981234 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:51.981672 kubelet[3281]: E1213 01:54:51.981643 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.981672 kubelet[3281]: W1213 01:54:51.981669 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.981825 kubelet[3281]: E1213 01:54:51.981717 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.982682 kubelet[3281]: E1213 01:54:51.982641 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.982682 kubelet[3281]: W1213 01:54:51.982670 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.982838 kubelet[3281]: E1213 01:54:51.982701 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.983918 kubelet[3281]: E1213 01:54:51.983876 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.983918 kubelet[3281]: W1213 01:54:51.983912 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.984116 kubelet[3281]: E1213 01:54:51.983970 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:51.984411 kubelet[3281]: E1213 01:54:51.984363 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:51.984411 kubelet[3281]: W1213 01:54:51.984391 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:51.984563 kubelet[3281]: E1213 01:54:51.984420 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.657698 kubelet[3281]: E1213 01:54:52.657639 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:52.796682 containerd[2034]: time="2024-12-13T01:54:52.796403214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.798947 containerd[2034]: time="2024-12-13T01:54:52.798647934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:54:52.800677 containerd[2034]: time="2024-12-13T01:54:52.800595810Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.806710 containerd[2034]: time="2024-12-13T01:54:52.806654478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:52.808366 containerd[2034]: time="2024-12-13T01:54:52.808141770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.32548061s" Dec 13 01:54:52.808366 containerd[2034]: time="2024-12-13T01:54:52.808205274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:54:52.813078 containerd[2034]: time="2024-12-13T01:54:52.812839506Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:54:52.836483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3969908154.mount: Deactivated successfully. Dec 13 01:54:52.843101 kubelet[3281]: I1213 01:54:52.843046 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:54:52.851202 containerd[2034]: time="2024-12-13T01:54:52.851062554Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b\"" Dec 13 01:54:52.852986 containerd[2034]: time="2024-12-13T01:54:52.851888382Z" level=info msg="StartContainer for \"2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b\"" Dec 13 01:54:52.924246 systemd[1]: run-containerd-runc-k8s.io-2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b-runc.jKYee0.mount: Deactivated successfully. 
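The flexvol-driver container created above (from the pod2daemon-flexvol image) is the piece that eventually ends the probe failures: its job is to drop the `uds` driver into the kubelet's FlexVolume plugin directory. A conceptual Go sketch of that install step follows; the source path inside the image is an assumption, and the write-then-rename is the detail that matters, since it keeps the prober from ever executing a half-copied binary:

```go
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "/usr/local/bin/uds" // assumed location of the driver inside the image
	dir := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds"
	if err := os.MkdirAll(dir, 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	tmp := filepath.Join(dir, ".uds.tmp")
	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := io.Copy(out, in); err != nil {
		out.Close()
		log.Fatal(err)
	}
	if err := out.Close(); err != nil {
		log.Fatal(err)
	}
	// Rename is atomic within a filesystem: the kubelet either finds no
	// driver (the errors above) or a complete executable, never a partial one.
	if err := os.Rename(tmp, filepath.Join(dir, "uds")); err != nil {
		log.Fatal(err)
	}
}
```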
Dec 13 01:54:52.936894 systemd[1]: Started cri-containerd-2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b.scope - libcontainer container 2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b. Dec 13 01:54:52.941816 kubelet[3281]: E1213 01:54:52.941775 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.943289 kubelet[3281]: W1213 01:54:52.943213 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.943289 kubelet[3281]: E1213 01:54:52.943266 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.943815 kubelet[3281]: E1213 01:54:52.943758 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.943815 kubelet[3281]: W1213 01:54:52.943789 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.943815 kubelet[3281]: E1213 01:54:52.943821 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.945711 kubelet[3281]: E1213 01:54:52.944242 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.945711 kubelet[3281]: W1213 01:54:52.944264 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.945711 kubelet[3281]: E1213 01:54:52.944292 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.945711 kubelet[3281]: E1213 01:54:52.944748 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.945711 kubelet[3281]: W1213 01:54:52.944776 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.945711 kubelet[3281]: E1213 01:54:52.944809 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.946836 kubelet[3281]: E1213 01:54:52.946018 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.946836 kubelet[3281]: W1213 01:54:52.946044 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.946836 kubelet[3281]: E1213 01:54:52.946094 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.946836 kubelet[3281]: E1213 01:54:52.946477 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.946836 kubelet[3281]: W1213 01:54:52.946498 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.946836 kubelet[3281]: E1213 01:54:52.946525 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.946836 kubelet[3281]: E1213 01:54:52.946828 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.946836 kubelet[3281]: W1213 01:54:52.946845 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.947212 kubelet[3281]: E1213 01:54:52.946869 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.947212 kubelet[3281]: E1213 01:54:52.947125 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.947212 kubelet[3281]: W1213 01:54:52.947140 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.947212 kubelet[3281]: E1213 01:54:52.947162 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.947418 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.948179 kubelet[3281]: W1213 01:54:52.947445 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.947471 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.947805 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.948179 kubelet[3281]: W1213 01:54:52.947825 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.947851 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.948127 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.948179 kubelet[3281]: W1213 01:54:52.948144 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.948171 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.948179 kubelet[3281]: E1213 01:54:52.948466 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.951228 kubelet[3281]: W1213 01:54:52.948487 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.948513 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.948840 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.951228 kubelet[3281]: W1213 01:54:52.948858 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.948884 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.949201 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.951228 kubelet[3281]: W1213 01:54:52.949219 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.949245 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.951228 kubelet[3281]: E1213 01:54:52.949585 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.951228 kubelet[3281]: W1213 01:54:52.949605 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.953038 kubelet[3281]: E1213 01:54:52.949630 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.976514 kubelet[3281]: E1213 01:54:52.976289 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.976514 kubelet[3281]: W1213 01:54:52.976321 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.976514 kubelet[3281]: E1213 01:54:52.976383 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.977327 kubelet[3281]: E1213 01:54:52.977210 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.977327 kubelet[3281]: W1213 01:54:52.977259 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.978606 kubelet[3281]: E1213 01:54:52.978499 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.979045 kubelet[3281]: E1213 01:54:52.978751 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.979045 kubelet[3281]: W1213 01:54:52.978773 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.979045 kubelet[3281]: E1213 01:54:52.978942 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.981288 kubelet[3281]: E1213 01:54:52.981042 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.981288 kubelet[3281]: W1213 01:54:52.981122 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.983357 kubelet[3281]: E1213 01:54:52.981172 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.986695 kubelet[3281]: E1213 01:54:52.985145 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.986695 kubelet[3281]: W1213 01:54:52.985180 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.986695 kubelet[3281]: E1213 01:54:52.986140 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.986695 kubelet[3281]: W1213 01:54:52.986181 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.987271 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.988263 kubelet[3281]: W1213 01:54:52.987303 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.987339 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.987840 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.988263 kubelet[3281]: W1213 01:54:52.987857 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.987917 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.988041 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.988263 kubelet[3281]: E1213 01:54:52.988110 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.989001 kubelet[3281]: E1213 01:54:52.988475 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.989001 kubelet[3281]: W1213 01:54:52.988493 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.989001 kubelet[3281]: E1213 01:54:52.988528 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:52.989001 kubelet[3281]: E1213 01:54:52.988971 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.989001 kubelet[3281]: W1213 01:54:52.988987 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.989260 kubelet[3281]: E1213 01:54:52.989023 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.991850 kubelet[3281]: E1213 01:54:52.990919 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.991850 kubelet[3281]: W1213 01:54:52.990962 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.991850 kubelet[3281]: E1213 01:54:52.991016 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.992321 kubelet[3281]: E1213 01:54:52.992021 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.992321 kubelet[3281]: W1213 01:54:52.992047 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.992321 kubelet[3281]: E1213 01:54:52.992092 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.993537 kubelet[3281]: E1213 01:54:52.993148 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.993537 kubelet[3281]: W1213 01:54:52.993180 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.993537 kubelet[3281]: E1213 01:54:52.993355 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:52.998154 kubelet[3281]: E1213 01:54:52.998089 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:52.999457 kubelet[3281]: W1213 01:54:52.998328 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:52.999457 kubelet[3281]: E1213 01:54:52.998667 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:54:53.003372 kubelet[3281]: E1213 01:54:53.003099 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:53.003372 kubelet[3281]: W1213 01:54:53.003128 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:53.004051 kubelet[3281]: E1213 01:54:53.003791 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:53.004356 kubelet[3281]: E1213 01:54:53.004320 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:53.004527 kubelet[3281]: W1213 01:54:53.004453 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:53.004527 kubelet[3281]: E1213 01:54:53.004492 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:53.006329 kubelet[3281]: E1213 01:54:53.006151 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:53.006329 kubelet[3281]: W1213 01:54:53.006186 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:53.006329 kubelet[3281]: E1213 01:54:53.006245 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:53.008971 kubelet[3281]: E1213 01:54:53.008701 3281 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:54:53.008971 kubelet[3281]: W1213 01:54:53.008734 3281 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:54:53.008971 kubelet[3281]: E1213 01:54:53.008770 3281 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:54:53.011050 containerd[2034]: time="2024-12-13T01:54:53.010870419Z" level=info msg="StartContainer for \"2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b\" returns successfully" Dec 13 01:54:53.038668 systemd[1]: cri-containerd-2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b.scope: Deactivated successfully. 
Dec 13 01:54:53.577468 containerd[2034]: time="2024-12-13T01:54:53.577323810Z" level=info msg="shim disconnected" id=2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b namespace=k8s.io Dec 13 01:54:53.577468 containerd[2034]: time="2024-12-13T01:54:53.577403238Z" level=warning msg="cleaning up after shim disconnected" id=2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b namespace=k8s.io Dec 13 01:54:53.577468 containerd[2034]: time="2024-12-13T01:54:53.577427550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:54:53.831045 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2bff99eddb59da7f59219c919b9ba2a9a1cf95e1baf3d1db4e28f55302242a5b-rootfs.mount: Deactivated successfully. Dec 13 01:54:53.851286 containerd[2034]: time="2024-12-13T01:54:53.851212267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:54:53.880428 kubelet[3281]: I1213 01:54:53.879048 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-64b6cfddbc-xw5bx" podStartSLOduration=3.595216863 podStartE2EDuration="5.878966983s" podCreationTimestamp="2024-12-13 01:54:48 +0000 UTC" firstStartedPulling="2024-12-13 01:54:49.197630856 +0000 UTC m=+21.762220249" lastFinishedPulling="2024-12-13 01:54:51.481380976 +0000 UTC m=+24.045970369" observedRunningTime="2024-12-13 01:54:51.87854025 +0000 UTC m=+24.443129643" watchObservedRunningTime="2024-12-13 01:54:53.878966983 +0000 UTC m=+26.443556376" Dec 13 01:54:54.655868 kubelet[3281]: E1213 01:54:54.655787 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:55.615463 kubelet[3281]: I1213 01:54:55.615342 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:54:56.656038 kubelet[3281]: E1213 01:54:56.655991 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:58.104250 containerd[2034]: time="2024-12-13T01:54:58.104171444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:58.105998 containerd[2034]: time="2024-12-13T01:54:58.105900452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:54:58.107915 containerd[2034]: time="2024-12-13T01:54:58.107834324Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:58.112672 containerd[2034]: time="2024-12-13T01:54:58.112565469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:54:58.114214 containerd[2034]: time="2024-12-13T01:54:58.114165777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.262888002s" Dec 13 01:54:58.114467 containerd[2034]: time="2024-12-13T01:54:58.114347877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:54:58.118167 containerd[2034]: time="2024-12-13T01:54:58.117959409Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:54:58.149939 containerd[2034]: time="2024-12-13T01:54:58.149800953Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c\"" Dec 13 01:54:58.152433 containerd[2034]: time="2024-12-13T01:54:58.150514905Z" level=info msg="StartContainer for \"8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c\"" Dec 13 01:54:58.218135 systemd[1]: Started cri-containerd-8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c.scope - libcontainer container 8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c. Dec 13 01:54:58.293952 containerd[2034]: time="2024-12-13T01:54:58.293873541Z" level=info msg="StartContainer for \"8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c\" returns successfully" Dec 13 01:54:58.656307 kubelet[3281]: E1213 01:54:58.656235 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:54:59.738391 containerd[2034]: time="2024-12-13T01:54:59.738305305Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:54:59.743254 systemd[1]: cri-containerd-8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c.scope: Deactivated successfully. Dec 13 01:54:59.749617 kubelet[3281]: I1213 01:54:59.745841 3281 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:54:59.802033 kubelet[3281]: I1213 01:54:59.801976 3281 topology_manager.go:215] "Topology Admit Handler" podUID="8fb4856f-34b3-468a-a336-454740015a6b" podNamespace="kube-system" podName="coredns-76f75df574-fql84" Dec 13 01:54:59.815330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c-rootfs.mount: Deactivated successfully. 
Dec 13 01:54:59.833533 kubelet[3281]: I1213 01:54:59.832002 3281 topology_manager.go:215] "Topology Admit Handler" podUID="b5d5aafc-d67a-4e3e-a8b1-c9d750914db8" podNamespace="calico-system" podName="calico-kube-controllers-75f8859b4-fmpls" Dec 13 01:54:59.838611 kubelet[3281]: I1213 01:54:59.835093 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8fb4856f-34b3-468a-a336-454740015a6b-config-volume\") pod \"coredns-76f75df574-fql84\" (UID: \"8fb4856f-34b3-468a-a336-454740015a6b\") " pod="kube-system/coredns-76f75df574-fql84" Dec 13 01:54:59.838611 kubelet[3281]: I1213 01:54:59.835738 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzckh\" (UniqueName: \"kubernetes.io/projected/8fb4856f-34b3-468a-a336-454740015a6b-kube-api-access-tzckh\") pod \"coredns-76f75df574-fql84\" (UID: \"8fb4856f-34b3-468a-a336-454740015a6b\") " pod="kube-system/coredns-76f75df574-fql84" Dec 13 01:54:59.834733 systemd[1]: Created slice kubepods-burstable-pod8fb4856f_34b3_468a_a336_454740015a6b.slice - libcontainer container kubepods-burstable-pod8fb4856f_34b3_468a_a336_454740015a6b.slice. Dec 13 01:54:59.842188 kubelet[3281]: I1213 01:54:59.838826 3281 topology_manager.go:215] "Topology Admit Handler" podUID="49193545-8953-4b0d-8299-dd1e3ecf467d" podNamespace="kube-system" podName="coredns-76f75df574-bthsx" Dec 13 01:54:59.847622 kubelet[3281]: I1213 01:54:59.846169 3281 topology_manager.go:215] "Topology Admit Handler" podUID="235db396-35e0-49e0-bcd3-929b0c0c50eb" podNamespace="calico-apiserver" podName="calico-apiserver-5bbbb5bd5-svpxb" Dec 13 01:54:59.849242 kubelet[3281]: I1213 01:54:59.849189 3281 topology_manager.go:215] "Topology Admit Handler" podUID="439df88a-c40e-4828-a6b8-8bfb2c3a7727" podNamespace="calico-apiserver" podName="calico-apiserver-5bbbb5bd5-jlszc" Dec 13 01:54:59.866869 systemd[1]: Created slice kubepods-besteffort-podb5d5aafc_d67a_4e3e_a8b1_c9d750914db8.slice - libcontainer container kubepods-besteffort-podb5d5aafc_d67a_4e3e_a8b1_c9d750914db8.slice. Dec 13 01:54:59.891270 systemd[1]: Created slice kubepods-burstable-pod49193545_8953_4b0d_8299_dd1e3ecf467d.slice - libcontainer container kubepods-burstable-pod49193545_8953_4b0d_8299_dd1e3ecf467d.slice. Dec 13 01:54:59.908728 systemd[1]: Created slice kubepods-besteffort-pod439df88a_c40e_4828_a6b8_8bfb2c3a7727.slice - libcontainer container kubepods-besteffort-pod439df88a_c40e_4828_a6b8_8bfb2c3a7727.slice. Dec 13 01:54:59.926641 systemd[1]: Created slice kubepods-besteffort-pod235db396_35e0_49e0_bcd3_929b0c0c50eb.slice - libcontainer container kubepods-besteffort-pod235db396_35e0_49e0_bcd3_929b0c0c50eb.slice. 
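
The Created slice entries above follow kubelet's systemd cgroup naming scheme: the pod's QoS class (burstable for the coredns pods, besteffort for the calico pods) plus the pod UID with dashes swapped for underscores, because systemd treats "-" in slice unit names as a hierarchy separator. A small reproduction of the mapping, checked against the names in the log:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // systemd uses "-" to express parent/child slices, so kubelet
    // escapes the dashes inside pod UIDs to underscores.
    func podSlice(qosClass, podUID string) string {
    	return fmt.Sprintf("kubepods-%s-pod%s.slice",
    		qosClass, strings.ReplaceAll(podUID, "-", "_"))
    }

    func main() {
    	fmt.Println(podSlice("burstable", "8fb4856f-34b3-468a-a336-454740015a6b"))
    	fmt.Println(podSlice("besteffort", "b5d5aafc-d67a-4e3e-a8b1-c9d750914db8"))
    	// kubepods-burstable-pod8fb4856f_34b3_468a_a336_454740015a6b.slice
    	// kubepods-besteffort-podb5d5aafc_d67a_4e3e_a8b1_c9d750914db8.slice
    }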
Dec 13 01:54:59.936959 kubelet[3281]: I1213 01:54:59.936893 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49193545-8953-4b0d-8299-dd1e3ecf467d-config-volume\") pod \"coredns-76f75df574-bthsx\" (UID: \"49193545-8953-4b0d-8299-dd1e3ecf467d\") " pod="kube-system/coredns-76f75df574-bthsx" Dec 13 01:54:59.937637 kubelet[3281]: I1213 01:54:59.937097 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjffk\" (UniqueName: \"kubernetes.io/projected/b5d5aafc-d67a-4e3e-a8b1-c9d750914db8-kube-api-access-jjffk\") pod \"calico-kube-controllers-75f8859b4-fmpls\" (UID: \"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8\") " pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" Dec 13 01:54:59.937637 kubelet[3281]: I1213 01:54:59.937151 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlqnt\" (UniqueName: \"kubernetes.io/projected/49193545-8953-4b0d-8299-dd1e3ecf467d-kube-api-access-hlqnt\") pod \"coredns-76f75df574-bthsx\" (UID: \"49193545-8953-4b0d-8299-dd1e3ecf467d\") " pod="kube-system/coredns-76f75df574-bthsx" Dec 13 01:54:59.937637 kubelet[3281]: I1213 01:54:59.937197 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5d5aafc-d67a-4e3e-a8b1-c9d750914db8-tigera-ca-bundle\") pod \"calico-kube-controllers-75f8859b4-fmpls\" (UID: \"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8\") " pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" Dec 13 01:54:59.937637 kubelet[3281]: I1213 01:54:59.937277 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/235db396-35e0-49e0-bcd3-929b0c0c50eb-calico-apiserver-certs\") pod \"calico-apiserver-5bbbb5bd5-svpxb\" (UID: \"235db396-35e0-49e0-bcd3-929b0c0c50eb\") " pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" Dec 13 01:54:59.937637 kubelet[3281]: I1213 01:54:59.937334 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6p5c\" (UniqueName: \"kubernetes.io/projected/235db396-35e0-49e0-bcd3-929b0c0c50eb-kube-api-access-b6p5c\") pod \"calico-apiserver-5bbbb5bd5-svpxb\" (UID: \"235db396-35e0-49e0-bcd3-929b0c0c50eb\") " pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" Dec 13 01:54:59.937939 kubelet[3281]: I1213 01:54:59.937385 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/439df88a-c40e-4828-a6b8-8bfb2c3a7727-calico-apiserver-certs\") pod \"calico-apiserver-5bbbb5bd5-jlszc\" (UID: \"439df88a-c40e-4828-a6b8-8bfb2c3a7727\") " pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" Dec 13 01:54:59.937939 kubelet[3281]: I1213 01:54:59.937431 3281 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bq59\" (UniqueName: \"kubernetes.io/projected/439df88a-c40e-4828-a6b8-8bfb2c3a7727-kube-api-access-7bq59\") pod \"calico-apiserver-5bbbb5bd5-jlszc\" (UID: \"439df88a-c40e-4828-a6b8-8bfb2c3a7727\") " pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" Dec 13 01:55:00.147870 containerd[2034]: time="2024-12-13T01:55:00.147447695Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-fql84,Uid:8fb4856f-34b3-468a-a336-454740015a6b,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:00.189380 containerd[2034]: time="2024-12-13T01:55:00.189017999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f8859b4-fmpls,Uid:b5d5aafc-d67a-4e3e-a8b1-c9d750914db8,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:00.199953 containerd[2034]: time="2024-12-13T01:55:00.199902515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bthsx,Uid:49193545-8953-4b0d-8299-dd1e3ecf467d,Namespace:kube-system,Attempt:0,}" Dec 13 01:55:00.219502 containerd[2034]: time="2024-12-13T01:55:00.219327719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-jlszc,Uid:439df88a-c40e-4828-a6b8-8bfb2c3a7727,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:55:00.242115 containerd[2034]: time="2024-12-13T01:55:00.242026679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-svpxb,Uid:235db396-35e0-49e0-bcd3-929b0c0c50eb,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:55:00.666751 systemd[1]: Created slice kubepods-besteffort-pod9f62b98e_3864_4ed3_b68c_e8d11f28b312.slice - libcontainer container kubepods-besteffort-pod9f62b98e_3864_4ed3_b68c_e8d11f28b312.slice. Dec 13 01:55:00.672154 containerd[2034]: time="2024-12-13T01:55:00.672103213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2z4r,Uid:9f62b98e-3864-4ed3-b68c-e8d11f28b312,Namespace:calico-system,Attempt:0,}" Dec 13 01:55:00.723533 containerd[2034]: time="2024-12-13T01:55:00.723167473Z" level=info msg="shim disconnected" id=8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c namespace=k8s.io Dec 13 01:55:00.723533 containerd[2034]: time="2024-12-13T01:55:00.723266713Z" level=warning msg="cleaning up after shim disconnected" id=8be66313c07fd5d3cabf7a2f78e704c04c3d0a94eccd8f72307fa4583b49464c namespace=k8s.io Dec 13 01:55:00.723533 containerd[2034]: time="2024-12-13T01:55:00.723287317Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:55:00.896384 containerd[2034]: time="2024-12-13T01:55:00.896047802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:55:01.144314 containerd[2034]: time="2024-12-13T01:55:01.144252300Z" level=error msg="Failed to destroy network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.150215 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356-shm.mount: Deactivated successfully. 
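
From here on, every sandbox operation dies on the same stat: Calico's CNI plugin resolves the node name from /var/lib/calico/nodename, a file the calico/node container writes when it starts, and calico/node is not running yet (its install-cni init step has only just finished). The delete path performs the same lookup, which is why the network teardown attempts fail with the identical message. A sketch of just the failing check, under the assumption spelled out in the error text itself; the real plugin does far more:

    package main

    import (
    	"fmt"
    	"os"
    )

    // Models only the guard behind the repeated sandbox errors: the
    // plugin needs the node name that calico/node writes at startup.
    func nodename() (string, error) {
    	const path = "/var/lib/calico/nodename"
    	if _, err := os.Stat(path); err != nil {
    		// Mirrors the log text: "stat /var/lib/calico/nodename: no
    		// such file or directory: check that the calico/node
    		// container is running and has mounted /var/lib/calico/"
    		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
    	}
    	b, err := os.ReadFile(path)
    	if err != nil {
    		return "", err
    	}
    	return string(b), nil
    }

    func main() {
    	if _, err := nodename(); err != nil {
    		fmt.Println(err) // fails until calico/node has started
    	}
    }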
Dec 13 01:55:01.153496 containerd[2034]: time="2024-12-13T01:55:01.153244608Z" level=error msg="encountered an error cleaning up failed sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.154252 containerd[2034]: time="2024-12-13T01:55:01.153906720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f8859b4-fmpls,Uid:b5d5aafc-d67a-4e3e-a8b1-c9d750914db8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.155603 kubelet[3281]: E1213 01:55:01.155493 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.157966 kubelet[3281]: E1213 01:55:01.155983 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" Dec 13 01:55:01.157966 kubelet[3281]: E1213 01:55:01.156248 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" Dec 13 01:55:01.157966 kubelet[3281]: E1213 01:55:01.156707 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75f8859b4-fmpls_calico-system(b5d5aafc-d67a-4e3e-a8b1-c9d750914db8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75f8859b4-fmpls_calico-system(b5d5aafc-d67a-4e3e-a8b1-c9d750914db8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" podUID="b5d5aafc-d67a-4e3e-a8b1-c9d750914db8" Dec 13 01:55:01.162675 containerd[2034]: time="2024-12-13T01:55:01.161631348Z" level=error msg="Failed to destroy network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.172217 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8-shm.mount: Deactivated successfully. Dec 13 01:55:01.183492 containerd[2034]: time="2024-12-13T01:55:01.179769468Z" level=error msg="encountered an error cleaning up failed sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.183492 containerd[2034]: time="2024-12-13T01:55:01.182731560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fql84,Uid:8fb4856f-34b3-468a-a336-454740015a6b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.183725 kubelet[3281]: E1213 01:55:01.183056 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.183725 kubelet[3281]: E1213 01:55:01.183131 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fql84" Dec 13 01:55:01.183725 kubelet[3281]: E1213 01:55:01.183167 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-fql84" Dec 13 01:55:01.183923 kubelet[3281]: E1213 01:55:01.183245 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fql84_kube-system(8fb4856f-34b3-468a-a336-454740015a6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fql84_kube-system(8fb4856f-34b3-468a-a336-454740015a6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fql84" 
podUID="8fb4856f-34b3-468a-a336-454740015a6b" Dec 13 01:55:01.226757 containerd[2034]: time="2024-12-13T01:55:01.224158740Z" level=error msg="Failed to destroy network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.228770 containerd[2034]: time="2024-12-13T01:55:01.228704520Z" level=error msg="encountered an error cleaning up failed sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.229757 containerd[2034]: time="2024-12-13T01:55:01.228944772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2z4r,Uid:9f62b98e-3864-4ed3-b68c-e8d11f28b312,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.230187 kubelet[3281]: E1213 01:55:01.230149 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.230548 kubelet[3281]: E1213 01:55:01.230524 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:55:01.230926 kubelet[3281]: E1213 01:55:01.230899 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d2z4r" Dec 13 01:55:01.231745 kubelet[3281]: E1213 01:55:01.231695 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d2z4r_calico-system(9f62b98e-3864-4ed3-b68c-e8d11f28b312)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d2z4r_calico-system(9f62b98e-3864-4ed3-b68c-e8d11f28b312)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:55:01.235355 containerd[2034]: time="2024-12-13T01:55:01.235249560Z" level=error msg="Failed to destroy network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.237084 containerd[2034]: time="2024-12-13T01:55:01.237007356Z" level=error msg="encountered an error cleaning up failed sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.237356 containerd[2034]: time="2024-12-13T01:55:01.237303300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bthsx,Uid:49193545-8953-4b0d-8299-dd1e3ecf467d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.237895 kubelet[3281]: E1213 01:55:01.237842 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.240040 kubelet[3281]: E1213 01:55:01.239995 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bthsx" Dec 13 01:55:01.240320 kubelet[3281]: E1213 01:55:01.240283 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bthsx" Dec 13 01:55:01.241207 kubelet[3281]: E1213 01:55:01.240518 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bthsx_kube-system(49193545-8953-4b0d-8299-dd1e3ecf467d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-bthsx_kube-system(49193545-8953-4b0d-8299-dd1e3ecf467d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bthsx" podUID="49193545-8953-4b0d-8299-dd1e3ecf467d" Dec 13 01:55:01.241958 containerd[2034]: time="2024-12-13T01:55:01.241884372Z" level=error msg="Failed to destroy network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.243919 containerd[2034]: time="2024-12-13T01:55:01.243258564Z" level=error msg="encountered an error cleaning up failed sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.244911 containerd[2034]: time="2024-12-13T01:55:01.244559580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-svpxb,Uid:235db396-35e0-49e0-bcd3-929b0c0c50eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.245924 kubelet[3281]: E1213 01:55:01.245876 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.246942 kubelet[3281]: E1213 01:55:01.245963 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" Dec 13 01:55:01.246942 kubelet[3281]: E1213 01:55:01.246003 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" Dec 13 01:55:01.246942 kubelet[3281]: E1213 01:55:01.246101 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbbb5bd5-svpxb_calico-apiserver(235db396-35e0-49e0-bcd3-929b0c0c50eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5bbbb5bd5-svpxb_calico-apiserver(235db396-35e0-49e0-bcd3-929b0c0c50eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" podUID="235db396-35e0-49e0-bcd3-929b0c0c50eb" Dec 13 01:55:01.257052 containerd[2034]: time="2024-12-13T01:55:01.256943280Z" level=error msg="Failed to destroy network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.257879 containerd[2034]: time="2024-12-13T01:55:01.257809284Z" level=error msg="encountered an error cleaning up failed sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.257983 containerd[2034]: time="2024-12-13T01:55:01.257932104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-jlszc,Uid:439df88a-c40e-4828-a6b8-8bfb2c3a7727,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.258621 kubelet[3281]: E1213 01:55:01.258294 3281 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:01.258621 kubelet[3281]: E1213 01:55:01.258365 3281 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" Dec 13 01:55:01.258621 kubelet[3281]: E1213 01:55:01.258401 3281 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" Dec 13 01:55:01.259043 kubelet[3281]: E1213 01:55:01.258487 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5bbbb5bd5-jlszc_calico-apiserver(439df88a-c40e-4828-a6b8-8bfb2c3a7727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5bbbb5bd5-jlszc_calico-apiserver(439df88a-c40e-4828-a6b8-8bfb2c3a7727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" podUID="439df88a-c40e-4828-a6b8-8bfb2c3a7727" Dec 13 01:55:01.814384 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0-shm.mount: Deactivated successfully. Dec 13 01:55:01.814556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c-shm.mount: Deactivated successfully. Dec 13 01:55:01.815001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6-shm.mount: Deactivated successfully. Dec 13 01:55:01.815137 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad-shm.mount: Deactivated successfully. Dec 13 01:55:01.895286 kubelet[3281]: I1213 01:55:01.895217 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:01.897424 containerd[2034]: time="2024-12-13T01:55:01.896455167Z" level=info msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" Dec 13 01:55:01.897424 containerd[2034]: time="2024-12-13T01:55:01.896769243Z" level=info msg="Ensure that sandbox cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0 in task-service has been cleanup successfully" Dec 13 01:55:01.902159 kubelet[3281]: I1213 01:55:01.902098 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:01.907350 containerd[2034]: time="2024-12-13T01:55:01.905427963Z" level=info msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" Dec 13 01:55:01.907350 containerd[2034]: time="2024-12-13T01:55:01.906027519Z" level=info msg="Ensure that sandbox ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6 in task-service has been cleanup successfully" Dec 13 01:55:01.911113 kubelet[3281]: I1213 01:55:01.910520 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:01.912792 containerd[2034]: time="2024-12-13T01:55:01.912738975Z" level=info msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" Dec 13 01:55:01.915379 containerd[2034]: time="2024-12-13T01:55:01.915320151Z" level=info msg="Ensure that sandbox 1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8 in task-service has been cleanup successfully" Dec 13 01:55:01.920295 kubelet[3281]: I1213 01:55:01.920245 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:01.924196 containerd[2034]: time="2024-12-13T01:55:01.922179063Z" level=info msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" 
Dec 13 01:55:01.925621 containerd[2034]: time="2024-12-13T01:55:01.924972579Z" level=info msg="Ensure that sandbox 8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356 in task-service has been cleanup successfully" Dec 13 01:55:01.930475 kubelet[3281]: I1213 01:55:01.929843 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:01.936005 containerd[2034]: time="2024-12-13T01:55:01.934999071Z" level=info msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" Dec 13 01:55:01.940981 containerd[2034]: time="2024-12-13T01:55:01.940906912Z" level=info msg="Ensure that sandbox 1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad in task-service has been cleanup successfully" Dec 13 01:55:01.942906 kubelet[3281]: I1213 01:55:01.941819 3281 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:01.948048 containerd[2034]: time="2024-12-13T01:55:01.947975992Z" level=info msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" Dec 13 01:55:01.948369 containerd[2034]: time="2024-12-13T01:55:01.948311500Z" level=info msg="Ensure that sandbox d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c in task-service has been cleanup successfully" Dec 13 01:55:02.088224 containerd[2034]: time="2024-12-13T01:55:02.088048968Z" level=error msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" failed" error="failed to destroy network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.091625 kubelet[3281]: E1213 01:55:02.090297 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:02.091625 kubelet[3281]: E1213 01:55:02.090420 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6"} Dec 13 01:55:02.091625 kubelet[3281]: E1213 01:55:02.090495 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"49193545-8953-4b0d-8299-dd1e3ecf467d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.091625 kubelet[3281]: E1213 01:55:02.090550 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"49193545-8953-4b0d-8299-dd1e3ecf467d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bthsx" podUID="49193545-8953-4b0d-8299-dd1e3ecf467d" Dec 13 01:55:02.099839 containerd[2034]: time="2024-12-13T01:55:02.099771144Z" level=error msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" failed" error="failed to destroy network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.100357 kubelet[3281]: E1213 01:55:02.100297 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:02.100487 kubelet[3281]: E1213 01:55:02.100381 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0"} Dec 13 01:55:02.100487 kubelet[3281]: E1213 01:55:02.100444 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.100702 kubelet[3281]: E1213 01:55:02.100502 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f62b98e-3864-4ed3-b68c-e8d11f28b312\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d2z4r" podUID="9f62b98e-3864-4ed3-b68c-e8d11f28b312" Dec 13 01:55:02.135004 containerd[2034]: time="2024-12-13T01:55:02.134924052Z" level=error msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" failed" error="failed to destroy network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.135481 kubelet[3281]: E1213 01:55:02.135393 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:02.135671 kubelet[3281]: E1213 01:55:02.135635 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356"} Dec 13 01:55:02.135817 kubelet[3281]: E1213 01:55:02.135785 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.135928 kubelet[3281]: E1213 01:55:02.135872 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" podUID="b5d5aafc-d67a-4e3e-a8b1-c9d750914db8" Dec 13 01:55:02.139657 containerd[2034]: time="2024-12-13T01:55:02.139533109Z" level=error msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" failed" error="failed to destroy network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.140005 kubelet[3281]: E1213 01:55:02.139963 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:02.140099 kubelet[3281]: E1213 01:55:02.140032 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad"} Dec 13 01:55:02.140774 kubelet[3281]: E1213 01:55:02.140707 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"439df88a-c40e-4828-a6b8-8bfb2c3a7727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.140918 kubelet[3281]: E1213 01:55:02.140833 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"439df88a-c40e-4828-a6b8-8bfb2c3a7727\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" podUID="439df88a-c40e-4828-a6b8-8bfb2c3a7727" Dec 13 01:55:02.142500 containerd[2034]: time="2024-12-13T01:55:02.142413865Z" level=error msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" failed" error="failed to destroy network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.143531 kubelet[3281]: E1213 01:55:02.142776 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:02.143531 kubelet[3281]: E1213 01:55:02.142842 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c"} Dec 13 01:55:02.143531 kubelet[3281]: E1213 01:55:02.142910 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"235db396-35e0-49e0-bcd3-929b0c0c50eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.143531 kubelet[3281]: E1213 01:55:02.142960 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"235db396-35e0-49e0-bcd3-929b0c0c50eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" podUID="235db396-35e0-49e0-bcd3-929b0c0c50eb" Dec 13 01:55:02.144167 containerd[2034]: time="2024-12-13T01:55:02.144108241Z" level=error msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" failed" error="failed to destroy network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:55:02.145737 kubelet[3281]: E1213 01:55:02.145672 3281 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:02.145857 kubelet[3281]: E1213 01:55:02.145780 3281 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8"} Dec 13 01:55:02.145910 kubelet[3281]: E1213 01:55:02.145868 3281 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8fb4856f-34b3-468a-a336-454740015a6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:55:02.146034 kubelet[3281]: E1213 01:55:02.145952 3281 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8fb4856f-34b3-468a-a336-454740015a6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-fql84" podUID="8fb4856f-34b3-468a-a336-454740015a6b" Dec 13 01:55:07.187726 systemd[1]: Started sshd@7-172.31.19.153:22-139.178.68.195:52550.service - OpenSSH per-connection server daemon (139.178.68.195:52550). Dec 13 01:55:07.385731 sshd[4652]: Accepted publickey for core from 139.178.68.195 port 52550 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:07.388860 sshd[4652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:07.401176 systemd-logind[2005]: New session 8 of user core. Dec 13 01:55:07.407922 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:55:07.740033 sshd[4652]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:07.750440 systemd[1]: sshd@7-172.31.19.153:22-139.178.68.195:52550.service: Deactivated successfully. Dec 13 01:55:07.756720 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:55:07.760551 systemd-logind[2005]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:55:07.765172 systemd-logind[2005]: Removed session 8. Dec 13 01:55:09.737912 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617500503.mount: Deactivated successfully. 
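Every KillPodSandbox failure above bottoms out in the same check: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and refuses to tear down sandbox networking while the stat returns ENOENT. A minimal Go sketch of that readiness probe, illustrative only rather than Calico's actual source (the remediation text is copied from the log):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    // nodenameFile is the path quoted verbatim in the errors above.
    const nodenameFile = "/var/lib/calico/nodename"

    func main() {
        if _, err := os.Stat(nodenameFile); err != nil {
            if errors.Is(err, fs.ErrNotExist) {
                // Same wording the plugin logs while calico/node is not ready.
                fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
                os.Exit(1)
            }
            fmt.Println("unexpected error:", err)
            os.Exit(1)
        }
        name, err := os.ReadFile(nodenameFile)
        if err != nil {
            fmt.Println("read error:", err)
            os.Exit(1)
        }
        fmt.Printf("node name: %s\n", name)
    }

The failures stop recurring once calico-node starts at 01:55:10 below, consistent with the file appearing at that point.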
Dec 13 01:55:09.823632 containerd[2034]: time="2024-12-13T01:55:09.823022387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.825079 containerd[2034]: time="2024-12-13T01:55:09.824753939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:55:09.827117 containerd[2034]: time="2024-12-13T01:55:09.827055119Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.835069 containerd[2034]: time="2024-12-13T01:55:09.834998759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:09.840950 containerd[2034]: time="2024-12-13T01:55:09.840746855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 8.944466877s" Dec 13 01:55:09.840950 containerd[2034]: time="2024-12-13T01:55:09.840815243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:55:09.871674 containerd[2034]: time="2024-12-13T01:55:09.871227371Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:55:09.913493 containerd[2034]: time="2024-12-13T01:55:09.913346363Z" level=info msg="CreateContainer within sandbox \"23008bdf334e0399d51b84816c028193b9006b155ca939c2551b89e0d81c8a06\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe\"" Dec 13 01:55:09.915973 containerd[2034]: time="2024-12-13T01:55:09.914563463Z" level=info msg="StartContainer for \"e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe\"" Dec 13 01:55:09.960942 systemd[1]: Started cri-containerd-e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe.scope - libcontainer container e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe. Dec 13 01:55:10.042634 containerd[2034]: time="2024-12-13T01:55:10.042172340Z" level=info msg="StartContainer for \"e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe\" returns successfully" Dec 13 01:55:10.170535 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:55:10.170932 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
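The Pulled entry above reports both the image size and the wall-clock pull time, so the effective transfer rate falls out directly: 137671624 bytes over 8.944466877 s is roughly 15.4 MB/s. A one-line check of the arithmetic in Go:

    package main

    import "fmt"

    func main() {
        const sizeBytes = 137671624.0 // "size" from the Pulled entry above
        const seconds = 8.944466877   // pull duration from the same entry
        fmt.Printf("average pull throughput: %.1f MB/s\n", sizeBytes/seconds/1e6) // ~15.4 MB/s
    }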
Dec 13 01:55:11.021849 kubelet[3281]: I1213 01:55:11.021481 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-j64x9" podStartSLOduration=3.358025439 podStartE2EDuration="23.021389205s" podCreationTimestamp="2024-12-13 01:54:48 +0000 UTC" firstStartedPulling="2024-12-13 01:54:50.177988417 +0000 UTC m=+22.742577822" lastFinishedPulling="2024-12-13 01:55:09.841352183 +0000 UTC m=+42.405941588" observedRunningTime="2024-12-13 01:55:11.013865409 +0000 UTC m=+43.578454910" watchObservedRunningTime="2024-12-13 01:55:11.021389205 +0000 UTC m=+43.585978622" Dec 13 01:55:12.080610 systemd[1]: run-containerd-runc-k8s.io-e55b803db829f63990f24fb5432c0a5427082c800630b2fe7350a2d225c6ddbe-runc.kGt5aj.mount: Deactivated successfully. Dec 13 01:55:12.420628 kernel: bpftool[4908]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:55:12.659630 containerd[2034]: time="2024-12-13T01:55:12.659515609Z" level=info msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" Dec 13 01:55:12.663498 containerd[2034]: time="2024-12-13T01:55:12.659515537Z" level=info msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" Dec 13 01:55:12.787156 systemd[1]: Started sshd@8-172.31.19.153:22-139.178.68.195:52554.service - OpenSSH per-connection server daemon (139.178.68.195:52554). Dec 13 01:55:12.812985 systemd-networkd[1946]: vxlan.calico: Link UP Dec 13 01:55:12.813002 systemd-networkd[1946]: vxlan.calico: Gained carrier Dec 13 01:55:12.814994 (udev-worker)[4717]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:12.886234 (udev-worker)[4721]: Network interface NamePolicy= disabled on kernel command line. Dec 13 01:55:13.063592 sshd[4964]: Accepted publickey for core from 139.178.68.195 port 52554 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:13.068460 sshd[4964]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:13.084748 systemd-logind[2005]: New session 9 of user core. Dec 13 01:55:13.090881 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.915 [INFO][4949] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.916 [INFO][4949] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" iface="eth0" netns="/var/run/netns/cni-ed60deea-d387-21bd-d33a-3e16a7f6c821" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.926 [INFO][4949] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" iface="eth0" netns="/var/run/netns/cni-ed60deea-d387-21bd-d33a-3e16a7f6c821" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.956 [INFO][4949] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" iface="eth0" netns="/var/run/netns/cni-ed60deea-d387-21bd-d33a-3e16a7f6c821" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.957 [INFO][4949] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:12.957 [INFO][4949] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.119 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.120 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.121 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.140 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.140 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.144 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:13.165362 containerd[2034]: 2024-12-13 01:55:13.158 [INFO][4949] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:13.165362 containerd[2034]: time="2024-12-13T01:55:13.161684915Z" level=info msg="TearDown network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" successfully" Dec 13 01:55:13.165362 containerd[2034]: time="2024-12-13T01:55:13.161764559Z" level=info msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" returns successfully" Dec 13 01:55:13.172690 containerd[2034]: time="2024-12-13T01:55:13.170624891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-jlszc,Uid:439df88a-c40e-4828-a6b8-8bfb2c3a7727,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:13.172440 systemd[1]: run-netns-cni\x2ded60deea\x2dd387\x2d21bd\x2dd33a\x2d3e16a7f6c821.mount: Deactivated successfully. Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.927 [INFO][4948] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.935 [INFO][4948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" iface="eth0" netns="/var/run/netns/cni-be96f1d8-e492-7ce8-c220-3f26e4d30951" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.942 [INFO][4948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" iface="eth0" netns="/var/run/netns/cni-be96f1d8-e492-7ce8-c220-3f26e4d30951" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.955 [INFO][4948] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" iface="eth0" netns="/var/run/netns/cni-be96f1d8-e492-7ce8-c220-3f26e4d30951" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.955 [INFO][4948] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:12.955 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.131 [INFO][4988] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.136 [INFO][4988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.145 [INFO][4988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.171 [WARNING][4988] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.171 [INFO][4988] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.178 [INFO][4988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:13.189743 containerd[2034]: 2024-12-13 01:55:13.182 [INFO][4948] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:13.196614 containerd[2034]: time="2024-12-13T01:55:13.193898327Z" level=info msg="TearDown network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" successfully" Dec 13 01:55:13.196614 containerd[2034]: time="2024-12-13T01:55:13.193955291Z" level=info msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" returns successfully" Dec 13 01:55:13.196614 containerd[2034]: time="2024-12-13T01:55:13.196138559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-svpxb,Uid:235db396-35e0-49e0-bcd3-929b0c0c50eb,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:55:13.198950 systemd[1]: run-netns-cni\x2dbe96f1d8\x2de492\x2d7ce8\x2dc220\x2d3f26e4d30951.mount: Deactivated successfully. Dec 13 01:55:13.531967 sshd[4964]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:13.542553 systemd[1]: sshd@8-172.31.19.153:22-139.178.68.195:52554.service: Deactivated successfully. Dec 13 01:55:13.551283 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:55:13.560994 systemd-logind[2005]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:55:13.567989 systemd-logind[2005]: Removed session 9. Dec 13 01:55:13.686024 containerd[2034]: time="2024-12-13T01:55:13.685878902Z" level=info msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" Dec 13 01:55:13.767015 systemd-networkd[1946]: cali16ab40fb99b: Link UP Dec 13 01:55:13.771040 systemd-networkd[1946]: cali16ab40fb99b: Gained carrier Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.438 [INFO][5020] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0 calico-apiserver-5bbbb5bd5- calico-apiserver 235db396-35e0-49e0-bcd3-929b0c0c50eb 841 0 2024-12-13 01:54:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbbb5bd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-153 calico-apiserver-5bbbb5bd5-svpxb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali16ab40fb99b [] []}} ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.438 [INFO][5020] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.568 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" HandleID="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.641 [INFO][5034] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" HandleID="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000283430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-153", "pod":"calico-apiserver-5bbbb5bd5-svpxb", "timestamp":"2024-12-13 01:55:13.568416781 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.641 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.641 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.641 [INFO][5034] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.648 [INFO][5034] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.663 [INFO][5034] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.692 [INFO][5034] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.701 [INFO][5034] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.710 [INFO][5034] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.710 [INFO][5034] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.715 [INFO][5034] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4 Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.726 [INFO][5034] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.739 [INFO][5034] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.1/26] block=192.168.49.0/26 handle="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.740 [INFO][5034] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.1/26] handle="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" host="ip-172-31-19-153" Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.740 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:13.845908 containerd[2034]: 2024-12-13 01:55:13.740 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.1/26] IPv6=[] ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" HandleID="k8s-pod-network.2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.752 [INFO][5020] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"235db396-35e0-49e0-bcd3-929b0c0c50eb", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"calico-apiserver-5bbbb5bd5-svpxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab40fb99b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.752 [INFO][5020] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.1/32] ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.753 [INFO][5020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16ab40fb99b ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.780 [INFO][5020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.781 [INFO][5020] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"235db396-35e0-49e0-bcd3-929b0c0c50eb", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4", Pod:"calico-apiserver-5bbbb5bd5-svpxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab40fb99b", MAC:"26:62:f6:e0:12:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:13.847905 containerd[2034]: 2024-12-13 01:55:13.825 [INFO][5020] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-svpxb" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:13.925628 containerd[2034]: time="2024-12-13T01:55:13.923369595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:13.925628 containerd[2034]: time="2024-12-13T01:55:13.924233187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:13.925628 containerd[2034]: time="2024-12-13T01:55:13.924279927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.925628 containerd[2034]: time="2024-12-13T01:55:13.924435351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:13.959763 systemd-networkd[1946]: cali2785bdfd1c5: Link UP Dec 13 01:55:13.960264 systemd-networkd[1946]: cali2785bdfd1c5: Gained carrier Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.452 [INFO][5007] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0 calico-apiserver-5bbbb5bd5- calico-apiserver 439df88a-c40e-4828-a6b8-8bfb2c3a7727 840 0 2024-12-13 01:54:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5bbbb5bd5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-153 calico-apiserver-5bbbb5bd5-jlszc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2785bdfd1c5 [] []}} ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.453 [INFO][5007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.628 [INFO][5038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" HandleID="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.660 [INFO][5038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" HandleID="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e04d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-153", "pod":"calico-apiserver-5bbbb5bd5-jlszc", "timestamp":"2024-12-13 01:55:13.628466966 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.663 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.741 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.741 [INFO][5038] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.750 [INFO][5038] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.765 [INFO][5038] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.792 [INFO][5038] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.803 [INFO][5038] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.843 [INFO][5038] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.844 [INFO][5038] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.855 [INFO][5038] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079 Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.873 [INFO][5038] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.911 [INFO][5038] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.2/26] block=192.168.49.0/26 handle="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.911 [INFO][5038] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.2/26] handle="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" host="ip-172-31-19-153" Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.911 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:14.043765 containerd[2034]: 2024-12-13 01:55:13.911 [INFO][5038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.2/26] IPv6=[] ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" HandleID="k8s-pod-network.92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:13.938 [INFO][5007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"439df88a-c40e-4828-a6b8-8bfb2c3a7727", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"calico-apiserver-5bbbb5bd5-jlszc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2785bdfd1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:13.940 [INFO][5007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.2/32] ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:13.941 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2785bdfd1c5 ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:13.961 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:13.965 [INFO][5007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint
ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"439df88a-c40e-4828-a6b8-8bfb2c3a7727", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079", Pod:"calico-apiserver-5bbbb5bd5-jlszc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2785bdfd1c5", MAC:"02:ff:61:2c:1d:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.044886 containerd[2034]: 2024-12-13 01:55:14.038 [INFO][5007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079" Namespace="calico-apiserver" Pod="calico-apiserver-5bbbb5bd5-jlszc" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:14.145620 containerd[2034]: time="2024-12-13T01:55:14.143644212Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:14.145620 containerd[2034]: time="2024-12-13T01:55:14.143753100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:14.145620 containerd[2034]: time="2024-12-13T01:55:14.143793276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.145620 containerd[2034]: time="2024-12-13T01:55:14.143990556Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.180526 systemd[1]: Started cri-containerd-2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4.scope - libcontainer container 2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4. Dec 13 01:55:14.215899 systemd[1]: Started cri-containerd-92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079.scope - libcontainer container 92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079.
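Both apiserver pods drew their addresses from 192.168.49.0/26, the block this host confirmed affinity for in the ipam entries above; a /26 leaves six host bits, i.e. 64 addresses per block, and sequential assignment accounts for the .1 (svpxb) and .2 (jlszc) claims. A short sketch of the block arithmetic, using only the Go standard library rather than Calico's IPAM:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The affine block from the "Trying affinity for 192.168.49.0/26" entries.
        block := netip.MustParsePrefix("192.168.49.0/26")
        // 32 - 26 = 6 host bits, so the block spans 64 addresses.
        fmt.Println("addresses per block:", 1<<(32-block.Bits()))
        // Walking up from the network address reproduces the claims in order.
        addr := block.Addr().Next()
        for i := 0; i < 2; i++ {
            fmt.Println("next assignment:", addr) // 192.168.49.1, then .2
            addr = addr.Next()
        }
    }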
Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.000 [INFO][5077] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.001 [INFO][5077] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" iface="eth0" netns="/var/run/netns/cni-189abf78-bbe8-0b90-6d07-a68e676e48ec" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.002 [INFO][5077] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" iface="eth0" netns="/var/run/netns/cni-189abf78-bbe8-0b90-6d07-a68e676e48ec" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.006 [INFO][5077] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" iface="eth0" netns="/var/run/netns/cni-189abf78-bbe8-0b90-6d07-a68e676e48ec" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.006 [INFO][5077] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.006 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.133 [INFO][5135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.133 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.133 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.186 [WARNING][5135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.186 [INFO][5135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.204 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:14.227349 containerd[2034]: 2024-12-13 01:55:14.213 [INFO][5077] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:14.232868 systemd[1]: run-netns-cni\x2d189abf78\x2dbbe8\x2d0b90\x2d6d07\x2da68e676e48ec.mount: Deactivated successfully. 
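The [WARNING] "Asked to release address but it doesn't exist. Ignoring" above shows the release path is deliberately idempotent: a missing IPAM handle is logged and skipped rather than surfaced as an error, so repeating a teardown of the same sandbox cannot fail at this step. A toy sketch of that behaviour, with hypothetical names and an in-memory map standing in for the real datastore:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound is a hypothetical sentinel; the real plugin uses its own types.
    var errNotFound = errors.New("address not found")

    // allocations stands in for the IPAM datastore, keyed by handle ID.
    var allocations = map[string]string{}

    func releaseByHandle(handleID string) error {
        if _, ok := allocations[handleID]; !ok {
            return errNotFound
        }
        delete(allocations, handleID)
        return nil
    }

    func teardown(handleID string) {
        if err := releaseByHandle(handleID); errors.Is(err, errNotFound) {
            // Mirror the log: warn and carry on so repeated teardowns stay safe.
            fmt.Println("WARNING: asked to release address but it doesn't exist. Ignoring")
            return
        }
        fmt.Println("released handle", handleID)
    }

    func main() {
        allocations["k8s-pod-network.ff68ee48"] = "192.168.49.4" // illustrative entry
        teardown("k8s-pod-network.ff68ee48") // first teardown releases the address
        teardown("k8s-pod-network.ff68ee48") // second is a warning-and-skip, as above
    }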
Dec 13 01:55:14.235494 containerd[2034]: time="2024-12-13T01:55:14.234783301Z" level=info msg="TearDown network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" successfully" Dec 13 01:55:14.235494 containerd[2034]: time="2024-12-13T01:55:14.234836773Z" level=info msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" returns successfully" Dec 13 01:55:14.236903 containerd[2034]: time="2024-12-13T01:55:14.235988461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bthsx,Uid:49193545-8953-4b0d-8299-dd1e3ecf467d,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:14.432392 containerd[2034]: time="2024-12-13T01:55:14.431853638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-svpxb,Uid:235db396-35e0-49e0-bcd3-929b0c0c50eb,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4\"" Dec 13 01:55:14.438864 containerd[2034]: time="2024-12-13T01:55:14.438401222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:14.477628 containerd[2034]: time="2024-12-13T01:55:14.477375782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5bbbb5bd5-jlszc,Uid:439df88a-c40e-4828-a6b8-8bfb2c3a7727,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079\"" Dec 13 01:55:14.600299 systemd-networkd[1946]: cali7f93bf97d1c: Link UP Dec 13 01:55:14.600791 systemd-networkd[1946]: cali7f93bf97d1c: Gained carrier Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.456 [INFO][5199] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0 coredns-76f75df574- kube-system 49193545-8953-4b0d-8299-dd1e3ecf467d 856 0 2024-12-13 01:54:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-153 coredns-76f75df574-bthsx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7f93bf97d1c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.456 [INFO][5199] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.526 [INFO][5225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" HandleID="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.543 [INFO][5225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" 
HandleID="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb500), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-153", "pod":"coredns-76f75df574-bthsx", "timestamp":"2024-12-13 01:55:14.526119554 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.544 [INFO][5225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.544 [INFO][5225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.544 [INFO][5225] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.547 [INFO][5225] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.554 [INFO][5225] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.562 [INFO][5225] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.566 [INFO][5225] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.570 [INFO][5225] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.571 [INFO][5225] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.574 [INFO][5225] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.580 [INFO][5225] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.591 [INFO][5225] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.3/26] block=192.168.49.0/26 handle="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.591 [INFO][5225] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.3/26] handle="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" host="ip-172-31-19-153" Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.591 [INFO][5225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:14.629525 containerd[2034]: 2024-12-13 01:55:14.591 [INFO][5225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.3/26] IPv6=[] ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" HandleID="k8s-pod-network.8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.595 [INFO][5199] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"49193545-8953-4b0d-8299-dd1e3ecf467d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"coredns-76f75df574-bthsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f93bf97d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.595 [INFO][5199] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.3/32] ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.595 [INFO][5199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f93bf97d1c ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.602 [INFO][5199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx"
WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.603 [INFO][5199] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"49193545-8953-4b0d-8299-dd1e3ecf467d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d", Pod:"coredns-76f75df574-bthsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f93bf97d1c", MAC:"22:e0:59:35:b7:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:14.630879 containerd[2034]: 2024-12-13 01:55:14.625 [INFO][5199] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d" Namespace="kube-system" Pod="coredns-76f75df574-bthsx" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:14.659417 containerd[2034]: time="2024-12-13T01:55:14.658442391Z" level=info msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" Dec 13 01:55:14.689717 containerd[2034]: time="2024-12-13T01:55:14.688607331Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:14.689717 containerd[2034]: time="2024-12-13T01:55:14.688848747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:14.689717 containerd[2034]: time="2024-12-13T01:55:14.688909791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.690374 containerd[2034]: time="2024-12-13T01:55:14.689647527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:14.775287 systemd[1]: Started cri-containerd-8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d.scope - libcontainer container 8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d. Dec 13 01:55:14.871746 systemd-networkd[1946]: vxlan.calico: Gained IPv6LL Dec 13 01:55:14.886065 containerd[2034]: time="2024-12-13T01:55:14.885994420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bthsx,Uid:49193545-8953-4b0d-8299-dd1e3ecf467d,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d\"" Dec 13 01:55:14.893278 containerd[2034]: time="2024-12-13T01:55:14.893201272Z" level=info msg="CreateContainer within sandbox \"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.814 [INFO][5268] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.815 [INFO][5268] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" iface="eth0" netns="/var/run/netns/cni-aa8d60d5-f433-3843-11b3-5061f5ea94f4" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.816 [INFO][5268] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" iface="eth0" netns="/var/run/netns/cni-aa8d60d5-f433-3843-11b3-5061f5ea94f4" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.817 [INFO][5268] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" iface="eth0" netns="/var/run/netns/cni-aa8d60d5-f433-3843-11b3-5061f5ea94f4" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.817 [INFO][5268] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.818 [INFO][5268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.878 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.878 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.878 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.900 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.900 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.903 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:14.909869 containerd[2034]: 2024-12-13 01:55:14.906 [INFO][5268] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:14.911090 containerd[2034]: time="2024-12-13T01:55:14.910118152Z" level=info msg="TearDown network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" successfully" Dec 13 01:55:14.911090 containerd[2034]: time="2024-12-13T01:55:14.910162300Z" level=info msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" returns successfully" Dec 13 01:55:14.912052 containerd[2034]: time="2024-12-13T01:55:14.911705320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fql84,Uid:8fb4856f-34b3-468a-a336-454740015a6b,Namespace:kube-system,Attempt:1,}" Dec 13 01:55:14.927006 containerd[2034]: time="2024-12-13T01:55:14.926924044Z" level=info msg="CreateContainer within sandbox \"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd934653c0b3f986c6d1f458079405350f177c4bb2b20ed16bc77203b3ef651a\"" Dec 13 01:55:14.928949 containerd[2034]: time="2024-12-13T01:55:14.928850548Z" level=info msg="StartContainer for \"bd934653c0b3f986c6d1f458079405350f177c4bb2b20ed16bc77203b3ef651a\"" Dec 13 01:55:14.998300 systemd[1]: Started cri-containerd-bd934653c0b3f986c6d1f458079405350f177c4bb2b20ed16bc77203b3ef651a.scope - libcontainer container bd934653c0b3f986c6d1f458079405350f177c4bb2b20ed16bc77203b3ef651a. Dec 13 01:55:15.086650 containerd[2034]: time="2024-12-13T01:55:15.086281453Z" level=info msg="StartContainer for \"bd934653c0b3f986c6d1f458079405350f177c4bb2b20ed16bc77203b3ef651a\" returns successfully" Dec 13 01:55:15.198398 systemd[1]: run-netns-cni\x2daa8d60d5\x2df433\x2d3843\x2d11b3\x2d5061f5ea94f4.mount: Deactivated successfully. 
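The DEL path above is deliberately idempotent: the release by handle ID finds nothing (the WARNING at ipam_plugin.go 429), so the plugin logs it, falls back to releasing by workload ID, and lets the teardown report success anyway. A sketch of that error-tolerant release under the same simplified model (all names here are illustrative, not Calico's API):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

var errNotFound = errors.New("allocation does not exist")

// store is a hypothetical stand-in for the IPAM datastore behind the plugin.
type store struct {
	mu         sync.Mutex // host-wide IPAM lock
	byHandle   map[string]string
	byWorkload map[string]string
}

// releaseForSandbox mirrors the logged DEL flow: under the lock, try the
// handle ID first; if that allocation is already gone, warn and fall back to
// the workload ID so a repeated teardown can never fail the CNI DEL.
func (s *store) releaseForSandbox(handleID, workloadID string) {
	s.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if err := release(s.byHandle, handleID); errors.Is(err, errNotFound) {
		// "Asked to release address but it doesn't exist. Ignoring"
		fmt.Println("WARNING: address not found by handle, trying workload ID")
		_ = release(s.byWorkload, workloadID) // best effort
	}
}

func release(m map[string]string, key string) error {
	if _, ok := m[key]; !ok {
		return errNotFound
	}
	delete(m, key)
	return nil
}

func main() {
	s := &store{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	// A DEL for a sandbox whose address was already released still succeeds.
	s.releaseForSandbox("k8s-pod-network.example-handle", "example-workload")
}
```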
Dec 13 01:55:15.255221 systemd-networkd[1946]: calif7be845a663: Link UP Dec 13 01:55:15.258023 systemd-networkd[1946]: calif7be845a663: Gained carrier Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.055 [INFO][5316] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0 coredns-76f75df574- kube-system 8fb4856f-34b3-468a-a336-454740015a6b 869 0 2024-12-13 01:54:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-153 coredns-76f75df574-fql84 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7be845a663 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.055 [INFO][5316] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.141 [INFO][5349] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" HandleID="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.163 [INFO][5349] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" HandleID="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003168f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-153", "pod":"coredns-76f75df574-fql84", "timestamp":"2024-12-13 01:55:15.141317233 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.163 [INFO][5349] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.164 [INFO][5349] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.165 [INFO][5349] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.170 [INFO][5349] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.189 [INFO][5349] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.221 [INFO][5349] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.224 [INFO][5349] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.228 [INFO][5349] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.228 [INFO][5349] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.231 [INFO][5349] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254 Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.237 [INFO][5349] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.246 [INFO][5349] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.4/26] block=192.168.49.0/26 handle="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.246 [INFO][5349] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.4/26] handle="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" host="ip-172-31-19-153" Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.247 [INFO][5349] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:55:15.283836 containerd[2034]: 2024-12-13 01:55:15.247 [INFO][5349] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.4/26] IPv6=[] ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" HandleID="k8s-pod-network.8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.249 [INFO][5316] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fb4856f-34b3-468a-a336-454740015a6b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"coredns-76f75df574-fql84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7be845a663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.250 [INFO][5316] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.4/32] ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.250 [INFO][5316] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7be845a663 ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.255 [INFO][5316] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" 
WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.257 [INFO][5316] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fb4856f-34b3-468a-a336-454740015a6b", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254", Pod:"coredns-76f75df574-fql84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7be845a663", MAC:"16:f3:9f:42:9e:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:15.285056 containerd[2034]: 2024-12-13 01:55:15.278 [INFO][5316] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254" Namespace="kube-system" Pod="coredns-76f75df574-fql84" WorkloadEndpoint="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:15.372398 containerd[2034]: time="2024-12-13T01:55:15.372224750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:15.372398 containerd[2034]: time="2024-12-13T01:55:15.372315974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:15.372398 containerd[2034]: time="2024-12-13T01:55:15.372357422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.373188 containerd[2034]: time="2024-12-13T01:55:15.372521534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:15.417874 systemd[1]: Started cri-containerd-8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254.scope - libcontainer container 8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254. Dec 13 01:55:15.490006 containerd[2034]: time="2024-12-13T01:55:15.489948039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fql84,Uid:8fb4856f-34b3-468a-a336-454740015a6b,Namespace:kube-system,Attempt:1,} returns sandbox id \"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254\"" Dec 13 01:55:15.496959 containerd[2034]: time="2024-12-13T01:55:15.496654263Z" level=info msg="CreateContainer within sandbox \"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:55:15.527718 containerd[2034]: time="2024-12-13T01:55:15.527440659Z" level=info msg="CreateContainer within sandbox \"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a56656c9036f385f19d4cf60165e05858f71d6296571584875ac1fa6785c09ff\"" Dec 13 01:55:15.533159 containerd[2034]: time="2024-12-13T01:55:15.530347407Z" level=info msg="StartContainer for \"a56656c9036f385f19d4cf60165e05858f71d6296571584875ac1fa6785c09ff\"" Dec 13 01:55:15.581887 systemd[1]: Started cri-containerd-a56656c9036f385f19d4cf60165e05858f71d6296571584875ac1fa6785c09ff.scope - libcontainer container a56656c9036f385f19d4cf60165e05858f71d6296571584875ac1fa6785c09ff. Dec 13 01:55:15.645074 containerd[2034]: time="2024-12-13T01:55:15.645004672Z" level=info msg="StartContainer for \"a56656c9036f385f19d4cf60165e05858f71d6296571584875ac1fa6785c09ff\" returns successfully" Dec 13 01:55:15.664048 containerd[2034]: time="2024-12-13T01:55:15.663971656Z" level=info msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" Dec 13 01:55:15.766851 systemd-networkd[1946]: cali16ab40fb99b: Gained IPv6LL Dec 13 01:55:15.894108 systemd-networkd[1946]: cali7f93bf97d1c: Gained IPv6LL Dec 13 01:55:15.894614 systemd-networkd[1946]: cali2785bdfd1c5: Gained IPv6LL Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.827 [INFO][5465] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.827 [INFO][5465] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" iface="eth0" netns="/var/run/netns/cni-fc9e3ddb-0dc2-2cdb-0a3b-ae271418c6ce" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.828 [INFO][5465] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" iface="eth0" netns="/var/run/netns/cni-fc9e3ddb-0dc2-2cdb-0a3b-ae271418c6ce" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.830 [INFO][5465] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" iface="eth0" netns="/var/run/netns/cni-fc9e3ddb-0dc2-2cdb-0a3b-ae271418c6ce" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.830 [INFO][5465] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.830 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.885 [INFO][5472] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.885 [INFO][5472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.886 [INFO][5472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.902 [WARNING][5472] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.902 [INFO][5472] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.905 [INFO][5472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:15.910257 containerd[2034]: 2024-12-13 01:55:15.907 [INFO][5465] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:15.911780 containerd[2034]: time="2024-12-13T01:55:15.911051717Z" level=info msg="TearDown network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" successfully" Dec 13 01:55:15.911780 containerd[2034]: time="2024-12-13T01:55:15.911098805Z" level=info msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" returns successfully" Dec 13 01:55:15.912392 containerd[2034]: time="2024-12-13T01:55:15.912344837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f8859b4-fmpls,Uid:b5d5aafc-d67a-4e3e-a8b1-c9d750914db8,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:16.167940 kubelet[3281]: I1213 01:55:16.165745 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fql84" podStartSLOduration=37.16568183 podStartE2EDuration="37.16568183s" podCreationTimestamp="2024-12-13 01:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:16.09933131 +0000 UTC m=+48.663920727" watchObservedRunningTime="2024-12-13 01:55:16.16568183 +0000 UTC m=+48.730271223" Dec 13 01:55:16.198115 systemd[1]: run-netns-cni\x2dfc9e3ddb\x2d0dc2\x2d2cdb\x2d0a3b\x2dae271418c6ce.mount: Deactivated successfully. Dec 13 01:55:16.241667 kubelet[3281]: I1213 01:55:16.241231 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-bthsx" podStartSLOduration=37.241172523 podStartE2EDuration="37.241172523s" podCreationTimestamp="2024-12-13 01:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:55:16.240238827 +0000 UTC m=+48.804828232" watchObservedRunningTime="2024-12-13 01:55:16.241172523 +0000 UTC m=+48.805761928" Dec 13 01:55:16.252168 systemd-networkd[1946]: calic5e2eb4190f: Link UP Dec 13 01:55:16.255784 systemd-networkd[1946]: calic5e2eb4190f: Gained carrier Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:15.992 [INFO][5479] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0 calico-kube-controllers-75f8859b4- calico-system b5d5aafc-d67a-4e3e-a8b1-c9d750914db8 881 0 2024-12-13 01:54:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75f8859b4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-153 calico-kube-controllers-75f8859b4-fmpls eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic5e2eb4190f [] []}} ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:15.993 [INFO][5479] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" 
WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.042 [INFO][5490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" HandleID="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.071 [INFO][5490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" HandleID="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-153", "pod":"calico-kube-controllers-75f8859b4-fmpls", "timestamp":"2024-12-13 01:55:16.042940202 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.075 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.075 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.076 [INFO][5490] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.088 [INFO][5490] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.121 [INFO][5490] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.136 [INFO][5490] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.144 [INFO][5490] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.151 [INFO][5490] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.151 [INFO][5490] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.169 [INFO][5490] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392 Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.200 [INFO][5490] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.237 [INFO][5490] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.5/26] block=192.168.49.0/26 handle="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.237 [INFO][5490] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.5/26] handle="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" host="ip-172-31-19-153" Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.237 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:16.297716 containerd[2034]: 2024-12-13 01:55:16.237 [INFO][5490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.5/26] IPv6=[] ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" HandleID="k8s-pod-network.abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.243 [INFO][5479] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0", GenerateName:"calico-kube-controllers-75f8859b4-", Namespace:"calico-system", SelfLink:"", UID:"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f8859b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"calico-kube-controllers-75f8859b4-fmpls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5e2eb4190f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.243 [INFO][5479] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.5/32] ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.243 [INFO][5479] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic5e2eb4190f ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" 
Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.255 [INFO][5479] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.257 [INFO][5479] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0", GenerateName:"calico-kube-controllers-75f8859b4-", Namespace:"calico-system", SelfLink:"", UID:"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f8859b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392", Pod:"calico-kube-controllers-75f8859b4-fmpls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5e2eb4190f", MAC:"4e:10:67:14:b6:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:16.298956 containerd[2034]: 2024-12-13 01:55:16.292 [INFO][5479] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392" Namespace="calico-system" Pod="calico-kube-controllers-75f8859b4-fmpls" WorkloadEndpoint="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:16.341700 systemd-networkd[1946]: calif7be845a663: Gained IPv6LL Dec 13 01:55:16.366661 containerd[2034]: time="2024-12-13T01:55:16.365697039Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:16.366661 containerd[2034]: time="2024-12-13T01:55:16.365803791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:16.366661 containerd[2034]: time="2024-12-13T01:55:16.365841759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.367463 containerd[2034]: time="2024-12-13T01:55:16.366059523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:16.428933 systemd[1]: Started cri-containerd-abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392.scope - libcontainer container abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392. Dec 13 01:55:16.496398 containerd[2034]: time="2024-12-13T01:55:16.496284208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75f8859b4-fmpls,Uid:b5d5aafc-d67a-4e3e-a8b1-c9d750914db8,Namespace:calico-system,Attempt:1,} returns sandbox id \"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392\"" Dec 13 01:55:16.658116 containerd[2034]: time="2024-12-13T01:55:16.658024925Z" level=info msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.743 [INFO][5568] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.745 [INFO][5568] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" iface="eth0" netns="/var/run/netns/cni-dc6a5798-7a7c-1888-9f56-cc09a984a3a7" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.746 [INFO][5568] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" iface="eth0" netns="/var/run/netns/cni-dc6a5798-7a7c-1888-9f56-cc09a984a3a7" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.746 [INFO][5568] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" iface="eth0" netns="/var/run/netns/cni-dc6a5798-7a7c-1888-9f56-cc09a984a3a7" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.746 [INFO][5568] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.746 [INFO][5568] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.785 [INFO][5574] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.785 [INFO][5574] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.785 [INFO][5574] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.798 [WARNING][5574] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.798 [INFO][5574] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.803 [INFO][5574] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:16.812606 containerd[2034]: 2024-12-13 01:55:16.805 [INFO][5568] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:16.812606 containerd[2034]: time="2024-12-13T01:55:16.810005057Z" level=info msg="TearDown network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" successfully" Dec 13 01:55:16.812606 containerd[2034]: time="2024-12-13T01:55:16.810046421Z" level=info msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" returns successfully" Dec 13 01:55:16.814274 containerd[2034]: time="2024-12-13T01:55:16.814192313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2z4r,Uid:9f62b98e-3864-4ed3-b68c-e8d11f28b312,Namespace:calico-system,Attempt:1,}" Dec 13 01:55:16.819466 systemd[1]: run-netns-cni\x2ddc6a5798\x2d7a7c\x2d1888\x2d9f56\x2dcc09a984a3a7.mount: Deactivated successfully. Dec 13 01:55:17.064921 systemd-networkd[1946]: cali0f0ee58fc57: Link UP Dec 13 01:55:17.067986 systemd-networkd[1946]: cali0f0ee58fc57: Gained carrier Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.920 [INFO][5580] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0 csi-node-driver- calico-system 9f62b98e-3864-4ed3-b68c-e8d11f28b312 901 0 2024-12-13 01:54:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-19-153 csi-node-driver-d2z4r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0f0ee58fc57 [] []}} ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.921 [INFO][5580] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.973 [INFO][5592] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" HandleID="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" 
Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.997 [INFO][5592] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" HandleID="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8f00), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-153", "pod":"csi-node-driver-d2z4r", "timestamp":"2024-12-13 01:55:16.97384269 +0000 UTC"}, Hostname:"ip-172-31-19-153", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.997 [INFO][5592] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.997 [INFO][5592] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:16.997 [INFO][5592] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-153' Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.002 [INFO][5592] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.011 [INFO][5592] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.019 [INFO][5592] ipam/ipam.go 489: Trying affinity for 192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.023 [INFO][5592] ipam/ipam.go 155: Attempting to load block cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.027 [INFO][5592] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.49.0/26 host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.027 [INFO][5592] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.49.0/26 handle="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.030 [INFO][5592] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6 Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.038 [INFO][5592] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.49.0/26 handle="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.051 [INFO][5592] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.49.6/26] block=192.168.49.0/26 handle="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" host="ip-172-31-19-153" Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.051 [INFO][5592] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.49.6/26] handle="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" host="ip-172-31-19-153" Dec 13 
01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.051 [INFO][5592] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:17.116114 containerd[2034]: 2024-12-13 01:55:17.051 [INFO][5592] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.49.6/26] IPv6=[] ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" HandleID="k8s-pod-network.f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.054 [INFO][5580] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f62b98e-3864-4ed3-b68c-e8d11f28b312", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"", Pod:"csi-node-driver-d2z4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0ee58fc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.055 [INFO][5580] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.49.6/32] ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.055 [INFO][5580] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f0ee58fc57 ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.066 [INFO][5580] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.066 [INFO][5580] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f62b98e-3864-4ed3-b68c-e8d11f28b312", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6", Pod:"csi-node-driver-d2z4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0ee58fc57", MAC:"96:ae:1e:f3:e7:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:17.118312 containerd[2034]: 2024-12-13 01:55:17.101 [INFO][5580] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6" Namespace="calico-system" Pod="csi-node-driver-d2z4r" WorkloadEndpoint="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:17.186460 containerd[2034]: time="2024-12-13T01:55:17.185081499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:55:17.186460 containerd[2034]: time="2024-12-13T01:55:17.185193723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:55:17.186460 containerd[2034]: time="2024-12-13T01:55:17.185219583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:17.186460 containerd[2034]: time="2024-12-13T01:55:17.185366859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:55:17.290000 systemd[1]: Started cri-containerd-f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6.scope - libcontainer container f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6. 
Dec 13 01:55:17.389150 containerd[2034]: time="2024-12-13T01:55:17.389096572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d2z4r,Uid:9f62b98e-3864-4ed3-b68c-e8d11f28b312,Namespace:calico-system,Attempt:1,} returns sandbox id \"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6\"" Dec 13 01:55:18.132917 systemd-networkd[1946]: calic5e2eb4190f: Gained IPv6LL Dec 13 01:55:18.426333 containerd[2034]: time="2024-12-13T01:55:18.426166601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.431933 containerd[2034]: time="2024-12-13T01:55:18.431780957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:55:18.436394 containerd[2034]: time="2024-12-13T01:55:18.434724161Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.442317 containerd[2034]: time="2024-12-13T01:55:18.442259297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.444338 containerd[2034]: time="2024-12-13T01:55:18.443862101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 4.005075511s" Dec 13 01:55:18.444338 containerd[2034]: time="2024-12-13T01:55:18.443960441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:18.448047 containerd[2034]: time="2024-12-13T01:55:18.447833274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:55:18.453851 containerd[2034]: time="2024-12-13T01:55:18.453776622Z" level=info msg="CreateContainer within sandbox \"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:18.489227 containerd[2034]: time="2024-12-13T01:55:18.489131226Z" level=info msg="CreateContainer within sandbox \"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5f00f91f1fc2f0500b9fa2d163a549073596c307bca12b59cc83bd2e19ca5df8\"" Dec 13 01:55:18.492061 containerd[2034]: time="2024-12-13T01:55:18.491980290Z" level=info msg="StartContainer for \"5f00f91f1fc2f0500b9fa2d163a549073596c307bca12b59cc83bd2e19ca5df8\"" Dec 13 01:55:18.582946 systemd[1]: Started cri-containerd-5f00f91f1fc2f0500b9fa2d163a549073596c307bca12b59cc83bd2e19ca5df8.scope - libcontainer container 5f00f91f1fc2f0500b9fa2d163a549073596c307bca12b59cc83bd2e19ca5df8. Dec 13 01:55:18.590106 systemd[1]: Started sshd@9-172.31.19.153:22-139.178.68.195:34194.service - OpenSSH per-connection server daemon (139.178.68.195:34194). 
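The apiserver pull above logs three different names for one image: the mutable repo tag (apiserver:v3.29.1), the immutable repo digest (apiserver@sha256:b8c43e...), and the local image id (sha256:5451b3...). A small splitter for the two reference forms, written against the exact strings in this log:

```go
package main

import (
	"fmt"
	"strings"
)

// splitRef separates an image reference into repository, tag and
// digest. Only one of tag/digest is set per form, mirroring the
// "repo tag" vs "repo digest" fields in the pull log above.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		return ref[:i], "", ref[i+1:] // pinned-by-digest form
	}
	// a ":" only counts as a tag separator after the last "/",
	// so registry ports ("ghcr.io:443/...") are not mistaken for tags
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		return ref[:i], ref[i+1:], ""
	}
	return ref, "", ""
}

func main() {
	for _, ref := range []string{
		"ghcr.io/flatcar/calico/apiserver:v3.29.1",
		"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486",
	} {
		repo, tag, digest := splitRef(ref)
		fmt.Printf("repo=%s tag=%q digest=%q\n", repo, tag, digest)
	}
}
```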
Dec 13 01:55:18.752919 containerd[2034]: time="2024-12-13T01:55:18.752738647Z" level=info msg="StartContainer for \"5f00f91f1fc2f0500b9fa2d163a549073596c307bca12b59cc83bd2e19ca5df8\" returns successfully" Dec 13 01:55:18.835774 sshd[5680]: Accepted publickey for core from 139.178.68.195 port 34194 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:18.841186 sshd[5680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:18.856050 systemd-logind[2005]: New session 10 of user core. Dec 13 01:55:18.864791 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:55:18.895760 containerd[2034]: time="2024-12-13T01:55:18.895687796Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:18.897495 containerd[2034]: time="2024-12-13T01:55:18.897448244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:55:18.906310 containerd[2034]: time="2024-12-13T01:55:18.906222596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 458.317358ms" Dec 13 01:55:18.906310 containerd[2034]: time="2024-12-13T01:55:18.906288668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:55:18.909096 containerd[2034]: time="2024-12-13T01:55:18.908202548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:55:18.913078 containerd[2034]: time="2024-12-13T01:55:18.912998300Z" level=info msg="CreateContainer within sandbox \"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:55:18.944109 containerd[2034]: time="2024-12-13T01:55:18.943897928Z" level=info msg="CreateContainer within sandbox \"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd7ef91bfb9879ecf9d2c32b0712a4a3eb47ab564172661acf158fa979c74381\"" Dec 13 01:55:18.947457 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3915768435.mount: Deactivated successfully. Dec 13 01:55:18.959161 containerd[2034]: time="2024-12-13T01:55:18.958242344Z" level=info msg="StartContainer for \"bd7ef91bfb9879ecf9d2c32b0712a4a3eb47ab564172661acf158fa979c74381\"" Dec 13 01:55:19.029080 systemd-networkd[1946]: cali0f0ee58fc57: Gained IPv6LL Dec 13 01:55:19.057938 systemd[1]: Started cri-containerd-bd7ef91bfb9879ecf9d2c32b0712a4a3eb47ab564172661acf158fa979c74381.scope - libcontainer container bd7ef91bfb9879ecf9d2c32b0712a4a3eb47ab564172661acf158fa979c74381. Dec 13 01:55:19.297005 sshd[5680]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:19.312312 systemd[1]: sshd@9-172.31.19.153:22-139.178.68.195:34194.service: Deactivated successfully. Dec 13 01:55:19.327251 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:55:19.332940 systemd-logind[2005]: Session 10 logged out. Waiting for processes to exit. 
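Note the second pull of the same apiserver tag above: 77 bytes read, done in ~458ms, where the first pull read ~39 MB in ~4s. containerd's store is content-addressed, so once the blobs behind the sha256 digest are present, a repeat pull reduces to a manifest check. A toy digest-keyed store showing the idea (illustration only, not containerd's API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// store is a toy content-addressed blob store keyed by sha256, the
// property that makes the second "pull" above a 77-byte no-op.
type store map[[sha256.Size]byte][]byte

func (s store) pull(digest [sha256.Size]byte, fetch func() []byte) []byte {
	if blob, ok := s[digest]; ok {
		fmt.Println("already in store: manifest check only, no download")
		return blob
	}
	blob := fetch()
	if sha256.Sum256(blob) != digest { // verify before trusting
		panic("digest mismatch")
	}
	s[digest] = blob
	return blob
}

func main() {
	layer := []byte("pretend image content")
	d := sha256.Sum256(layer)
	s := store{}
	s.pull(d, func() []byte { fmt.Println("downloading ~39 MB"); return layer })
	s.pull(d, func() []byte { fmt.Println("downloading ~39 MB"); return layer }) // hit
}
```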
Dec 13 01:55:19.339644 systemd-logind[2005]: Removed session 10. Dec 13 01:55:19.351070 containerd[2034]: time="2024-12-13T01:55:19.349493250Z" level=info msg="StartContainer for \"bd7ef91bfb9879ecf9d2c32b0712a4a3eb47ab564172661acf158fa979c74381\" returns successfully" Dec 13 01:55:20.187767 kubelet[3281]: I1213 01:55:20.187434 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:20.215741 kubelet[3281]: I1213 01:55:20.215310 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-svpxb" podStartSLOduration=29.207074583 podStartE2EDuration="33.215211834s" podCreationTimestamp="2024-12-13 01:54:47 +0000 UTC" firstStartedPulling="2024-12-13 01:55:14.436538354 +0000 UTC m=+47.001127735" lastFinishedPulling="2024-12-13 01:55:18.444675521 +0000 UTC m=+51.009264986" observedRunningTime="2024-12-13 01:55:19.227055077 +0000 UTC m=+51.791644494" watchObservedRunningTime="2024-12-13 01:55:20.215211834 +0000 UTC m=+52.779801227" Dec 13 01:55:20.215969 kubelet[3281]: I1213 01:55:20.215923 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5bbbb5bd5-jlszc" podStartSLOduration=28.794948104 podStartE2EDuration="33.215871354s" podCreationTimestamp="2024-12-13 01:54:47 +0000 UTC" firstStartedPulling="2024-12-13 01:55:14.485787266 +0000 UTC m=+47.050376659" lastFinishedPulling="2024-12-13 01:55:18.906710504 +0000 UTC m=+51.471299909" observedRunningTime="2024-12-13 01:55:20.21571137 +0000 UTC m=+52.780300787" watchObservedRunningTime="2024-12-13 01:55:20.215871354 +0000 UTC m=+52.780460759" Dec 13 01:55:21.070615 ntpd[2000]: Listen normally on 7 vxlan.calico 192.168.49.0:123 Dec 13 01:55:21.071728 ntpd[2000]: Listen normally on 8 vxlan.calico [fe80::6475:fbff:fe82:3b7a%4]:123 Dec 13 01:55:21.071811 ntpd[2000]: Listen normally on 9 cali16ab40fb99b [fe80::ecee:eeff:feee:eeee%7]:123 Dec 13 01:55:21.071880 ntpd[2000]: Listen normally on 10 cali2785bdfd1c5 [fe80::ecee:eeff:feee:eeee%8]:123 Dec 13 01:55:21.072600 ntpd[2000]: Listen normally on 11 cali7f93bf97d1c [fe80::ecee:eeff:feee:eeee%9]:123 Dec 13 01:55:21.072695 ntpd[2000]: Listen normally on 12 calif7be845a663 [fe80::ecee:eeff:feee:eeee%10]:123 Dec 13 01:55:21.072763 ntpd[2000]: Listen normally on 13 calic5e2eb4190f [fe80::ecee:eeff:feee:eeee%11]:123 Dec 13 01:55:21.072840 ntpd[2000]: Listen normally on 14 cali0f0ee58fc57 [fe80::ecee:eeff:feee:eeee%12]:123
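The pod_startup_latency_tracker lines above carry all the inputs for the two durations they report: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window from that. Reproducing the arithmetic for the svpxb pod with values copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // Go's default time.String() form

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// timestamps copied from the svpxb line above
	created := mustParse("2024-12-13 01:54:47 +0000 UTC")
	firstPull := mustParse("2024-12-13 01:55:14.436538354 +0000 UTC")
	lastPull := mustParse("2024-12-13 01:55:18.444675521 +0000 UTC")
	watched := mustParse("2024-12-13 01:55:20.215211834 +0000 UTC")

	e2e := watched.Sub(created)     // 33.215211834s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // ~4.008s spent pulling images
	slo := e2e - pull               // ~29.207s; kubelet subtracts the monotonic
	//                                 readings (m=+...), so the logged
	//                                 29.207074583 differs by nanoseconds
	fmt.Println(e2e, pull, slo)
}
```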
Dec 13 01:55:23.143616 containerd[2034]: time="2024-12-13T01:55:23.142074693Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.145629 containerd[2034]: time="2024-12-13T01:55:23.145541781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:55:23.150015 containerd[2034]: time="2024-12-13T01:55:23.148141401Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.156198 containerd[2034]: time="2024-12-13T01:55:23.156132717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:23.158276 containerd[2034]: time="2024-12-13T01:55:23.157394349Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.249126449s" Dec 13 01:55:23.159326 containerd[2034]: time="2024-12-13T01:55:23.159271737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:55:23.163818 containerd[2034]: time="2024-12-13T01:55:23.163148181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:55:23.223988 containerd[2034]: time="2024-12-13T01:55:23.223704837Z" level=info msg="CreateContainer within sandbox \"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:55:23.319532 containerd[2034]: time="2024-12-13T01:55:23.319349926Z" level=info msg="CreateContainer within sandbox \"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e\"" Dec 13 01:55:23.321555 containerd[2034]: time="2024-12-13T01:55:23.320602270Z" level=info msg="StartContainer for \"218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e\"" Dec 13 01:55:23.446649 systemd[1]: Started cri-containerd-218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e.scope - libcontainer container 218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e. 
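The kube-controllers pull that was kicked off at 01:55:18.908 (the PullImage entry further up) completes here reporting "in 4.249126449s". The two bracketing log timestamps allow a quick consistency check of the reported figure:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// the PullImage and Pulled timestamps copied from this log
	start, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:55:18.908202548Z")
	end, _ := time.Parse(time.RFC3339Nano, "2024-12-13T01:55:23.157394349Z")
	reported, _ := time.ParseDuration("4.249126449s")

	elapsed := end.Sub(start)
	fmt.Println("bracketed:", elapsed)          // 4.249191801s
	fmt.Println("reported: ", reported)         // 4.249126449s
	fmt.Println("skew:     ", elapsed-reported) // ~65µs of logging overhead
}
```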
Dec 13 01:55:23.645116 containerd[2034]: time="2024-12-13T01:55:23.644666279Z" level=info msg="StartContainer for \"218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e\" returns successfully" Dec 13 01:55:24.262387 kubelet[3281]: I1213 01:55:24.262313 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75f8859b4-fmpls" podStartSLOduration=28.601515497 podStartE2EDuration="35.262237666s" podCreationTimestamp="2024-12-13 01:54:49 +0000 UTC" firstStartedPulling="2024-12-13 01:55:16.499249144 +0000 UTC m=+49.063838537" lastFinishedPulling="2024-12-13 01:55:23.159971313 +0000 UTC m=+55.724560706" observedRunningTime="2024-12-13 01:55:24.260704198 +0000 UTC m=+56.825293627" watchObservedRunningTime="2024-12-13 01:55:24.262237666 +0000 UTC m=+56.826827059" Dec 13 01:55:24.336807 systemd[1]: Started sshd@10-172.31.19.153:22-139.178.68.195:34204.service - OpenSSH per-connection server daemon (139.178.68.195:34204). Dec 13 01:55:24.525496 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 34204 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:24.528702 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:24.537981 systemd-logind[2005]: New session 11 of user core. Dec 13 01:55:24.543906 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:55:24.834509 sshd[5815]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:24.845179 systemd[1]: sshd@10-172.31.19.153:22-139.178.68.195:34204.service: Deactivated successfully. Dec 13 01:55:24.852448 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 01:55:24.857731 systemd-logind[2005]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:55:24.880599 systemd[1]: Started sshd@11-172.31.19.153:22-139.178.68.195:34220.service - OpenSSH per-connection server daemon (139.178.68.195:34220). Dec 13 01:55:24.883932 systemd-logind[2005]: Removed session 11. 
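sshd and pam_unix bracket every session cleanly (session 11 above opens at 01:55:24.528702 and closes at 01:55:24.834509). A small parser that pairs those brackets by PID and reports session length, run against two lines copied from this log:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// matches the timestamp, sshd PID and open/close verb of a pam_unix line
var re = regexp.MustCompile(`^Dec 13 (\S+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	lines := []string{ // session 11, copied from this log
		"Dec 13 01:55:24.528702 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
		"Dec 13 01:55:24.834509 sshd[5815]: pam_unix(sshd:session): session closed for user core",
	}
	open := map[string]time.Time{}
	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		t, err := time.Parse("15:04:05.000000", m[1])
		if err != nil {
			panic(err)
		}
		switch m[3] {
		case "opened":
			open[m[2]] = t
		case "closed":
			fmt.Printf("sshd[%s]: session lasted %v\n", m[2], t.Sub(open[m[2]]))
		}
	}
}
```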
Dec 13 01:55:24.951535 containerd[2034]: time="2024-12-13T01:55:24.950879138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.953258 containerd[2034]: time="2024-12-13T01:55:24.953200574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:55:24.954381 containerd[2034]: time="2024-12-13T01:55:24.954300314Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.959612 containerd[2034]: time="2024-12-13T01:55:24.959220650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:24.963815 containerd[2034]: time="2024-12-13T01:55:24.963525398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.800316065s" Dec 13 01:55:24.963815 containerd[2034]: time="2024-12-13T01:55:24.963609758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:55:24.970170 containerd[2034]: time="2024-12-13T01:55:24.970089410Z" level=info msg="CreateContainer within sandbox \"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:55:25.000800 containerd[2034]: time="2024-12-13T01:55:25.000612862Z" level=info msg="CreateContainer within sandbox \"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02dee25aab94368237ccc2b8d755837f6943a0c1e7b61e539db0c95497f9f3b1\"" Dec 13 01:55:25.001600 containerd[2034]: time="2024-12-13T01:55:25.001521130Z" level=info msg="StartContainer for \"02dee25aab94368237ccc2b8d755837f6943a0c1e7b61e539db0c95497f9f3b1\"" Dec 13 01:55:25.071933 systemd[1]: Started cri-containerd-02dee25aab94368237ccc2b8d755837f6943a0c1e7b61e539db0c95497f9f3b1.scope - libcontainer container 02dee25aab94368237ccc2b8d755837f6943a0c1e7b61e539db0c95497f9f3b1. Dec 13 01:55:25.074112 sshd[5834]: Accepted publickey for core from 139.178.68.195 port 34220 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:25.079120 sshd[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:25.090151 systemd-logind[2005]: New session 12 of user core. Dec 13 01:55:25.099878 systemd[1]: Started session-12.scope - Session 12 of User core. 
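calico-csi above is created "within sandbox f032e2...", the same sandbox the csi-node-driver-registrar container joins a little further down: one pause sandbox owns the pod's network identity (the 192.168.49.6 endpoint set up earlier) and every container of the pod is created inside it. A toy model of that relationship:

```go
package main

import "fmt"

// sandbox stands in for the pause sandbox that holds a pod's network
// namespace; containers are created "within" it and share its IP.
type sandbox struct {
	id, ip     string
	containers []string
}

func (s *sandbox) createContainer(name string) {
	s.containers = append(s.containers, name)
	fmt.Printf("container %q joins netns of sandbox %s (IP %s)\n", name, s.id[:12], s.ip)
}

func main() {
	sb := &sandbox{id: "f032e2350667c06bc14b8806401b3855", ip: "192.168.49.6"}
	sb.createContainer("calico-csi")
	sb.createContainer("csi-node-driver-registrar")
}
```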
Dec 13 01:55:25.150552 containerd[2034]: time="2024-12-13T01:55:25.150475763Z" level=info msg="StartContainer for \"02dee25aab94368237ccc2b8d755837f6943a0c1e7b61e539db0c95497f9f3b1\" returns successfully" Dec 13 01:55:25.154988 containerd[2034]: time="2024-12-13T01:55:25.154915307Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:55:25.303107 systemd[1]: run-containerd-runc-k8s.io-218ae44da800df15f348e5af873f7b1fd0fd7859dc353d1ebd4112ba4220898e-runc.crtbwE.mount: Deactivated successfully. Dec 13 01:55:25.529648 sshd[5834]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:25.540410 systemd[1]: sshd@11-172.31.19.153:22-139.178.68.195:34220.service: Deactivated successfully. Dec 13 01:55:25.550210 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:55:25.560200 systemd-logind[2005]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:55:25.591517 systemd[1]: Started sshd@12-172.31.19.153:22-139.178.68.195:34222.service - OpenSSH per-connection server daemon (139.178.68.195:34222). Dec 13 01:55:25.595763 systemd-logind[2005]: Removed session 12. Dec 13 01:55:25.785402 sshd[5900]: Accepted publickey for core from 139.178.68.195 port 34222 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:25.788915 sshd[5900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:25.803046 systemd-logind[2005]: New session 13 of user core. Dec 13 01:55:25.812980 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:55:26.127242 sshd[5900]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:26.137247 systemd[1]: sshd@12-172.31.19.153:22-139.178.68.195:34222.service: Deactivated successfully. Dec 13 01:55:26.143789 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:55:26.148539 systemd-logind[2005]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:55:26.153190 systemd-logind[2005]: Removed session 13. 
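Mount units like run-containerd-runc-k8s.io-...-runc.crtbwE.mount (above) and var-lib-containerd-tmpmounts-containerd\x2dmount3915768435.mount (earlier) are systemd's escaped form of mount-point paths: "/" becomes "-" and a literal "-" becomes "\x2d". A minimal decoder for that convention (a sketch; systemd-escape(1) is the real tool):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's path escaping for .mount unit names:
// "-" maps back to "/" and "\xNN" back to the literal byte.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case name[i] == '-':
			b.WriteByte('/')
		case name[i] == '\\' && i+3 < len(name) && name[i+1] == 'x':
			v, _ := strconv.ParseUint(name[i+2:i+4], 16, 8)
			b.WriteByte(byte(v))
			i += 3
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	fmt.Println(unescapeUnit(`var-lib-containerd-tmpmounts-containerd\x2dmount3915768435.mount`))
	// /var/lib/containerd/tmpmounts/containerd-mount3915768435
}
```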
Dec 13 01:55:26.558641 containerd[2034]: time="2024-12-13T01:55:26.558527858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.560770 containerd[2034]: time="2024-12-13T01:55:26.560641694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:55:26.562337 containerd[2034]: time="2024-12-13T01:55:26.562203230Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.566768 containerd[2034]: time="2024-12-13T01:55:26.566674814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:55:26.568626 containerd[2034]: time="2024-12-13T01:55:26.568527254Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.413327667s" Dec 13 01:55:26.568931 containerd[2034]: time="2024-12-13T01:55:26.568763870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:55:26.575373 containerd[2034]: time="2024-12-13T01:55:26.575288174Z" level=info msg="CreateContainer within sandbox \"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:55:26.604055 containerd[2034]: time="2024-12-13T01:55:26.603915362Z" level=info msg="CreateContainer within sandbox \"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d47c992ff03173b7ec0cb5625fb4f78c0a6ab4823806e61786695a1c3e6389d4\"" Dec 13 01:55:26.605272 containerd[2034]: time="2024-12-13T01:55:26.605187926Z" level=info msg="StartContainer for \"d47c992ff03173b7ec0cb5625fb4f78c0a6ab4823806e61786695a1c3e6389d4\"" Dec 13 01:55:26.670920 systemd[1]: Started cri-containerd-d47c992ff03173b7ec0cb5625fb4f78c0a6ab4823806e61786695a1c3e6389d4.scope - libcontainer container d47c992ff03173b7ec0cb5625fb4f78c0a6ab4823806e61786695a1c3e6389d4. 
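With the node-driver-registrar pull above, all five image pulls in this stretch of the log have reported their wall time. The "in <duration>" strings parse directly with Go's time.ParseDuration, so totalling them takes one loop; the figures below are copied from this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// the five "Pulled image ... in <d>" figures from this log
	pulls := []struct {
		image string
		took  string
	}{
		{"calico/apiserver (first pull)", "4.005075511s"},
		{"calico/apiserver (already in store)", "458.317358ms"},
		{"calico/kube-controllers", "4.249126449s"},
		{"calico/csi", "1.800316065s"},
		{"calico/node-driver-registrar", "1.413327667s"},
	}
	var total time.Duration
	for _, p := range pulls {
		d, err := time.ParseDuration(p.took)
		if err != nil {
			panic(err)
		}
		total += d
		fmt.Printf("%-36s %12v\n", p.image, d)
	}
	fmt.Printf("%-36s %12v\n", "total", total) // ~11.926s
}
```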
Dec 13 01:55:26.771024 containerd[2034]: time="2024-12-13T01:55:26.770951775Z" level=info msg="StartContainer for \"d47c992ff03173b7ec0cb5625fb4f78c0a6ab4823806e61786695a1c3e6389d4\" returns successfully" Dec 13 01:55:26.908918 kubelet[3281]: I1213 01:55:26.908746 3281 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:55:26.908918 kubelet[3281]: I1213 01:55:26.908817 3281 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:55:27.282167 kubelet[3281]: I1213 01:55:27.282083 3281 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-d2z4r" podStartSLOduration=30.104173551 podStartE2EDuration="39.282021601s" podCreationTimestamp="2024-12-13 01:54:48 +0000 UTC" firstStartedPulling="2024-12-13 01:55:17.391717576 +0000 UTC m=+49.956306969" lastFinishedPulling="2024-12-13 01:55:26.569565626 +0000 UTC m=+59.134155019" observedRunningTime="2024-12-13 01:55:27.281460073 +0000 UTC m=+59.846049526" watchObservedRunningTime="2024-12-13 01:55:27.282021601 +0000 UTC m=+59.846610994" Dec 13 01:55:27.708295 containerd[2034]: time="2024-12-13T01:55:27.708106528Z" level=info msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.770 [WARNING][5991] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fb4856f-34b3-468a-a336-454740015a6b", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254", Pod:"coredns-76f75df574-fql84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7be845a663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:27.832408 
containerd[2034]: 2024-12-13 01:55:27.771 [INFO][5991] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.771 [INFO][5991] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" iface="eth0" netns="" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.771 [INFO][5991] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.771 [INFO][5991] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.812 [INFO][5998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.812 [INFO][5998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.812 [INFO][5998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.824 [WARNING][5998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.824 [INFO][5998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.827 [INFO][5998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:27.832408 containerd[2034]: 2024-12-13 01:55:27.829 [INFO][5991] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.833522 containerd[2034]: time="2024-12-13T01:55:27.833309404Z" level=info msg="TearDown network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" successfully" Dec 13 01:55:27.833522 containerd[2034]: time="2024-12-13T01:55:27.833377972Z" level=info msg="StopPodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" returns successfully" Dec 13 01:55:27.834778 containerd[2034]: time="2024-12-13T01:55:27.834240568Z" level=info msg="RemovePodSandbox for \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" Dec 13 01:55:27.834778 containerd[2034]: time="2024-12-13T01:55:27.834308752Z" level=info msg="Forcibly stopping sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\"" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.904 [WARNING][6016] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"8fb4856f-34b3-468a-a336-454740015a6b", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e555af876bcd3e5e6ce96c89101e963e75e8fd7e7e0ab9fbda5d443bb167254", Pod:"coredns-76f75df574-fql84", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7be845a663", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.905 [INFO][6016] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.905 [INFO][6016] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" iface="eth0" netns="" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.905 [INFO][6016] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.905 [INFO][6016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.949 [INFO][6022] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.949 [INFO][6022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.949 [INFO][6022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.962 [WARNING][6022] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.962 [INFO][6022] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" HandleID="k8s-pod-network.1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--fql84-eth0" Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.966 [INFO][6022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:27.970671 containerd[2034]: 2024-12-13 01:55:27.968 [INFO][6016] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8" Dec 13 01:55:27.971972 containerd[2034]: time="2024-12-13T01:55:27.970998197Z" level=info msg="TearDown network for sandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" successfully" Dec 13 01:55:27.977062 containerd[2034]: time="2024-12-13T01:55:27.977000825Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:27.977232 containerd[2034]: time="2024-12-13T01:55:27.977109809Z" level=info msg="RemovePodSandbox \"1da734b48928982f424232a7e6f7940321d3c685afbfcf4c39f117a49b2804a8\" returns successfully" Dec 13 01:55:27.978167 containerd[2034]: time="2024-12-13T01:55:27.978110513Z" level=info msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.049 [WARNING][6041] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"439df88a-c40e-4828-a6b8-8bfb2c3a7727", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079", Pod:"calico-apiserver-5bbbb5bd5-jlszc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2785bdfd1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.050 [INFO][6041] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.050 [INFO][6041] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" iface="eth0" netns="" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.050 [INFO][6041] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.050 [INFO][6041] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.090 [INFO][6048] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.090 [INFO][6048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.090 [INFO][6048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.106 [WARNING][6048] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.106 [INFO][6048] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.109 [INFO][6048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.115075 containerd[2034]: 2024-12-13 01:55:28.112 [INFO][6041] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.116986 containerd[2034]: time="2024-12-13T01:55:28.115660874Z" level=info msg="TearDown network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" successfully" Dec 13 01:55:28.116986 containerd[2034]: time="2024-12-13T01:55:28.115724438Z" level=info msg="StopPodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" returns successfully" Dec 13 01:55:28.118014 containerd[2034]: time="2024-12-13T01:55:28.117744734Z" level=info msg="RemovePodSandbox for \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" Dec 13 01:55:28.118014 containerd[2034]: time="2024-12-13T01:55:28.117803846Z" level=info msg="Forcibly stopping sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\"" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.182 [WARNING][6067] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"439df88a-c40e-4828-a6b8-8bfb2c3a7727", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"92074f4174a5904a64f0c1d802fa66706f47daaad84c604cac325b12bcf9a079", Pod:"calico-apiserver-5bbbb5bd5-jlszc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2785bdfd1c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.182 [INFO][6067] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.182 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" iface="eth0" netns="" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.182 [INFO][6067] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.182 [INFO][6067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.219 [INFO][6073] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.220 [INFO][6073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.220 [INFO][6073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.234 [WARNING][6073] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.234 [INFO][6073] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" HandleID="k8s-pod-network.1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--jlszc-eth0" Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.236 [INFO][6073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.242083 containerd[2034]: 2024-12-13 01:55:28.239 [INFO][6067] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad" Dec 13 01:55:28.242083 containerd[2034]: time="2024-12-13T01:55:28.241658822Z" level=info msg="TearDown network for sandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" successfully" Dec 13 01:55:28.250372 containerd[2034]: time="2024-12-13T01:55:28.250076246Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:28.250372 containerd[2034]: time="2024-12-13T01:55:28.250173422Z" level=info msg="RemovePodSandbox \"1aee81fdefe7c03de91ae28c9535a8875c14881fbe75ed1fedd33022b83cfcad\" returns successfully" Dec 13 01:55:28.251590 containerd[2034]: time="2024-12-13T01:55:28.251522066Z" level=info msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.402 [WARNING][6093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"49193545-8953-4b0d-8299-dd1e3ecf467d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d", Pod:"coredns-76f75df574-bthsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f93bf97d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.403 [INFO][6093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.403 [INFO][6093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" iface="eth0" netns="" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.403 [INFO][6093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.403 [INFO][6093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.450 [INFO][6104] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.450 [INFO][6104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.451 [INFO][6104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.462 [WARNING][6104] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.462 [INFO][6104] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.467 [INFO][6104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.471659 containerd[2034]: 2024-12-13 01:55:28.469 [INFO][6093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.472996 containerd[2034]: time="2024-12-13T01:55:28.472335435Z" level=info msg="TearDown network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" successfully" Dec 13 01:55:28.472996 containerd[2034]: time="2024-12-13T01:55:28.472395363Z" level=info msg="StopPodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" returns successfully" Dec 13 01:55:28.473366 containerd[2034]: time="2024-12-13T01:55:28.473104071Z" level=info msg="RemovePodSandbox for \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" Dec 13 01:55:28.473457 containerd[2034]: time="2024-12-13T01:55:28.473421663Z" level=info msg="Forcibly stopping sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\"" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.546 [WARNING][6123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"49193545-8953-4b0d-8299-dd1e3ecf467d", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"8e0e49ec8cb0ad3aa7fd627a3c05af5553a8c449754c33249f7f6c51439d864d", Pod:"coredns-76f75df574-bthsx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.49.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7f93bf97d1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.546 [INFO][6123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.546 [INFO][6123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" iface="eth0" netns="" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.546 [INFO][6123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.547 [INFO][6123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.597 [INFO][6130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.597 [INFO][6130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.598 [INFO][6130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.615 [WARNING][6130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.615 [INFO][6130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" HandleID="k8s-pod-network.ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Workload="ip--172--31--19--153-k8s-coredns--76f75df574--bthsx-eth0" Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.619 [INFO][6130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.625790 containerd[2034]: 2024-12-13 01:55:28.622 [INFO][6123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6" Dec 13 01:55:28.625790 containerd[2034]: time="2024-12-13T01:55:28.625627276Z" level=info msg="TearDown network for sandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" successfully" Dec 13 01:55:28.633818 containerd[2034]: time="2024-12-13T01:55:28.633461800Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:28.633818 containerd[2034]: time="2024-12-13T01:55:28.633604528Z" level=info msg="RemovePodSandbox \"ff68ee48e08dac998a786bdb0b040c4bf36f5d246acc7dd6cb893393791c6dc6\" returns successfully" Dec 13 01:55:28.635244 containerd[2034]: time="2024-12-13T01:55:28.634778860Z" level=info msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.727 [WARNING][6149] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0", GenerateName:"calico-kube-controllers-75f8859b4-", Namespace:"calico-system", SelfLink:"", UID:"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f8859b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392", Pod:"calico-kube-controllers-75f8859b4-fmpls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5e2eb4190f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.727 [INFO][6149] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.727 [INFO][6149] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" iface="eth0" netns="" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.727 [INFO][6149] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.727 [INFO][6149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.772 [INFO][6156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.772 [INFO][6156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.773 [INFO][6156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.785 [WARNING][6156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.785 [INFO][6156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.788 [INFO][6156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.793730 containerd[2034]: 2024-12-13 01:55:28.791 [INFO][6149] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.796469 containerd[2034]: time="2024-12-13T01:55:28.793785629Z" level=info msg="TearDown network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" successfully" Dec 13 01:55:28.796469 containerd[2034]: time="2024-12-13T01:55:28.793824185Z" level=info msg="StopPodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" returns successfully" Dec 13 01:55:28.796469 containerd[2034]: time="2024-12-13T01:55:28.794943065Z" level=info msg="RemovePodSandbox for \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" Dec 13 01:55:28.796469 containerd[2034]: time="2024-12-13T01:55:28.794989169Z" level=info msg="Forcibly stopping sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\"" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.855 [WARNING][6175] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0", GenerateName:"calico-kube-controllers-75f8859b4-", Namespace:"calico-system", SelfLink:"", UID:"b5d5aafc-d67a-4e3e-a8b1-c9d750914db8", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75f8859b4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"abc3f0b331454ae4ae1f0e2f7bd4f1b76101143f88083b66e4ff8f5abbca8392", Pod:"calico-kube-controllers-75f8859b4-fmpls", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.49.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic5e2eb4190f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.855 [INFO][6175] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.856 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" iface="eth0" netns="" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.856 [INFO][6175] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.856 [INFO][6175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.892 [INFO][6181] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.892 [INFO][6181] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.893 [INFO][6181] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.908 [WARNING][6181] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.908 [INFO][6181] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" HandleID="k8s-pod-network.8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Workload="ip--172--31--19--153-k8s-calico--kube--controllers--75f8859b4--fmpls-eth0" Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.911 [INFO][6181] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:28.916859 containerd[2034]: 2024-12-13 01:55:28.913 [INFO][6175] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356" Dec 13 01:55:28.916859 containerd[2034]: time="2024-12-13T01:55:28.915779118Z" level=info msg="TearDown network for sandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" successfully" Dec 13 01:55:28.921793 containerd[2034]: time="2024-12-13T01:55:28.921715338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:28.921948 containerd[2034]: time="2024-12-13T01:55:28.921829230Z" level=info msg="RemovePodSandbox \"8b0df2ce0633f915a4cd6dbf3030e1c5346fc010a8e14705dbb02bafab454356\" returns successfully" Dec 13 01:55:28.922620 containerd[2034]: time="2024-12-13T01:55:28.922482666Z" level=info msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:28.988 [WARNING][6199] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f62b98e-3864-4ed3-b68c-e8d11f28b312", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6", Pod:"csi-node-driver-d2z4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0ee58fc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:28.988 [INFO][6199] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:28.988 [INFO][6199] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" iface="eth0" netns="" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:28.988 [INFO][6199] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:28.988 [INFO][6199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.024 [INFO][6206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.024 [INFO][6206] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.024 [INFO][6206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.039 [WARNING][6206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.039 [INFO][6206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.042 [INFO][6206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.046624 containerd[2034]: 2024-12-13 01:55:29.044 [INFO][6199] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.047427 containerd[2034]: time="2024-12-13T01:55:29.046638134Z" level=info msg="TearDown network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" successfully" Dec 13 01:55:29.047427 containerd[2034]: time="2024-12-13T01:55:29.046711574Z" level=info msg="StopPodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" returns successfully" Dec 13 01:55:29.047992 containerd[2034]: time="2024-12-13T01:55:29.047900402Z" level=info msg="RemovePodSandbox for \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" Dec 13 01:55:29.048097 containerd[2034]: time="2024-12-13T01:55:29.048024518Z" level=info msg="Forcibly stopping sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\"" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.115 [WARNING][6224] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9f62b98e-3864-4ed3-b68c-e8d11f28b312", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"f032e2350667c06bc14b8806401b38556be68e4fc625680d9d2bd8749b1e72d6", Pod:"csi-node-driver-d2z4r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.49.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0f0ee58fc57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.116 [INFO][6224] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.116 [INFO][6224] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" iface="eth0" netns="" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.116 [INFO][6224] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.116 [INFO][6224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.167 [INFO][6230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.168 [INFO][6230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.168 [INFO][6230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.183 [WARNING][6230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.183 [INFO][6230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" HandleID="k8s-pod-network.cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Workload="ip--172--31--19--153-k8s-csi--node--driver--d2z4r-eth0" Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.187 [INFO][6230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.192475 containerd[2034]: 2024-12-13 01:55:29.189 [INFO][6224] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0" Dec 13 01:55:29.192475 containerd[2034]: time="2024-12-13T01:55:29.192426363Z" level=info msg="TearDown network for sandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" successfully" Dec 13 01:55:29.200702 containerd[2034]: time="2024-12-13T01:55:29.200274639Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:29.200702 containerd[2034]: time="2024-12-13T01:55:29.200366943Z" level=info msg="RemovePodSandbox \"cfd2dffa49050ca12a0e9a07f15aa25b91d4955bd6d75b87974ce53a965a31a0\" returns successfully" Dec 13 01:55:29.201333 containerd[2034]: time="2024-12-13T01:55:29.201085443Z" level=info msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.270 [WARNING][6249] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"235db396-35e0-49e0-bcd3-929b0c0c50eb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4", Pod:"calico-apiserver-5bbbb5bd5-svpxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab40fb99b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.273 [INFO][6249] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.273 [INFO][6249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" iface="eth0" netns="" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.273 [INFO][6249] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.273 [INFO][6249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.329 [INFO][6255] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.329 [INFO][6255] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.329 [INFO][6255] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.342 [WARNING][6255] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.342 [INFO][6255] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.344 [INFO][6255] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.349971 containerd[2034]: 2024-12-13 01:55:29.347 [INFO][6249] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.351767 containerd[2034]: time="2024-12-13T01:55:29.350752588Z" level=info msg="TearDown network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" successfully" Dec 13 01:55:29.351767 containerd[2034]: time="2024-12-13T01:55:29.350815948Z" level=info msg="StopPodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" returns successfully" Dec 13 01:55:29.351767 containerd[2034]: time="2024-12-13T01:55:29.351518776Z" level=info msg="RemovePodSandbox for \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" Dec 13 01:55:29.351767 containerd[2034]: time="2024-12-13T01:55:29.351564076Z" level=info msg="Forcibly stopping sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\"" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.413 [WARNING][6273] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0", GenerateName:"calico-apiserver-5bbbb5bd5-", Namespace:"calico-apiserver", SelfLink:"", UID:"235db396-35e0-49e0-bcd3-929b0c0c50eb", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 54, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5bbbb5bd5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-153", ContainerID:"2a0d003fc8ae26e329dfac4fa7d653485adbbdbf624f58b9ec0f13c703e449f4", Pod:"calico-apiserver-5bbbb5bd5-svpxb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.49.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali16ab40fb99b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.414 [INFO][6273] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.414 [INFO][6273] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" iface="eth0" netns="" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.414 [INFO][6273] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.414 [INFO][6273] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.460 [INFO][6279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.460 [INFO][6279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.460 [INFO][6279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.472 [WARNING][6279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.472 [INFO][6279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" HandleID="k8s-pod-network.d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Workload="ip--172--31--19--153-k8s-calico--apiserver--5bbbb5bd5--svpxb-eth0" Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.475 [INFO][6279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:55:29.479930 containerd[2034]: 2024-12-13 01:55:29.477 [INFO][6273] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c" Dec 13 01:55:29.481328 containerd[2034]: time="2024-12-13T01:55:29.479975644Z" level=info msg="TearDown network for sandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" successfully" Dec 13 01:55:29.485425 containerd[2034]: time="2024-12-13T01:55:29.485353336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:55:29.485852 containerd[2034]: time="2024-12-13T01:55:29.485464060Z" level=info msg="RemovePodSandbox \"d3f7c166a321fd18be2c11e2b0cce944a26260a0f8d76c1288f341440746525c\" returns successfully" Dec 13 01:55:31.169161 systemd[1]: Started sshd@13-172.31.19.153:22-139.178.68.195:35246.service - OpenSSH per-connection server daemon (139.178.68.195:35246). Dec 13 01:55:31.348094 sshd[6305]: Accepted publickey for core from 139.178.68.195 port 35246 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:31.351903 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:31.359566 systemd-logind[2005]: New session 14 of user core. Dec 13 01:55:31.367856 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:55:31.616721 sshd[6305]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:31.622063 systemd[1]: sshd@13-172.31.19.153:22-139.178.68.195:35246.service: Deactivated successfully. Dec 13 01:55:31.626426 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:55:31.631393 systemd-logind[2005]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:55:31.633358 systemd-logind[2005]: Removed session 14. Dec 13 01:55:36.655084 systemd[1]: Started sshd@14-172.31.19.153:22-139.178.68.195:36850.service - OpenSSH per-connection server daemon (139.178.68.195:36850). Dec 13 01:55:36.839300 sshd[6323]: Accepted publickey for core from 139.178.68.195 port 36850 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:36.842065 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:36.851473 systemd-logind[2005]: New session 15 of user core. Dec 13 01:55:36.859944 systemd[1]: Started session-15.scope - Session 15 of User core. 
Dec 13 01:55:37.110881 sshd[6323]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:37.117511 systemd[1]: sshd@14-172.31.19.153:22-139.178.68.195:36850.service: Deactivated successfully. Dec 13 01:55:37.121533 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:55:37.123048 systemd-logind[2005]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:55:37.124824 systemd-logind[2005]: Removed session 15. Dec 13 01:55:37.691353 kubelet[3281]: I1213 01:55:37.690912 3281 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:55:42.150205 systemd[1]: Started sshd@15-172.31.19.153:22-139.178.68.195:36858.service - OpenSSH per-connection server daemon (139.178.68.195:36858). Dec 13 01:55:42.338426 sshd[6341]: Accepted publickey for core from 139.178.68.195 port 36858 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:42.341463 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:42.349824 systemd-logind[2005]: New session 16 of user core. Dec 13 01:55:42.358561 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:55:42.615939 sshd[6341]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:42.621181 systemd-logind[2005]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:55:42.622470 systemd[1]: sshd@15-172.31.19.153:22-139.178.68.195:36858.service: Deactivated successfully. Dec 13 01:55:42.626358 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:55:42.630752 systemd-logind[2005]: Removed session 16. Dec 13 01:55:47.658097 systemd[1]: Started sshd@16-172.31.19.153:22-139.178.68.195:46662.service - OpenSSH per-connection server daemon (139.178.68.195:46662). Dec 13 01:55:47.842757 sshd[6356]: Accepted publickey for core from 139.178.68.195 port 46662 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:47.845552 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:47.852736 systemd-logind[2005]: New session 17 of user core. Dec 13 01:55:47.861873 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:55:48.125440 sshd[6356]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:48.131338 systemd-logind[2005]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:55:48.132800 systemd[1]: sshd@16-172.31.19.153:22-139.178.68.195:46662.service: Deactivated successfully. Dec 13 01:55:48.137397 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:55:48.142414 systemd-logind[2005]: Removed session 17. Dec 13 01:55:48.166112 systemd[1]: Started sshd@17-172.31.19.153:22-139.178.68.195:46678.service - OpenSSH per-connection server daemon (139.178.68.195:46678). Dec 13 01:55:48.360627 sshd[6370]: Accepted publickey for core from 139.178.68.195 port 46678 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:48.363379 sshd[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:48.371212 systemd-logind[2005]: New session 18 of user core. Dec 13 01:55:48.380870 systemd[1]: Started session-18.scope - Session 18 of User core. Dec 13 01:55:48.893637 sshd[6370]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:48.903277 systemd[1]: sshd@17-172.31.19.153:22-139.178.68.195:46678.service: Deactivated successfully. Dec 13 01:55:48.913927 systemd[1]: session-18.scope: Deactivated successfully. 
Dec 13 01:55:48.917375 systemd-logind[2005]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:55:48.947730 systemd[1]: Started sshd@18-172.31.19.153:22-139.178.68.195:46682.service - OpenSSH per-connection server daemon (139.178.68.195:46682). Dec 13 01:55:48.949661 systemd-logind[2005]: Removed session 18. Dec 13 01:55:49.131359 sshd[6400]: Accepted publickey for core from 139.178.68.195 port 46682 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:49.135622 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:49.150790 systemd-logind[2005]: New session 19 of user core. Dec 13 01:55:49.161911 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:55:53.157085 sshd[6400]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:53.168157 systemd[1]: sshd@18-172.31.19.153:22-139.178.68.195:46682.service: Deactivated successfully. Dec 13 01:55:53.177123 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:55:53.178806 systemd[1]: session-19.scope: Consumed 1.062s CPU time. Dec 13 01:55:53.180874 systemd-logind[2005]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:55:53.202313 systemd[1]: Started sshd@19-172.31.19.153:22-139.178.68.195:46690.service - OpenSSH per-connection server daemon (139.178.68.195:46690). Dec 13 01:55:53.206153 systemd-logind[2005]: Removed session 19. Dec 13 01:55:53.394100 sshd[6419]: Accepted publickey for core from 139.178.68.195 port 46690 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:53.396926 sshd[6419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:53.411387 systemd-logind[2005]: New session 20 of user core. Dec 13 01:55:53.417854 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:55:54.006100 sshd[6419]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:54.013012 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:55:54.013399 systemd-logind[2005]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:55:54.015091 systemd[1]: sshd@19-172.31.19.153:22-139.178.68.195:46690.service: Deactivated successfully. Dec 13 01:55:54.043152 systemd-logind[2005]: Removed session 20. Dec 13 01:55:54.049121 systemd[1]: Started sshd@20-172.31.19.153:22-139.178.68.195:46706.service - OpenSSH per-connection server daemon (139.178.68.195:46706). Dec 13 01:55:54.243412 sshd[6435]: Accepted publickey for core from 139.178.68.195 port 46706 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:54.252640 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:54.264974 systemd-logind[2005]: New session 21 of user core. Dec 13 01:55:54.271024 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:55:54.573778 sshd[6435]: pam_unix(sshd:session): session closed for user core Dec 13 01:55:54.585719 systemd[1]: sshd@20-172.31.19.153:22-139.178.68.195:46706.service: Deactivated successfully. Dec 13 01:55:54.592055 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 01:55:54.596239 systemd-logind[2005]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:55:54.600082 systemd-logind[2005]: Removed session 21. Dec 13 01:55:59.619148 systemd[1]: Started sshd@21-172.31.19.153:22-139.178.68.195:33368.service - OpenSSH per-connection server daemon (139.178.68.195:33368). 
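Lines such as "session-19.scope: Consumed 1.062s CPU time" work because systemd places every session scope in its own cgroup and reads its CPU accounting on teardown; on a cgroup v2 host like this one the figure should correspond to the usage_usec counter in the scope's cpu.stat file. A small sketch of reading that counter directly (the cgroup path is illustrative and varies with slice layout):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strconv"
	"strings"
)

// cpuUsage returns the usage_usec counter from a cgroup v2 cpu.stat file.
func cpuUsage(cgroupDir string) (uint64, error) {
	f, err := os.Open(cgroupDir + "/cpu.stat")
	if err != nil {
		return 0, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) == 2 && fields[0] == "usage_usec" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("usage_usec not found in %s/cpu.stat", cgroupDir)
}

func main() {
	// Illustrative path; session scopes really live under per-user slices
	// such as /sys/fs/cgroup/user.slice/user-500.slice/session-19.scope.
	usec, err := cpuUsage("/sys/fs/cgroup/user.slice/user-500.slice/session-19.scope")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("consumed %.3fs CPU time\n", float64(usec)/1e6)
}
```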
Dec 13 01:55:59.809943 sshd[6476]: Accepted publickey for core from 139.178.68.195 port 33368 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:55:59.814939 sshd[6476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:55:59.826375 systemd-logind[2005]: New session 22 of user core. Dec 13 01:55:59.833779 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:56:00.112938 sshd[6476]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:00.122117 systemd[1]: sshd@21-172.31.19.153:22-139.178.68.195:33368.service: Deactivated successfully. Dec 13 01:56:00.129723 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:56:00.132348 systemd-logind[2005]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:56:00.136698 systemd-logind[2005]: Removed session 22. Dec 13 01:56:05.154102 systemd[1]: Started sshd@22-172.31.19.153:22-139.178.68.195:33370.service - OpenSSH per-connection server daemon (139.178.68.195:33370). Dec 13 01:56:05.334139 sshd[6511]: Accepted publickey for core from 139.178.68.195 port 33370 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:05.337333 sshd[6511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:05.345075 systemd-logind[2005]: New session 23 of user core. Dec 13 01:56:05.354833 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:56:05.608686 sshd[6511]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:05.613540 systemd[1]: sshd@22-172.31.19.153:22-139.178.68.195:33370.service: Deactivated successfully. Dec 13 01:56:05.618130 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:56:05.622355 systemd-logind[2005]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:56:05.624527 systemd-logind[2005]: Removed session 23. Dec 13 01:56:10.649134 systemd[1]: Started sshd@23-172.31.19.153:22-139.178.68.195:59606.service - OpenSSH per-connection server daemon (139.178.68.195:59606). Dec 13 01:56:10.830402 sshd[6525]: Accepted publickey for core from 139.178.68.195 port 59606 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:10.833335 sshd[6525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:10.841459 systemd-logind[2005]: New session 24 of user core. Dec 13 01:56:10.849846 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:56:11.092810 sshd[6525]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:11.098521 systemd[1]: sshd@23-172.31.19.153:22-139.178.68.195:59606.service: Deactivated successfully. Dec 13 01:56:11.104906 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:56:11.106871 systemd-logind[2005]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:56:11.110838 systemd-logind[2005]: Removed session 24. Dec 13 01:56:16.138277 systemd[1]: Started sshd@24-172.31.19.153:22-139.178.68.195:40922.service - OpenSSH per-connection server daemon (139.178.68.195:40922). Dec 13 01:56:16.311273 sshd[6540]: Accepted publickey for core from 139.178.68.195 port 40922 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:16.314035 sshd[6540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:16.326270 systemd-logind[2005]: New session 25 of user core. Dec 13 01:56:16.336875 systemd[1]: Started session-25.scope - Session 25 of User core. 
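The long run of sshd entries above is one short session every few seconds from 139.178.68.195, each handled by a per-connection socket-activated unit (sshd@N-<local>:22-<peer>:<port>.service) plus a logind session scope. When auditing a log like this, pairing each pam_unix "session opened" with its matching "session closed" by sshd PID gives per-session durations; a hypothetical sketch that does this for journal-style text on stdin:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches entries such as:
//   Dec 13 01:55:31.351903 sshd[6305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
//   Dec 13 01:55:31.616721 sshd[6305]: pam_unix(sshd:session): session closed for user core
var sessionRe = regexp.MustCompile(
	`(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	opened := map[string]time.Time{} // sshd PID -> open time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // this log's lines run long

	for sc.Scan() {
		// FindAll copes with several journal entries flattened onto one line.
		for _, m := range sessionRe.FindAllStringSubmatch(sc.Text(), -1) {
			// The syslog timestamp carries no year; fine for duration math.
			ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
			if err != nil {
				continue
			}
			pid, event := m[2], m[3]
			if event == "opened" {
				opened[pid] = ts
			} else if start, ok := opened[pid]; ok {
				fmt.Printf("sshd[%s]: session lasted %s\n", pid, ts.Sub(start))
				delete(opened, pid)
			}
		}
	}
}
```

Run against this log it would report sessions of only a few hundred milliseconds each at a steady cadence, which looks like automation rather than interactive use.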
Dec 13 01:56:16.580250 sshd[6540]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:16.587157 systemd[1]: sshd@24-172.31.19.153:22-139.178.68.195:40922.service: Deactivated successfully. Dec 13 01:56:16.591741 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:56:16.593562 systemd-logind[2005]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:56:16.596084 systemd-logind[2005]: Removed session 25. Dec 13 01:56:21.618103 systemd[1]: Started sshd@25-172.31.19.153:22-139.178.68.195:40928.service - OpenSSH per-connection server daemon (139.178.68.195:40928). Dec 13 01:56:21.794831 sshd[6554]: Accepted publickey for core from 139.178.68.195 port 40928 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:21.797621 sshd[6554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:21.805877 systemd-logind[2005]: New session 26 of user core. Dec 13 01:56:21.814867 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:56:22.057129 sshd[6554]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:22.062024 systemd[1]: sshd@25-172.31.19.153:22-139.178.68.195:40928.service: Deactivated successfully. Dec 13 01:56:22.065886 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:56:22.069092 systemd-logind[2005]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:56:22.071747 systemd-logind[2005]: Removed session 26. Dec 13 01:56:27.097199 systemd[1]: Started sshd@26-172.31.19.153:22-139.178.68.195:44718.service - OpenSSH per-connection server daemon (139.178.68.195:44718). Dec 13 01:56:27.271986 sshd[6587]: Accepted publickey for core from 139.178.68.195 port 44718 ssh2: RSA SHA256:3zfVqstnlRSTFN99Cx31drkf9HaziXkWInlPTzuuhf0 Dec 13 01:56:27.274668 sshd[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:56:27.283758 systemd-logind[2005]: New session 27 of user core. Dec 13 01:56:27.288859 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:56:27.532092 sshd[6587]: pam_unix(sshd:session): session closed for user core Dec 13 01:56:27.538965 systemd[1]: sshd@26-172.31.19.153:22-139.178.68.195:44718.service: Deactivated successfully. Dec 13 01:56:27.544212 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:56:27.545553 systemd-logind[2005]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:56:27.548641 systemd-logind[2005]: Removed session 27. Dec 13 01:56:40.818154 systemd[1]: cri-containerd-2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979.scope: Deactivated successfully. Dec 13 01:56:40.818647 systemd[1]: cri-containerd-2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979.scope: Consumed 6.677s CPU time. 
Dec 13 01:56:40.861719 containerd[2034]: time="2024-12-13T01:56:40.860027943Z" level=info msg="shim disconnected" id=2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979 namespace=k8s.io Dec 13 01:56:40.861719 containerd[2034]: time="2024-12-13T01:56:40.860123427Z" level=warning msg="cleaning up after shim disconnected" id=2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979 namespace=k8s.io Dec 13 01:56:40.861719 containerd[2034]: time="2024-12-13T01:56:40.860143479Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:40.863713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979-rootfs.mount: Deactivated successfully. Dec 13 01:56:41.527024 kubelet[3281]: I1213 01:56:41.526900 3281 scope.go:117] "RemoveContainer" containerID="2dc845853a0da422581e52607b816641b4e730308005ffbab8ce45c50eb9d979" Dec 13 01:56:41.532634 containerd[2034]: time="2024-12-13T01:56:41.531797966Z" level=info msg="CreateContainer within sandbox \"dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Dec 13 01:56:41.560258 containerd[2034]: time="2024-12-13T01:56:41.559951826Z" level=info msg="CreateContainer within sandbox \"dc17f01b8529846c9ea3969f72d17f4f4c79779b796a2b9340074705aaf2dba9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e3f39e294d3f49af7d9f20eb9ae2f465d6b7a2d039daadc2790f6ded6ace8569\"" Dec 13 01:56:41.562112 containerd[2034]: time="2024-12-13T01:56:41.561635546Z" level=info msg="StartContainer for \"e3f39e294d3f49af7d9f20eb9ae2f465d6b7a2d039daadc2790f6ded6ace8569\"" Dec 13 01:56:41.624882 systemd[1]: Started cri-containerd-e3f39e294d3f49af7d9f20eb9ae2f465d6b7a2d039daadc2790f6ded6ace8569.scope - libcontainer container e3f39e294d3f49af7d9f20eb9ae2f465d6b7a2d039daadc2790f6ded6ace8569. Dec 13 01:56:41.674679 containerd[2034]: time="2024-12-13T01:56:41.674619567Z" level=info msg="StartContainer for \"e3f39e294d3f49af7d9f20eb9ae2f465d6b7a2d039daadc2790f6ded6ace8569\" returns successfully" Dec 13 01:56:42.751958 systemd[1]: cri-containerd-231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c.scope: Deactivated successfully. Dec 13 01:56:42.752459 systemd[1]: cri-containerd-231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c.scope: Consumed 5.464s CPU time, 21.8M memory peak, 0B memory swap peak. Dec 13 01:56:42.802076 containerd[2034]: time="2024-12-13T01:56:42.800413312Z" level=info msg="shim disconnected" id=231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c namespace=k8s.io Dec 13 01:56:42.802076 containerd[2034]: time="2024-12-13T01:56:42.800484208Z" level=warning msg="cleaning up after shim disconnected" id=231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c namespace=k8s.io Dec 13 01:56:42.802076 containerd[2034]: time="2024-12-13T01:56:42.800507080Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:42.806712 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c-rootfs.mount: Deactivated successfully. 
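At 01:56:40 the character of the log changes: the tigera-operator container's scope is deactivated after 6.677s of CPU, its containerd shim disconnects, and the kubelet responds by removing the dead container and creating a replacement inside the same sandbox with Attempt:1. The entries that follow repeat the cycle for kube-controller-manager and kube-scheduler. A schematic sketch of that restart step, with hypothetical types standing in for kubelet's internals:

```go
package main

import "fmt"

// container is a stand-in for the kubelet's view of one container in a sandbox.
type container struct {
	Name    string
	Attempt int // incremented on every restart of the same container
	Running bool
}

// restartDead removes a dead container and returns its replacement for the
// same sandbox, bumping the attempt counter (hence "Attempt:1" in the
// CreateContainer entries above).
func restartDead(c container) container {
	if c.Running {
		return c
	}
	fmt.Printf("RemoveContainer %q (attempt %d)\n", c.Name, c.Attempt)
	next := container{Name: c.Name, Attempt: c.Attempt + 1, Running: true}
	fmt.Printf("CreateContainer %q (attempt %d)\n", next.Name, next.Attempt)
	return next
}

func main() {
	op := container{Name: "tigera-operator", Attempt: 0, Running: false}
	_ = restartDead(op)
}
```

Three control-plane-adjacent containers dying within a few seconds of one another usually points at node-level pressure rather than three independent bugs, and the lease failures at the end of this log support that reading.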
Dec 13 01:56:43.537794 kubelet[3281]: I1213 01:56:43.536931 3281 scope.go:117] "RemoveContainer" containerID="231cc9998daf435642fca17eb7116ca823f6ffb5867ff59af670a0bec3cd1d5c" Dec 13 01:56:43.542310 containerd[2034]: time="2024-12-13T01:56:43.542246476Z" level=info msg="CreateContainer within sandbox \"894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Dec 13 01:56:43.576772 containerd[2034]: time="2024-12-13T01:56:43.576558328Z" level=info msg="CreateContainer within sandbox \"894e1c52f0ca3a46cbdad4cc82aac67e68520b72b9e255b2a61c2e6e85b6531e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"558478ce73f951674f1d96308b0e1b86c996f10ff65706402a9ffe56aa307758\"" Dec 13 01:56:43.578672 containerd[2034]: time="2024-12-13T01:56:43.577759108Z" level=info msg="StartContainer for \"558478ce73f951674f1d96308b0e1b86c996f10ff65706402a9ffe56aa307758\"" Dec 13 01:56:43.639897 systemd[1]: Started cri-containerd-558478ce73f951674f1d96308b0e1b86c996f10ff65706402a9ffe56aa307758.scope - libcontainer container 558478ce73f951674f1d96308b0e1b86c996f10ff65706402a9ffe56aa307758. Dec 13 01:56:43.711134 containerd[2034]: time="2024-12-13T01:56:43.711071345Z" level=info msg="StartContainer for \"558478ce73f951674f1d96308b0e1b86c996f10ff65706402a9ffe56aa307758\" returns successfully" Dec 13 01:56:46.609606 systemd[1]: cri-containerd-d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80.scope: Deactivated successfully. Dec 13 01:56:46.610116 systemd[1]: cri-containerd-d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80.scope: Consumed 3.962s CPU time, 16.2M memory peak, 0B memory swap peak. Dec 13 01:56:46.651682 containerd[2034]: time="2024-12-13T01:56:46.651383336Z" level=info msg="shim disconnected" id=d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80 namespace=k8s.io Dec 13 01:56:46.654543 containerd[2034]: time="2024-12-13T01:56:46.651558476Z" level=warning msg="cleaning up after shim disconnected" id=d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80 namespace=k8s.io Dec 13 01:56:46.654543 containerd[2034]: time="2024-12-13T01:56:46.652262024Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:56:46.654107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80-rootfs.mount: Deactivated successfully. 
Dec 13 01:56:46.677129 containerd[2034]: time="2024-12-13T01:56:46.676889300Z" level=warning msg="cleanup warnings time=\"2024-12-13T01:56:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Dec 13 01:56:47.556753 kubelet[3281]: I1213 01:56:47.556702 3281 scope.go:117] "RemoveContainer" containerID="d2f7a78176ae60a4cd476ed493d908aa1603167d2ecd68b2805187dbd5215d80" Dec 13 01:56:47.559836 containerd[2034]: time="2024-12-13T01:56:47.559755944Z" level=info msg="CreateContainer within sandbox \"e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Dec 13 01:56:47.588612 containerd[2034]: time="2024-12-13T01:56:47.586089908Z" level=info msg="CreateContainer within sandbox \"e7d82fbe3bb824c130dbc679bc67c6c5031f8e650c67928deb17c2ce0b4b1284\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"60377119db643aadc522fd1a56741401c5e544cd609f4d1979a2417ef6c8bdf5\"" Dec 13 01:56:47.587969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425642789.mount: Deactivated successfully. Dec 13 01:56:47.591493 containerd[2034]: time="2024-12-13T01:56:47.590528936Z" level=info msg="StartContainer for \"60377119db643aadc522fd1a56741401c5e544cd609f4d1979a2417ef6c8bdf5\"" Dec 13 01:56:47.646986 systemd[1]: Started cri-containerd-60377119db643aadc522fd1a56741401c5e544cd609f4d1979a2417ef6c8bdf5.scope - libcontainer container 60377119db643aadc522fd1a56741401c5e544cd609f4d1979a2417ef6c8bdf5. Dec 13 01:56:47.723251 containerd[2034]: time="2024-12-13T01:56:47.722195037Z" level=info msg="StartContainer for \"60377119db643aadc522fd1a56741401c5e544cd609f4d1979a2417ef6c8bdf5\" returns successfully" Dec 13 01:56:50.351303 kubelet[3281]: E1213 01:56:50.351061 3281 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-19-153)" Dec 13 01:57:00.352520 kubelet[3281]: E1213 01:57:00.352458 3281 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.153:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-153?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
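The closing kubelet errors show node lease renewal failing twice: first the API server itself times out ("may still be processing the request"), then the client gives up after its own 10s timeout on the PUT to kube-node-lease/ip-172-31-19-153. The lease is an ordinary coordination.k8s.io object that the kubelet heartbeats by bumping spec.renewTime. A minimal client-go sketch of one renewal (not kubelet's actual code; in-cluster config and the node name from this log are assumed):

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	leases := client.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(ctx, "ip-172-31-19-153", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now // a fresh RenewTime is the node heartbeat
	if _, err := leases.Update(ctx, lease, metav1.UpdateOptions{}); err != nil {
		panic(err) // this Update is what times out in the log above
	}
}
```

If renewals keep failing past the lease duration, the node lifecycle controller will begin treating ip-172-31-19-153 as unhealthy, so these two lines are the symptom to chase first when reading this log.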