Jan 17 12:01:24.198483 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 12:01:24.198532 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:01:24.198558 kernel: KASLR disabled due to lack of seed
Jan 17 12:01:24.198575 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:01:24.198591 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Jan 17 12:01:24.198607 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:01:24.198625 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 12:01:24.198641 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 12:01:24.198657 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 12:01:24.198673 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jan 17 12:01:24.198693 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 12:01:24.198708 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 12:01:24.198724 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 12:01:24.198740 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 12:01:24.198758 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 12:01:24.198779 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 12:01:24.198797 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 12:01:24.198813 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 12:01:24.198829 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 12:01:24.198846 kernel: printk: bootconsole [uart0] enabled
Jan 17 12:01:24.198862 kernel: NUMA: Failed to initialise from firmware
Jan 17 12:01:24.198878 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:01:24.198895 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 12:01:24.198911 kernel: Zone ranges:
Jan 17 12:01:24.198927 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 12:01:24.198944 kernel: DMA32 empty
Jan 17 12:01:24.198965 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 12:01:24.198981 kernel: Movable zone start for each node
Jan 17 12:01:24.198997 kernel: Early memory node ranges
Jan 17 12:01:24.199014 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 12:01:24.199030 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 12:01:24.199046 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 12:01:24.199062 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 12:01:24.199079 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 12:01:24.199095 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 12:01:24.199111 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 12:01:24.199127 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 12:01:24.199144 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 12:01:24.199164 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 12:01:24.199182 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:01:24.199205 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 12:01:24.199223 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:01:24.199267 kernel: psci: Trusted OS migration not required
Jan 17 12:01:24.199292 kernel: psci: SMC Calling Convention v1.1
Jan 17 12:01:24.199310 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:01:24.199328 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:01:24.199345 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 12:01:24.199363 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:01:24.199380 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:01:24.199397 kernel: CPU features: detected: Spectre-v2
Jan 17 12:01:24.199414 kernel: CPU features: detected: Spectre-v3a
Jan 17 12:01:24.199432 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:01:24.199449 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 12:01:24.199466 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 12:01:24.199488 kernel: alternatives: applying boot alternatives
Jan 17 12:01:24.199508 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:01:24.199527 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:01:24.199544 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:01:24.199561 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:01:24.199579 kernel: Fallback order for Node 0: 0
Jan 17 12:01:24.199596 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 12:01:24.199613 kernel: Policy zone: Normal
Jan 17 12:01:24.199630 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:01:24.199647 kernel: software IO TLB: area num 2.
Jan 17 12:01:24.199664 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 12:01:24.199687 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Jan 17 12:01:24.199705 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:01:24.199722 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:01:24.199741 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:01:24.199759 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:01:24.199777 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:01:24.199794 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:01:24.199812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:01:24.199829 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:01:24.199847 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:01:24.199864 kernel: GICv3: 96 SPIs implemented
Jan 17 12:01:24.199885 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:01:24.199903 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:01:24.199920 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 12:01:24.199937 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 12:01:24.199955 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 12:01:24.199972 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 12:01:24.199990 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 12:01:24.200008 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 12:01:24.200025 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 12:01:24.200043 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 12:01:24.200061 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:01:24.200079 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 12:01:24.200102 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 12:01:24.200120 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 12:01:24.200137 kernel: Console: colour dummy device 80x25
Jan 17 12:01:24.200156 kernel: printk: console [tty1] enabled
Jan 17 12:01:24.200173 kernel: ACPI: Core revision 20230628
Jan 17 12:01:24.200192 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 12:01:24.200210 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:01:24.201748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:01:24.201786 kernel: landlock: Up and running.
Jan 17 12:01:24.201814 kernel: SELinux: Initializing.
Jan 17 12:01:24.201833 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:01:24.201851 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:01:24.201869 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:01:24.201887 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:01:24.201905 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:01:24.201924 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:01:24.201943 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 12:01:24.201961 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 12:01:24.201983 kernel: Remapping and enabling EFI services.
Jan 17 12:01:24.202001 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:01:24.202019 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:01:24.202036 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 12:01:24.202054 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 12:01:24.202072 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 12:01:24.202090 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:01:24.202107 kernel: SMP: Total of 2 processors activated.
Jan 17 12:01:24.202125 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:01:24.202147 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 12:01:24.202165 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:01:24.202183 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:01:24.202212 kernel: alternatives: applying system-wide alternatives
Jan 17 12:01:24.202254 kernel: devtmpfs: initialized
Jan 17 12:01:24.202276 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:01:24.202294 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:01:24.202313 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:01:24.202331 kernel: SMBIOS 3.0.0 present.
Jan 17 12:01:24.202350 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 12:01:24.202374 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:01:24.202393 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:01:24.202412 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:01:24.202431 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:01:24.202449 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:01:24.202468 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Jan 17 12:01:24.202486 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:01:24.202509 kernel: cpuidle: using governor menu
Jan 17 12:01:24.202528 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:01:24.202546 kernel: ASID allocator initialised with 65536 entries
Jan 17 12:01:24.202565 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:01:24.202583 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:01:24.202601 kernel: Modules: 17520 pages in range for non-PLT usage
Jan 17 12:01:24.202620 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:01:24.202638 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:01:24.202656 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:01:24.202680 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:01:24.202698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:01:24.202717 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:01:24.202735 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:01:24.202754 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:01:24.202772 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:01:24.202791 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:01:24.202809 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:01:24.202827 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:01:24.202850 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:01:24.202869 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:01:24.202887 kernel: ACPI: Interpreter enabled
Jan 17 12:01:24.202905 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:01:24.202924 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 12:01:24.202942 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jan 17 12:01:24.203260 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 12:01:24.203477 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 12:01:24.203699 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 12:01:24.203907 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jan 17 12:01:24.204114 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jan 17 12:01:24.204146 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 12:01:24.204165 kernel: acpiphp: Slot [1] registered
Jan 17 12:01:24.204184 kernel: acpiphp: Slot [2] registered
Jan 17 12:01:24.204202 kernel: acpiphp: Slot [3] registered
Jan 17 12:01:24.204221 kernel: acpiphp: Slot [4] registered
Jan 17 12:01:24.205287 kernel: acpiphp: Slot [5] registered
Jan 17 12:01:24.205319 kernel: acpiphp: Slot [6] registered
Jan 17 12:01:24.205339 kernel: acpiphp: Slot [7] registered
Jan 17 12:01:24.205358 kernel: acpiphp: Slot [8] registered
Jan 17 12:01:24.205377 kernel: acpiphp: Slot [9] registered
Jan 17 12:01:24.205395 kernel: acpiphp: Slot [10] registered
Jan 17 12:01:24.205413 kernel: acpiphp: Slot [11] registered
Jan 17 12:01:24.205432 kernel: acpiphp: Slot [12] registered
Jan 17 12:01:24.205451 kernel: acpiphp: Slot [13] registered
Jan 17 12:01:24.205469 kernel: acpiphp: Slot [14] registered
Jan 17 12:01:24.205493 kernel: acpiphp: Slot [15] registered
Jan 17 12:01:24.205512 kernel: acpiphp: Slot [16] registered
Jan 17 12:01:24.205550 kernel: acpiphp: Slot [17] registered
Jan 17 12:01:24.205571 kernel: acpiphp: Slot [18] registered
Jan 17 12:01:24.205589 kernel: acpiphp: Slot [19] registered
Jan 17 12:01:24.205608 kernel: acpiphp: Slot [20] registered
Jan 17 12:01:24.205626 kernel: acpiphp: Slot [21] registered
Jan 17 12:01:24.205645 kernel: acpiphp: Slot [22] registered
Jan 17 12:01:24.205664 kernel: acpiphp: Slot [23] registered
Jan 17 12:01:24.205689 kernel: acpiphp: Slot [24] registered
Jan 17 12:01:24.205708 kernel: acpiphp: Slot [25] registered
Jan 17 12:01:24.205726 kernel: acpiphp: Slot [26] registered
Jan 17 12:01:24.205745 kernel: acpiphp: Slot [27] registered
Jan 17 12:01:24.205763 kernel: acpiphp: Slot [28] registered
Jan 17 12:01:24.205782 kernel: acpiphp: Slot [29] registered
Jan 17 12:01:24.205800 kernel: acpiphp: Slot [30] registered
Jan 17 12:01:24.205818 kernel: acpiphp: Slot [31] registered
Jan 17 12:01:24.205837 kernel: PCI host bridge to bus 0000:00
Jan 17 12:01:24.206085 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 12:01:24.206333 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 12:01:24.206531 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:01:24.206724 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jan 17 12:01:24.206978 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 12:01:24.207223 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 12:01:24.208690 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 12:01:24.208923 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 12:01:24.209132 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 12:01:24.209407 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:01:24.209678 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 12:01:24.209902 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 12:01:24.210113 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 12:01:24.210447 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 12:01:24.210714 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 12:01:24.210921 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jan 17 12:01:24.211125 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jan 17 12:01:24.211403 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jan 17 12:01:24.211613 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jan 17 12:01:24.211825 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jan 17 12:01:24.212056 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 12:01:24.212334 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 12:01:24.212526 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 12:01:24.212553 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 12:01:24.212573 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 12:01:24.212592 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 12:01:24.212611 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 12:01:24.212630 kernel: iommu: Default domain type: Translated
Jan 17 12:01:24.212649 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:01:24.212676 kernel: efivars: Registered efivars operations
Jan 17 12:01:24.212694 kernel: vgaarb: loaded
Jan 17 12:01:24.212713 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:01:24.212732 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:01:24.212750 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:01:24.212769 kernel: pnp: PnP ACPI init
Jan 17 12:01:24.212998 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 12:01:24.213027 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 12:01:24.213053 kernel: NET: Registered PF_INET protocol family
Jan 17 12:01:24.213073 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:01:24.213092 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:01:24.213111 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:01:24.213130 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:01:24.213149 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:01:24.213168 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:01:24.213187 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:01:24.213206 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:01:24.213250 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:01:24.213275 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:01:24.213295 kernel: kvm [1]: HYP mode not available
Jan 17 12:01:24.213314 kernel: Initialise system trusted keyrings
Jan 17 12:01:24.213333 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:01:24.213351 kernel: Key type asymmetric registered
Jan 17 12:01:24.213371 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:01:24.213390 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:01:24.213409 kernel: io scheduler mq-deadline registered
Jan 17 12:01:24.213435 kernel: io scheduler kyber registered
Jan 17 12:01:24.213454 kernel: io scheduler bfq registered
Jan 17 12:01:24.213709 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 12:01:24.213740 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 12:01:24.213759 kernel: ACPI: button: Power Button [PWRB]
Jan 17 12:01:24.213779 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 12:01:24.213798 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 12:01:24.213817 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:01:24.213843 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 12:01:24.214076 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 12:01:24.214104 kernel: printk: console [ttyS0] disabled
Jan 17 12:01:24.214123 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 12:01:24.214142 kernel: printk: console [ttyS0] enabled
Jan 17 12:01:24.214161 kernel: printk: bootconsole [uart0] disabled
Jan 17 12:01:24.214180 kernel: thunder_xcv, ver 1.0
Jan 17 12:01:24.214198 kernel: thunder_bgx, ver 1.0
Jan 17 12:01:24.214217 kernel: nicpf, ver 1.0
Jan 17 12:01:24.214290 kernel: nicvf, ver 1.0
Jan 17 12:01:24.214521 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:01:24.214716 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:01:23 UTC (1737115283)
Jan 17 12:01:24.214742 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:01:24.214761 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 12:01:24.214781 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:01:24.214799 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:01:24.214818 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:01:24.214843 kernel: Segment Routing with IPv6
Jan 17 12:01:24.214862 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:01:24.214880 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:01:24.214899 kernel: Key type dns_resolver registered
Jan 17 12:01:24.214917 kernel: registered taskstats version 1
Jan 17 12:01:24.214936 kernel: Loading compiled-in X.509 certificates
Jan 17 12:01:24.214955 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:01:24.214974 kernel: Key type .fscrypt registered
Jan 17 12:01:24.217289 kernel: Key type fscrypt-provisioning registered
Jan 17 12:01:24.217319 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:01:24.217339 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:01:24.217358 kernel: ima: No architecture policies found
Jan 17 12:01:24.217377 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:01:24.217396 kernel: clk: Disabling unused clocks
Jan 17 12:01:24.217415 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:01:24.217434 kernel: Run /init as init process
Jan 17 12:01:24.217453 kernel: with arguments:
Jan 17 12:01:24.217472 kernel: /init
Jan 17 12:01:24.217490 kernel: with environment:
Jan 17 12:01:24.217515 kernel: HOME=/
Jan 17 12:01:24.217552 kernel: TERM=linux
Jan 17 12:01:24.217572 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:01:24.217596 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:01:24.217620 systemd[1]: Detected virtualization amazon.
Jan 17 12:01:24.217640 systemd[1]: Detected architecture arm64.
Jan 17 12:01:24.217659 systemd[1]: Running in initrd.
Jan 17 12:01:24.217685 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:01:24.217705 systemd[1]: Hostname set to .
Jan 17 12:01:24.217725 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 12:01:24.217745 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:01:24.217765 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:01:24.217785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:01:24.217806 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:01:24.217827 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:01:24.217852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:01:24.217874 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:01:24.217897 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:01:24.217918 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:01:24.217938 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:01:24.217958 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:01:24.217978 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:01:24.218003 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:01:24.218023 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:01:24.218043 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:01:24.218063 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:01:24.218083 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:01:24.218103 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:01:24.218123 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:01:24.218143 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:01:24.218164 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:01:24.218189 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:01:24.218209 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:01:24.218289 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:01:24.218315 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:01:24.218337 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:01:24.218357 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:01:24.218377 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:01:24.218397 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:01:24.218423 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:24.218443 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:01:24.218463 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:01:24.218483 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:01:24.218550 systemd-journald[251]: Collecting audit messages is disabled.
Jan 17 12:01:24.218600 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:01:24.218621 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:01:24.218640 kernel: Bridge firewalling registered
Jan 17 12:01:24.218665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:01:24.218686 systemd-journald[251]: Journal started
Jan 17 12:01:24.218723 systemd-journald[251]: Runtime Journal (/run/log/journal/ec20ed71649a7253bf98baeb1323d67f) is 8.0M, max 75.3M, 67.3M free.
Jan 17 12:01:24.182480 systemd-modules-load[252]: Inserted module 'overlay'
Jan 17 12:01:24.226280 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:01:24.213330 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 17 12:01:24.227087 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:24.233508 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:01:24.252553 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:24.266472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:01:24.272489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:01:24.288420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:01:24.308676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:01:24.326825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:01:24.336510 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:01:24.347559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:01:24.353326 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:24.367418 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:01:24.392722 dracut-cmdline[288]: dracut-dracut-053
Jan 17 12:01:24.399748 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:01:24.442691 systemd-resolved[285]: Positive Trust Anchors:
Jan 17 12:01:24.442726 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:01:24.442788 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:01:24.574288 kernel: SCSI subsystem initialized
Jan 17 12:01:24.584261 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:01:24.595265 kernel: iscsi: registered transport (tcp)
Jan 17 12:01:24.617264 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:01:24.617351 kernel: QLogic iSCSI HBA Driver
Jan 17 12:01:24.676265 kernel: random: crng init done
Jan 17 12:01:24.676435 systemd-resolved[285]: Defaulting to hostname 'linux'.
Jan 17 12:01:24.679801 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:01:24.682591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:01:24.708540 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:01:24.716530 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:01:24.757602 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:01:24.757677 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:01:24.759421 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:01:24.825293 kernel: raid6: neonx8 gen() 6654 MB/s
Jan 17 12:01:24.842263 kernel: raid6: neonx4 gen() 6457 MB/s
Jan 17 12:01:24.859263 kernel: raid6: neonx2 gen() 5362 MB/s
Jan 17 12:01:24.876273 kernel: raid6: neonx1 gen() 3927 MB/s
Jan 17 12:01:24.893276 kernel: raid6: int64x8 gen() 3786 MB/s
Jan 17 12:01:24.910270 kernel: raid6: int64x4 gen() 3679 MB/s
Jan 17 12:01:24.927296 kernel: raid6: int64x2 gen() 3537 MB/s
Jan 17 12:01:24.945127 kernel: raid6: int64x1 gen() 2699 MB/s
Jan 17 12:01:24.945205 kernel: raid6: using algorithm neonx8 gen() 6654 MB/s
Jan 17 12:01:24.963037 kernel: raid6: .... xor() 4831 MB/s, rmw enabled
Jan 17 12:01:24.963113 kernel: raid6: using neon recovery algorithm
Jan 17 12:01:24.971821 kernel: xor: measuring software checksum speed
Jan 17 12:01:24.971891 kernel: 8regs : 10976 MB/sec
Jan 17 12:01:24.972967 kernel: 32regs : 11918 MB/sec
Jan 17 12:01:24.975089 kernel: arm64_neon : 9089 MB/sec
Jan 17 12:01:24.975124 kernel: xor: using function: 32regs (11918 MB/sec)
Jan 17 12:01:25.060327 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:01:25.079581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:01:25.088662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:01:25.128794 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Jan 17 12:01:25.137956 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:01:25.163571 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:01:25.203265 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jan 17 12:01:25.258566 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:01:25.273484 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:01:25.388575 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:01:25.403869 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:01:25.450130 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:01:25.454782 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:01:25.459634 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:01:25.461947 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:01:25.470522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:01:25.512190 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:01:25.577248 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 12:01:25.577317 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 12:01:25.615869 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 12:01:25.616137 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 12:01:25.616437 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4f:8c:c9:88:1f
Jan 17 12:01:25.616675 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 12:01:25.616704 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 12:01:25.599539 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:01:25.599768 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:25.602995 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:25.605202 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:01:25.605475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:25.607748 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:25.616819 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:01:25.641269 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 12:01:25.647276 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 12:01:25.647349 kernel: GPT:9289727 != 16777215
Jan 17 12:01:25.647374 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 12:01:25.650022 kernel: GPT:9289727 != 16777215
Jan 17 12:01:25.650072 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 12:01:25.651575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:25.655898 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 12:01:25.663489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:01:25.685222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:01:25.710708 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:01:25.816312 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (523)
Jan 17 12:01:25.839271 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (519)
Jan 17 12:01:25.883042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 12:01:25.926837 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 12:01:25.944247 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 12:01:25.970658 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 12:01:25.972975 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 12:01:25.988545 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:01:26.000827 disk-uuid[663]: Primary Header is updated.
Jan 17 12:01:26.000827 disk-uuid[663]: Secondary Entries is updated.
Jan 17 12:01:26.000827 disk-uuid[663]: Secondary Header is updated.
Jan 17 12:01:26.012310 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:26.019267 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:26.027273 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:27.031111 disk-uuid[664]: The operation has completed successfully.
Jan 17 12:01:27.036172 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 12:01:27.213732 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:01:27.213954 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:01:27.258517 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:01:27.269271 sh[1007]: Success
Jan 17 12:01:27.293281 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:01:27.406727 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:01:27.413888 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:01:27.419442 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:01:27.453931 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:01:27.453993 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:27.454020 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:01:27.455343 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:01:27.456421 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:01:27.585265 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 12:01:27.600844 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:01:27.603222 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:01:27.619615 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:01:27.626493 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:01:27.652150 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:27.652256 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:27.654451 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:27.662309 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:27.678790 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:01:27.680962 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:27.693311 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:01:27.703610 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:01:27.815701 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:01:27.830514 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:01:27.881919 systemd-networkd[1199]: lo: Link UP
Jan 17 12:01:27.881943 systemd-networkd[1199]: lo: Gained carrier
Jan 17 12:01:27.886954 systemd-networkd[1199]: Enumeration completed
Jan 17 12:01:27.887115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:01:27.891502 systemd[1]: Reached target network.target - Network.
Jan 17 12:01:27.893414 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:01:27.893420 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:01:27.905647 systemd-networkd[1199]: eth0: Link UP
Jan 17 12:01:27.905661 systemd-networkd[1199]: eth0: Gained carrier
Jan 17 12:01:27.905679 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:01:27.923334 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.20.160/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 12:01:28.085120 ignition[1106]: Ignition 2.19.0
Jan 17 12:01:28.085142 ignition[1106]: Stage: fetch-offline
Jan 17 12:01:28.085741 ignition[1106]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:28.085766 ignition[1106]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:28.087902 ignition[1106]: Ignition finished successfully
Jan 17 12:01:28.094040 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:01:28.110667 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:01:28.134122 ignition[1209]: Ignition 2.19.0
Jan 17 12:01:28.134145 ignition[1209]: Stage: fetch
Jan 17 12:01:28.135869 ignition[1209]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:28.135898 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:28.137074 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:28.148926 ignition[1209]: PUT result: OK
Jan 17 12:01:28.152367 ignition[1209]: parsed url from cmdline: ""
Jan 17 12:01:28.152389 ignition[1209]: no config URL provided
Jan 17 12:01:28.152406 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:01:28.152434 ignition[1209]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:01:28.152471 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:28.154131 ignition[1209]: PUT result: OK
Jan 17 12:01:28.154210 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 12:01:28.158552 ignition[1209]: GET result: OK
Jan 17 12:01:28.158702 ignition[1209]: parsing config with SHA512: 9217f0f21eb294d9c204d122bc575ca2f328f6b27f4dab71f0624237924d41ccde9ded4f26e71114166c872b64f5ac614e2858f259eee3f411787e300399c800
Jan 17 12:01:28.170124 unknown[1209]: fetched base config from "system"
Jan 17 12:01:28.170802 ignition[1209]: fetch: fetch complete
Jan 17 12:01:28.170146 unknown[1209]: fetched base config from "system"
Jan 17 12:01:28.170814 ignition[1209]: fetch: fetch passed
Jan 17 12:01:28.170160 unknown[1209]: fetched user config from "aws"
Jan 17 12:01:28.170888 ignition[1209]: Ignition finished successfully
Jan 17 12:01:28.181607 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:01:28.200463 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:01:28.222720 ignition[1216]: Ignition 2.19.0
Jan 17 12:01:28.222752 ignition[1216]: Stage: kargs
Jan 17 12:01:28.224067 ignition[1216]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:28.224094 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:28.224291 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:28.227663 ignition[1216]: PUT result: OK
Jan 17 12:01:28.235337 ignition[1216]: kargs: kargs passed
Jan 17 12:01:28.235454 ignition[1216]: Ignition finished successfully
Jan 17 12:01:28.240523 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:01:28.258519 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:01:28.282865 ignition[1222]: Ignition 2.19.0
Jan 17 12:01:28.282886 ignition[1222]: Stage: disks
Jan 17 12:01:28.284125 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:28.284153 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:28.284361 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:28.287749 ignition[1222]: PUT result: OK
Jan 17 12:01:28.297788 ignition[1222]: disks: disks passed
Jan 17 12:01:28.297909 ignition[1222]: Ignition finished successfully
Jan 17 12:01:28.302527 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:01:28.309106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:01:28.311857 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:01:28.316335 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:01:28.322717 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:01:28.326457 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:01:28.345753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:01:28.390578 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 12:01:28.396591 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:01:28.413523 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:01:28.490272 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:01:28.491844 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:01:28.497489 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:01:28.513391 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:01:28.520467 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:01:28.524480 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 12:01:28.524573 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:01:28.524619 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:01:28.548778 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1249)
Jan 17 12:01:28.548840 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:28.550425 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:28.552418 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:28.556482 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:01:28.568251 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:28.568642 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:01:28.580905 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:01:29.061678 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:01:29.070262 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:01:29.078301 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:01:29.097481 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:01:29.430146 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:01:29.446065 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:01:29.452551 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:01:29.469670 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 12:01:29.474277 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:29.519728 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 12:01:29.523751 ignition[1362]: INFO : Ignition 2.19.0
Jan 17 12:01:29.523751 ignition[1362]: INFO : Stage: mount
Jan 17 12:01:29.529351 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:29.529351 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:29.529351 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:29.536317 ignition[1362]: INFO : PUT result: OK
Jan 17 12:01:29.540578 ignition[1362]: INFO : mount: mount passed
Jan 17 12:01:29.542260 ignition[1362]: INFO : Ignition finished successfully
Jan 17 12:01:29.546328 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 12:01:29.555550 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 12:01:29.586429 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:01:29.609259 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373)
Jan 17 12:01:29.613284 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:01:29.613362 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:01:29.613390 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 12:01:29.620284 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 12:01:29.623861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:01:29.660577 ignition[1390]: INFO : Ignition 2.19.0
Jan 17 12:01:29.660577 ignition[1390]: INFO : Stage: files
Jan 17 12:01:29.663963 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 12:01:29.663963 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 12:01:29.663963 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 12:01:29.671260 ignition[1390]: INFO : PUT result: OK
Jan 17 12:01:29.675994 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 12:01:29.678864 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 12:01:29.678864 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 12:01:29.722098 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 12:01:29.725505 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 12:01:29.728598 unknown[1390]: wrote ssh authorized keys file for user: core
Jan 17 12:01:29.731975 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 12:01:29.735056 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:01:29.735056 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 17 12:01:29.733426 systemd-networkd[1199]: eth0: Gained IPv6LL
Jan 17 12:01:29.836185 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 12:01:30.001706 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 17 12:01:30.001706 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 12:01:30.009317 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 12:01:30.012607 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:01:30.016196 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 12:01:30.019480 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:01:30.022741 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 12:01:30.022741 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:01:30.029292 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 12:01:30.033708 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:01:30.037255 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 12:01:30.037255 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 12:01:30.037255 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 12:01:30.037255 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 12:01:30.037255 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jan 17 12:01:30.509351 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 12:01:30.886498 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 12:01:30.890961 ignition[1390]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 12:01:30.908192 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:01:30.908192 ignition[1390]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 12:01:30.908192 ignition[1390]: INFO : files: files passed
Jan 17 12:01:30.908192 ignition[1390]: INFO : Ignition finished successfully
Jan 17 12:01:30.920319 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 12:01:30.934559 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 12:01:30.944058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 12:01:30.959845 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 12:01:30.961312 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 12:01:30.984289 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:30.984289 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:30.994457 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 12:01:30.999537 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 12:01:31.003142 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 12:01:31.015561 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 12:01:31.072386 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:01:31.072808 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:01:31.077123 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:01:31.079565 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:01:31.083420 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:01:31.099754 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:01:31.129617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:31.149776 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:01:31.175739 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:31.180368 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:31.184966 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:01:31.188555 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:01:31.189757 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:01:31.193453 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:01:31.195750 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:01:31.203387 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:01:31.205985 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:01:31.212645 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:01:31.215666 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:01:31.222705 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:01:31.226185 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:01:31.230926 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:01:31.236842 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:01:31.238657 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:01:31.238913 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:01:31.247136 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:31.249952 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:31.254270 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:01:31.258312 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:31.261201 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:01:31.261657 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:01:31.266889 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:01:31.267139 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:01:31.277713 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:01:31.278009 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:01:31.294673 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:01:31.311531 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 17 12:01:31.316012 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:01:31.317445 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:31.326689 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:01:31.326921 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:01:31.347076 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:01:31.347318 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:01:31.367270 ignition[1442]: INFO : Ignition 2.19.0 Jan 17 12:01:31.367270 ignition[1442]: INFO : Stage: umount Jan 17 12:01:31.367270 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:01:31.367270 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 17 12:01:31.376924 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 17 12:01:31.376924 ignition[1442]: INFO : PUT result: OK Jan 17 12:01:31.382543 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:01:31.388265 ignition[1442]: INFO : umount: umount passed Jan 17 12:01:31.388265 ignition[1442]: INFO : Ignition finished successfully Jan 17 12:01:31.391946 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:01:31.392279 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:01:31.396804 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:01:31.396979 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:01:31.404308 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:01:31.404606 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:01:31.408564 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:01:31.408657 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:01:31.411543 systemd[1]: Stopped target network.target - Network. Jan 17 12:01:31.413906 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:01:31.414004 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:01:31.435089 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:01:31.437375 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:01:31.437488 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:31.445353 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:01:31.449008 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:01:31.452331 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:01:31.452424 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:01:31.464923 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:01:31.465004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:01:31.467207 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:01:31.467314 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:01:31.469331 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:01:31.469433 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:01:31.471852 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 17 12:01:31.474206 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:01:31.479022 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:01:31.479201 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:01:31.482556 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:01:31.482733 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:01:31.491721 systemd-networkd[1199]: eth0: DHCPv6 lease lost Jan 17 12:01:31.495778 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:01:31.496050 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:01:31.506600 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:01:31.506802 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:01:31.516190 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:01:31.516333 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:31.537850 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:01:31.542573 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:01:31.542714 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:01:31.549036 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:01:31.549146 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:31.562667 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:01:31.562776 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:31.565609 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:01:31.565707 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:31.569406 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:31.603713 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:01:31.605560 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:01:31.611665 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:01:31.613441 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:31.621173 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:01:31.621331 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:31.623753 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:01:31.623825 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:31.626613 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:01:31.626710 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:01:31.628973 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:01:31.629060 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:01:31.647659 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:01:31.647773 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:01:31.664512 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:01:31.668515 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jan 17 12:01:31.668652 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:31.671511 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:01:31.671623 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:01:31.674790 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:01:31.674905 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:31.678274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:01:31.678403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:31.728091 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:01:31.728657 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:01:31.736817 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:01:31.752576 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:01:31.772185 systemd[1]: Switching root. Jan 17 12:01:31.815594 systemd-journald[251]: Journal stopped Jan 17 12:01:37.975808 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 17 12:01:37.975944 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:01:37.975988 kernel: SELinux: policy capability open_perms=1 Jan 17 12:01:37.976020 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:01:37.976051 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:01:37.976083 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:01:37.976115 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:01:37.976147 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:01:37.976179 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:01:37.976214 kernel: audit: type=1403 audit(1737115292.281:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:01:37.976284 systemd[1]: Successfully loaded SELinux policy in 72.882ms. Jan 17 12:01:37.976335 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.691ms. Jan 17 12:01:37.976371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:01:37.976403 systemd[1]: Detected virtualization amazon. Jan 17 12:01:37.976434 systemd[1]: Detected architecture arm64. Jan 17 12:01:37.976466 systemd[1]: Detected first boot. Jan 17 12:01:37.976509 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:01:37.976548 zram_generator::config[1485]: No configuration found. Jan 17 12:01:37.976591 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:01:37.976624 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:01:37.976655 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:01:37.976688 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:01:37.976805 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:01:37.977591 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Jan 17 12:01:37.977921 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:01:37.977959 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:01:37.978000 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:01:37.978033 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:01:37.978066 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:01:37.978096 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:01:37.978126 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:01:37.978156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:01:37.978188 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:01:37.978220 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:01:37.978638 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:01:37.978682 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:01:37.978716 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:01:37.978748 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:01:37.978787 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:01:37.978820 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:01:37.978851 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:01:37.978883 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:01:37.978920 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:01:37.978952 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:01:37.978983 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:01:37.979014 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:01:37.979045 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:01:37.979077 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:01:37.979109 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:01:37.979139 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:01:37.979171 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:01:37.979203 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:01:37.979276 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:01:37.979310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:01:37.979340 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:01:37.979371 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:01:37.979401 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:01:37.979431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 17 12:01:37.979462 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:01:37.979496 systemd[1]: Reached target machines.target - Containers. Jan 17 12:01:37.979532 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:01:37.979562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:37.979594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:01:37.979626 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:01:37.979658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:37.979688 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:37.979721 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:37.979754 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:01:37.979786 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:37.979821 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:01:37.979852 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:01:37.979881 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:01:37.979913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:01:37.979943 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:01:37.979975 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:01:37.980005 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:01:37.980034 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:01:37.980064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:01:37.980098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:01:37.980131 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:01:37.980165 systemd[1]: Stopped verity-setup.service. Jan 17 12:01:37.980197 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:01:37.980289 systemd-journald[1560]: Collecting audit messages is disabled. Jan 17 12:01:37.980525 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:01:37.980562 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:01:37.980596 systemd-journald[1560]: Journal started Jan 17 12:01:37.980648 systemd-journald[1560]: Runtime Journal (/run/log/journal/ec20ed71649a7253bf98baeb1323d67f) is 8.0M, max 75.3M, 67.3M free. Jan 17 12:01:36.534042 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:01:37.523311 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 17 12:01:37.524271 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:01:37.986732 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:01:37.988172 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Jan 17 12:01:37.997493 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:01:38.000543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:01:38.004284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:01:38.007319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:38.007744 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:38.010659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:38.011116 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:38.014364 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:01:38.017754 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:01:38.036995 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:01:38.039560 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:01:38.039614 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:01:38.044359 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:01:38.057492 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:01:38.063276 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:01:38.065512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:38.070506 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:01:38.082589 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:01:38.086458 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:38.090560 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:01:38.102719 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:01:38.111426 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:01:38.119518 systemd-journald[1560]: Time spent on flushing to /var/log/journal/ec20ed71649a7253bf98baeb1323d67f is 143.582ms for 888 entries. Jan 17 12:01:38.119518 systemd-journald[1560]: System Journal (/var/log/journal/ec20ed71649a7253bf98baeb1323d67f) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:01:39.058215 systemd-journald[1560]: Received client request to flush runtime journal. Jan 17 12:01:39.058319 kernel: loop: module loaded Jan 17 12:01:39.058357 kernel: fuse: init (API version 7.39) Jan 17 12:01:39.058405 kernel: loop0: detected capacity change from 0 to 114432 Jan 17 12:01:39.058440 kernel: ACPI: bus type drm_connector registered Jan 17 12:01:38.369497 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:38.369969 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:38.372919 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Jan 17 12:01:38.857027 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:01:38.867890 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:01:38.868339 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:01:38.892477 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:01:38.895367 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:01:38.897436 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:01:38.924710 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:01:38.932436 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:01:38.935439 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:01:38.938776 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:01:38.954026 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:01:38.956899 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:01:38.986779 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:01:38.993649 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:01:39.020084 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:39.022363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:39.031957 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Jan 17 12:01:39.031982 systemd-tmpfiles[1585]: ACLs are not supported, ignoring. Jan 17 12:01:39.060921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:01:39.069318 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:01:39.196359 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:01:39.206676 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:01:39.237610 udevadm[1614]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:01:39.916919 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:01:39.958359 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:01:39.961344 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:01:40.000550 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:01:40.008012 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:01:40.027436 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:01:40.037451 kernel: loop1: detected capacity change from 0 to 194096 Jan 17 12:01:40.054375 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:01:40.103284 kernel: loop2: detected capacity change from 0 to 114328 Jan 17 12:01:40.114257 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:01:40.127716 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:01:40.169750 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. 
Jan 17 12:01:40.169802 systemd-tmpfiles[1639]: ACLs are not supported, ignoring. Jan 17 12:01:40.179651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:01:40.190600 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:01:40.256352 systemd-udevd[1642]: Using default interface naming scheme 'v255'. Jan 17 12:01:41.694323 kernel: loop3: detected capacity change from 0 to 52536 Jan 17 12:01:41.737274 kernel: loop4: detected capacity change from 0 to 114432 Jan 17 12:01:41.758318 kernel: loop5: detected capacity change from 0 to 194096 Jan 17 12:01:41.786741 kernel: loop6: detected capacity change from 0 to 114328 Jan 17 12:01:41.801304 kernel: loop7: detected capacity change from 0 to 52536 Jan 17 12:01:41.817015 (sd-merge)[1645]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 17 12:01:41.818217 (sd-merge)[1645]: Merged extensions into '/usr'. Jan 17 12:01:41.825654 systemd[1]: Reloading requested from client PID 1584 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:01:41.826142 systemd[1]: Reloading... Jan 17 12:01:42.069799 zram_generator::config[1692]: No configuration found. Jan 17 12:01:42.165551 (udev-worker)[1685]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:42.536197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:42.622315 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1677) Jan 17 12:01:42.726005 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 12:01:42.726156 systemd[1]: Reloading finished in 899 ms. Jan 17 12:01:42.802809 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:01:42.819420 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:01:42.878417 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 17 12:01:42.883327 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:01:42.904603 systemd[1]: Starting ensure-sysext.service... Jan 17 12:01:42.916783 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:01:42.921644 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:01:42.936541 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:01:42.946616 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:01:42.951658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:01:42.959360 systemd[1]: Reloading requested from client PID 1836 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:01:42.959398 systemd[1]: Reloading... Jan 17 12:01:43.061404 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:01:43.062817 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Jan 17 12:01:43.065670 systemd-tmpfiles[1840]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:01:43.068145 systemd-tmpfiles[1840]: ACLs are not supported, ignoring. Jan 17 12:01:43.068540 systemd-tmpfiles[1840]: ACLs are not supported, ignoring. Jan 17 12:01:43.077064 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:43.077337 systemd-tmpfiles[1840]: Skipping /boot Jan 17 12:01:43.101983 systemd-tmpfiles[1840]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:01:43.102346 systemd-tmpfiles[1840]: Skipping /boot Jan 17 12:01:43.170300 zram_generator::config[1871]: No configuration found. Jan 17 12:01:43.442700 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:01:43.450283 lvm[1837]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:43.615318 systemd[1]: Reloading finished in 655 ms. Jan 17 12:01:43.654357 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:01:43.657810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:01:43.660643 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:01:43.679394 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:01:43.696297 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:43.703598 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:01:43.715620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:01:43.724070 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:01:43.736662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:01:43.742120 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:01:43.742545 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:01:43.758524 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:01:43.770829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:43.778290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:01:43.785815 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:01:43.798859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:01:43.800195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:01:43.823070 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:01:43.833296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:01:43.847963 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:01:43.849198 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 17 12:01:43.852160 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:01:43.855293 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:01:43.857294 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:01:43.862679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:01:43.864469 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:01:43.878352 systemd[1]: Finished ensure-sysext.service. Jan 17 12:01:43.880913 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:01:43.885838 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:01:43.887350 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:01:43.895099 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:01:43.895335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:01:43.913801 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:01:43.915388 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:01:43.927032 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:01:43.961462 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:01:44.086143 systemd-resolved[1932]: Positive Trust Anchors: Jan 17 12:01:44.086740 systemd-resolved[1932]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:01:44.086913 systemd-resolved[1932]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:01:44.097188 systemd-resolved[1932]: Defaulting to hostname 'linux'. Jan 17 12:01:44.100757 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:01:44.103041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:01:44.778505 systemd-networkd[1839]: lo: Link UP Jan 17 12:01:44.778524 systemd-networkd[1839]: lo: Gained carrier Jan 17 12:01:44.782393 systemd-networkd[1839]: Enumeration completed Jan 17 12:01:44.783402 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:01:44.785649 systemd[1]: Reached target network.target - Network. Jan 17 12:01:44.787515 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:01:44.787533 systemd-networkd[1839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:01:44.791332 systemd-networkd[1839]: eth0: Link UP Jan 17 12:01:44.792950 systemd-networkd[1839]: eth0: Gained carrier Jan 17 12:01:44.793158 systemd-networkd[1839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:01:44.794631 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:01:44.809438 systemd-networkd[1839]: eth0: DHCPv4 address 172.31.20.160/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 17 12:01:45.214866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:01:45.463642 augenrules[1971]: No rules Jan 17 12:01:45.464766 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:46.497473 systemd-networkd[1839]: eth0: Gained IPv6LL Jan 17 12:01:46.502359 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:01:46.505269 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:01:46.677839 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:01:46.680922 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:01:48.489316 ldconfig[1580]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:01:48.518346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:01:48.526553 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:01:48.556508 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:01:48.559136 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:01:48.561640 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:01:48.564009 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:01:48.566760 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:01:48.569025 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:01:48.571431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:01:48.573750 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:01:48.573820 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:01:48.575529 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:01:48.578152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:01:48.583001 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:01:48.590503 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:01:48.593536 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:01:48.595783 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:01:48.597683 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:01:48.599596 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:48.599653 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:01:48.601942 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:01:48.612654 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 17 12:01:48.620546 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:01:48.626485 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:01:48.641573 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:01:48.643513 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:01:48.649607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:01:48.659651 jq[1987]: false Jan 17 12:01:48.660548 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:01:48.669861 systemd[1]: Started ntpd.service - Network Time Service. Jan 17 12:01:48.685650 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:01:48.702935 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:01:48.710050 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 17 12:01:48.719479 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:01:48.726025 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:01:48.737604 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:01:48.740806 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:01:48.742762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:01:48.760959 dbus-daemon[1986]: [system] SELinux support is enabled Jan 17 12:01:48.769444 dbus-daemon[1986]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1839 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:48.766931 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:01:48.773490 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:01:48.777942 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:01:48.790891 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:01:48.792722 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:01:48.816974 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:01:48.817472 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:01:48.818816 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:01:48.821435 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:01:48.821476 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:01:48.830553 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 17 12:01:48.832792 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:01:48.857614 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 17 12:01:48.871630 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: ntpd 4.2.8p17@1.4004-o Fri Jan 17 10:03:43 UTC 2025 (1): Starting Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: ---------------------------------------------------- Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: corporation. Support and training for ntp-4 are Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: available at https://www.nwtime.org/support Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: ---------------------------------------------------- Jan 17 12:01:48.883539 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: proto: precision = 0.096 usec (-23) Jan 17 12:01:48.871687 ntpd[1991]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 17 12:01:48.871708 ntpd[1991]: ---------------------------------------------------- Jan 17 12:01:48.871728 ntpd[1991]: ntp-4 is maintained by Network Time Foundation, Jan 17 12:01:48.871746 ntpd[1991]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 17 12:01:48.871765 ntpd[1991]: corporation. Support and training for ntp-4 are Jan 17 12:01:48.871784 ntpd[1991]: available at https://www.nwtime.org/support Jan 17 12:01:48.871804 ntpd[1991]: ---------------------------------------------------- Jan 17 12:01:48.879965 ntpd[1991]: proto: precision = 0.096 usec (-23) Jan 17 12:01:48.890079 update_engine[2001]: I20250117 12:01:48.888183 2001 main.cc:92] Flatcar Update Engine starting Jan 17 12:01:48.891323 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:01:48.893784 update_engine[2001]: I20250117 12:01:48.891396 2001 update_check_scheduler.cc:74] Next update check in 4m27s Jan 17 12:01:48.896715 ntpd[1991]: basedate set to 2025-01-05 Jan 17 12:01:48.898398 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: basedate set to 2025-01-05 Jan 17 12:01:48.898398 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:48.896760 ntpd[1991]: gps base set to 2025-01-05 (week 2348) Jan 17 12:01:48.920839 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 12:01:48.918490 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen and drop on 0 v6wildcard [::]:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen normally on 3 eth0 172.31.20.160:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listen normally on 5 eth0 [fe80::44f:8cff:fec9:881f%2]:123 Jan 17 12:01:48.932105 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: Listening on routing socket on fd #22 for interface updates Jan 17 12:01:48.932465 extend-filesystems[1988]: Found loop4 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found loop5 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found loop6 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found loop7 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p1 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p2 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p3 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found usr Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p4 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p6 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p7 Jan 17 12:01:48.932465 extend-filesystems[1988]: Found nvme0n1p9 Jan 17 12:01:48.932465 extend-filesystems[1988]: Checking size of /dev/nvme0n1p9 Jan 17 12:01:48.931473 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:01:48.918572 ntpd[1991]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 17 12:01:49.003219 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:49.003219 ntpd[1991]: 17 Jan 12:01:48 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:48.962099 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:01:48.918850 ntpd[1991]: Listen normally on 2 lo 127.0.0.1:123 Jan 17 12:01:48.962546 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:01:48.918914 ntpd[1991]: Listen normally on 3 eth0 172.31.20.160:123 Jan 17 12:01:48.918978 ntpd[1991]: Listen normally on 4 lo [::1]:123 Jan 17 12:01:48.919048 ntpd[1991]: Listen normally on 5 eth0 [fe80::44f:8cff:fec9:881f%2]:123 Jan 17 12:01:48.919110 ntpd[1991]: Listening on routing socket on fd #22 for interface updates Jan 17 12:01:48.985727 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:48.985790 ntpd[1991]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 17 12:01:49.016262 jq[2002]: true Jan 17 12:01:49.054512 tar[2005]: linux-arm64/helm Jan 17 12:01:49.055466 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:01:49.082137 jq[2036]: true Jan 17 12:01:49.111063 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button) Jan 17 12:01:49.111099 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 17 12:01:49.115953 systemd-logind[2000]: New seat seat0. Jan 17 12:01:49.124777 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 17 12:01:49.199432 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 17 12:01:49.234876 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 12:01:49.243801 coreos-metadata[1985]: Jan 17 12:01:49.243 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:49.249380 coreos-metadata[1985]: Jan 17 12:01:49.245 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 17 12:01:49.249380 coreos-metadata[1985]: Jan 17 12:01:49.249 INFO Fetch successful Jan 17 12:01:49.249380 coreos-metadata[1985]: Jan 17 12:01:49.249 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 17 12:01:49.255878 coreos-metadata[1985]: Jan 17 12:01:49.250 INFO Fetch successful Jan 17 12:01:49.255878 coreos-metadata[1985]: Jan 17 12:01:49.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 17 12:01:49.255878 coreos-metadata[1985]: Jan 17 12:01:49.254 INFO Fetch successful Jan 17 12:01:49.255878 coreos-metadata[1985]: Jan 17 12:01:49.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 17 12:01:49.257813 coreos-metadata[1985]: Jan 17 12:01:49.256 INFO Fetch successful Jan 17 12:01:49.257813 coreos-metadata[1985]: Jan 17 12:01:49.256 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 17 12:01:49.265281 coreos-metadata[1985]: Jan 17 12:01:49.260 INFO Fetch failed with 404: resource not found Jan 17 12:01:49.265281 coreos-metadata[1985]: Jan 17 12:01:49.260 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 17 12:01:49.265281 coreos-metadata[1985]: Jan 17 12:01:49.261 INFO Fetch successful Jan 17 12:01:49.265281 coreos-metadata[1985]: Jan 17 12:01:49.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 17 12:01:49.268808 coreos-metadata[1985]: Jan 17 12:01:49.267 INFO Fetch successful Jan 17 12:01:49.268808 coreos-metadata[1985]: Jan 17 12:01:49.267 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 17 12:01:49.268808 coreos-metadata[1985]: Jan 17 12:01:49.268 INFO Fetch successful Jan 17 12:01:49.268808 coreos-metadata[1985]: Jan 17 12:01:49.268 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 17 12:01:49.269815 coreos-metadata[1985]: Jan 17 12:01:49.269 INFO Fetch successful Jan 17 12:01:49.269815 coreos-metadata[1985]: Jan 17 12:01:49.269 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 17 12:01:49.275960 coreos-metadata[1985]: Jan 17 12:01:49.270 INFO Fetch successful Jan 17 12:01:49.395416 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:01:49.398531 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:01:49.411260 amazon-ssm-agent[2060]: Initializing new seelog logger Jan 17 12:01:49.416288 amazon-ssm-agent[2060]: New Seelog Logger Creation Complete Jan 17 12:01:49.416288 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.416288 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 17 12:01:49.416288 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 processing appconfig overrides Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 processing appconfig overrides Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 processing appconfig overrides Jan 17 12:01:49.420902 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO Proxy environment variables: Jan 17 12:01:49.433257 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.433257 amazon-ssm-agent[2060]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 12:01:49.433257 amazon-ssm-agent[2060]: 2025/01/17 12:01:49 processing appconfig overrides Jan 17 12:01:49.481495 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 17 12:01:49.481787 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 17 12:01:49.486186 dbus-daemon[1986]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2016 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 17 12:01:49.500833 systemd[1]: Starting polkit.service - Authorization Manager... Jan 17 12:01:49.522263 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO https_proxy: Jan 17 12:01:49.531278 extend-filesystems[1988]: Resized partition /dev/nvme0n1p9 Jan 17 12:01:49.540464 extend-filesystems[2087]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:01:49.570459 polkitd[2084]: Started polkitd version 121 Jan 17 12:01:49.593476 polkitd[2084]: Loading rules from directory /etc/polkit-1/rules.d Jan 17 12:01:49.593627 polkitd[2084]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 17 12:01:49.599214 polkitd[2084]: Finished loading, compiling and executing 2 rules Jan 17 12:01:49.600631 dbus-daemon[1986]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 17 12:01:49.600925 systemd[1]: Started polkit.service - Authorization Manager. Jan 17 12:01:49.604893 polkitd[2084]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 17 12:01:49.621408 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO http_proxy: Jan 17 12:01:49.646480 systemd-hostnamed[2016]: Hostname set to (transient) Jan 17 12:01:49.648352 systemd-resolved[1932]: System hostname changed to 'ip-172-31-20-160'. Jan 17 12:01:49.713528 locksmithd[2019]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:01:49.931895 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2065) Jan 17 12:01:49.932001 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO no_proxy: Jan 17 12:01:49.992270 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jan 17 12:01:50.006882 bash[2070]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:50.012438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Jan 17 12:01:50.028413 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO Checking if agent identity type OnPrem can be assumed Jan 17 12:01:50.036738 systemd[1]: Starting sshkeys.service... Jan 17 12:01:50.100853 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 17 12:01:50.114884 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 17 12:01:50.126590 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO Checking if agent identity type EC2 can be assumed Jan 17 12:01:50.139293 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jan 17 12:01:50.187499 extend-filesystems[2087]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 17 12:01:50.187499 extend-filesystems[2087]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:01:50.187499 extend-filesystems[2087]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jan 17 12:01:50.181876 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:01:50.196531 extend-filesystems[1988]: Resized filesystem in /dev/nvme0n1p9 Jan 17 12:01:50.183330 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:01:50.225886 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO Agent will take identity from EC2 Jan 17 12:01:50.325377 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:50.380425 coreos-metadata[2195]: Jan 17 12:01:50.377 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 12:01:50.380425 coreos-metadata[2195]: Jan 17 12:01:50.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 12:01:50.380425 coreos-metadata[2195]: Jan 17 12:01:50.379 INFO Fetch successful Jan 17 12:01:50.380425 coreos-metadata[2195]: Jan 17 12:01:50.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 12:01:50.381194 coreos-metadata[2195]: Jan 17 12:01:50.380 INFO Fetch successful Jan 17 12:01:50.384517 unknown[2195]: wrote ssh authorized keys file for user: core Jan 17 12:01:50.428705 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:50.440787 update-ssh-keys[2207]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:01:50.444124 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 12:01:50.453042 systemd[1]: Finished sshkeys.service. Jan 17 12:01:50.527358 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 12:01:50.607268 containerd[2014]: time="2025-01-17T12:01:50.604909618Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:01:50.630251 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 12:01:50.730893 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 12:01:50.778703 containerd[2014]: time="2025-01-17T12:01:50.777756827Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.785767811Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.785850863Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.785886023Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786209723Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786267383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786393647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786426671Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786718223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786753983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786786083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788159 containerd[2014]: time="2025-01-17T12:01:50.786811463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.788726 containerd[2014]: time="2025-01-17T12:01:50.786970463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.796192 containerd[2014]: time="2025-01-17T12:01:50.794726819Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:01:50.796192 containerd[2014]: time="2025-01-17T12:01:50.794991395Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:01:50.796192 containerd[2014]: time="2025-01-17T12:01:50.795068015Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:01:50.796192 containerd[2014]: time="2025-01-17T12:01:50.795307631Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:01:50.796192 containerd[2014]: time="2025-01-17T12:01:50.795436331Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806384879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806566391Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806616875Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806653199Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806686415Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.806945351Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807355631Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807561251Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807593279Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807623519Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807654011Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807683195Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807711683Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808265 containerd[2014]: time="2025-01-17T12:01:50.807746519Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807789215Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807821687Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807850967Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807887591Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807926339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807957359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.807985823Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808016087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808046207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808076507Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808103819Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808133219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808165355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.808893 containerd[2014]: time="2025-01-17T12:01:50.808198727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818270243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818367743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818401727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818470943Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818544851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818608655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818641931Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818813087Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818875487Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818909327Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.818967671Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.819000143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.819053027Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:01:50.822252 containerd[2014]: time="2025-01-17T12:01:50.819090371Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:01:50.822925 containerd[2014]: time="2025-01-17T12:01:50.819147467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:01:50.828468 containerd[2014]: time="2025-01-17T12:01:50.823039679Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:01:50.828468 containerd[2014]: time="2025-01-17T12:01:50.823210931Z" level=info msg="Connect containerd service" Jan 17 12:01:50.828468 containerd[2014]: time="2025-01-17T12:01:50.823299779Z" level=info msg="using legacy CRI server" Jan 17 12:01:50.828468 containerd[2014]: time="2025-01-17T12:01:50.823355543Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:01:50.828468 containerd[2014]: time="2025-01-17T12:01:50.823540811Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:01:50.831522 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.833881763Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.834121919Z" level=info msg="Start subscribing containerd event" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.834198323Z" level=info msg="Start recovering state" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.835184951Z" level=info msg="Start event monitor" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.835214243Z" level=info msg="Start snapshots syncer" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.835259315Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:01:50.835560 containerd[2014]: time="2025-01-17T12:01:50.835280291Z" level=info msg="Start streaming server" Jan 17 12:01:50.845709 containerd[2014]: time="2025-01-17T12:01:50.845644463Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:01:50.845827 containerd[2014]: time="2025-01-17T12:01:50.845793599Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:01:50.846021 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:01:50.848801 containerd[2014]: time="2025-01-17T12:01:50.848733131Z" level=info msg="containerd successfully booted in 0.247284s" Jan 17 12:01:50.931516 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 12:01:51.036323 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [Registrar] Starting registrar module Jan 17 12:01:51.136251 amazon-ssm-agent[2060]: 2025-01-17 12:01:49 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 12:01:51.172929 tar[2005]: linux-arm64/LICENSE Jan 17 12:01:51.174095 tar[2005]: linux-arm64/README.md Jan 17 12:01:51.211326 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:01:51.404559 amazon-ssm-agent[2060]: 2025-01-17 12:01:51 INFO [EC2Identity] EC2 registration was successful. 
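Earlier in this stretch, extend-filesystems grew the root ext4 filesystem on-line: resize2fs noted that /dev/nvme0n1p9 was mounted on / and resized it from 553472 to 1489915 4k blocks without unmounting. A rough sketch of that step, assuming resize2fs is on the PATH; the device name is taken from the log above purely for illustration:

    import subprocess

    def grow_mounted_ext4(device):
        # With no size argument, resize2fs grows the (mounted) ext4 filesystem
        # to fill the underlying partition, matching the on-line resize above.
        subprocess.run(["resize2fs", device], check=True)

    grow_mounted_ext4("/dev/nvme0n1p9")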
Jan 17 12:01:51.445188 amazon-ssm-agent[2060]: 2025-01-17 12:01:51 INFO [CredentialRefresher] credentialRefresher has started Jan 17 12:01:51.445188 amazon-ssm-agent[2060]: 2025-01-17 12:01:51 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 12:01:51.445188 amazon-ssm-agent[2060]: 2025-01-17 12:01:51 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 12:01:51.505505 amazon-ssm-agent[2060]: 2025-01-17 12:01:51 INFO [CredentialRefresher] Next credential rotation will be in 30.1999930089 minutes Jan 17 12:01:51.544641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:01:51.557127 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:01:51.570808 sshd_keygen[2041]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:01:51.626385 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:01:51.635779 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:01:51.663002 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:01:51.665300 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:01:51.677971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:01:51.704182 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:01:51.719295 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:01:51.729955 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:01:51.733457 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:01:51.735521 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:01:51.742368 systemd[1]: Startup finished in 1.157s (kernel) + 8.459s (initrd) + 19.530s (userspace) = 29.148s. Jan 17 12:01:52.471948 amazon-ssm-agent[2060]: 2025-01-17 12:01:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 12:01:52.529270 kubelet[2223]: E0117 12:01:52.529140 2223 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:01:52.533197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:01:52.533989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:01:52.534581 systemd[1]: kubelet.service: Consumed 1.304s CPU time. Jan 17 12:01:52.572788 amazon-ssm-agent[2060]: 2025-01-17 12:01:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2251) started Jan 17 12:01:52.673609 amazon-ssm-agent[2060]: 2025-01-17 12:01:52 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 12:01:55.155298 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:01:55.161769 systemd[1]: Started sshd@0-172.31.20.160:22-139.178.68.195:45570.service - OpenSSH per-connection server daemon (139.178.68.195:45570). 
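The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") is expected on a node that has not yet joined a cluster: the unit exits with status 1 and systemd keeps rescheduling it until that file exists (it is typically written during kubeadm init/join). A trivial sketch of the same precondition check, for illustration only:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path named in the error above

    if not os.path.exists(KUBELET_CONFIG):
        # Same condition the kubelet keeps hitting in this log: no config yet,
        # so the service fails and is restarted later by systemd.
        sys.exit(f"kubelet config missing: {KUBELET_CONFIG}")
    print("kubelet config present")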
Jan 17 12:01:55.453099 sshd[2263]: Accepted publickey for core from 139.178.68.195 port 45570 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:55.455848 sshd[2263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:55.470647 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:01:55.479765 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:01:55.484175 systemd-logind[2000]: New session 1 of user core. Jan 17 12:01:55.506798 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:01:55.515786 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:01:55.532758 (systemd)[2267]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:01:55.762216 systemd[2267]: Queued start job for default target default.target. Jan 17 12:01:55.774091 systemd[2267]: Created slice app.slice - User Application Slice. Jan 17 12:01:55.774145 systemd[2267]: Reached target paths.target - Paths. Jan 17 12:01:55.774177 systemd[2267]: Reached target timers.target - Timers. Jan 17 12:01:55.776647 systemd[2267]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:01:55.806296 systemd[2267]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:01:55.806536 systemd[2267]: Reached target sockets.target - Sockets. Jan 17 12:01:55.806569 systemd[2267]: Reached target basic.target - Basic System. Jan 17 12:01:55.806667 systemd[2267]: Reached target default.target - Main User Target. Jan 17 12:01:55.806735 systemd[2267]: Startup finished in 262ms. Jan 17 12:01:55.806751 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:01:55.818483 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:01:55.420098 systemd-resolved[1932]: Clock change detected. Flushing caches. Jan 17 12:01:55.427358 systemd-journald[1560]: Time jumped backwards, rotating. Jan 17 12:01:55.517189 systemd[1]: Started sshd@1-172.31.20.160:22-139.178.68.195:43846.service - OpenSSH per-connection server daemon (139.178.68.195:43846). Jan 17 12:01:55.708520 sshd[2279]: Accepted publickey for core from 139.178.68.195 port 43846 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:55.711065 sshd[2279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:55.719178 systemd-logind[2000]: New session 2 of user core. Jan 17 12:01:55.729173 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:01:55.854280 sshd[2279]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:55.859571 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:01:55.860038 systemd[1]: sshd@1-172.31.20.160:22-139.178.68.195:43846.service: Deactivated successfully. Jan 17 12:01:55.863073 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:01:55.866519 systemd-logind[2000]: Removed session 2. Jan 17 12:01:55.898493 systemd[1]: Started sshd@2-172.31.20.160:22-139.178.68.195:43856.service - OpenSSH per-connection server daemon (139.178.68.195:43856). 
Jan 17 12:01:56.065992 sshd[2286]: Accepted publickey for core from 139.178.68.195 port 43856 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:56.068098 sshd[2286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:56.077145 systemd-logind[2000]: New session 3 of user core. Jan 17 12:01:56.089262 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:01:56.208059 sshd[2286]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:56.213288 systemd[1]: sshd@2-172.31.20.160:22-139.178.68.195:43856.service: Deactivated successfully. Jan 17 12:01:56.217173 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:01:56.220658 systemd-logind[2000]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:01:56.222855 systemd-logind[2000]: Removed session 3. Jan 17 12:01:56.250460 systemd[1]: Started sshd@3-172.31.20.160:22-139.178.68.195:43866.service - OpenSSH per-connection server daemon (139.178.68.195:43866). Jan 17 12:01:56.420867 sshd[2293]: Accepted publickey for core from 139.178.68.195 port 43866 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:56.423428 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:56.431207 systemd-logind[2000]: New session 4 of user core. Jan 17 12:01:56.442268 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:01:56.570753 sshd[2293]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:56.575735 systemd[1]: sshd@3-172.31.20.160:22-139.178.68.195:43866.service: Deactivated successfully. Jan 17 12:01:56.579508 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:01:56.582523 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:01:56.584477 systemd-logind[2000]: Removed session 4. Jan 17 12:01:56.611494 systemd[1]: Started sshd@4-172.31.20.160:22-139.178.68.195:43868.service - OpenSSH per-connection server daemon (139.178.68.195:43868). Jan 17 12:01:56.787406 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 43868 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:56.790133 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:56.798045 systemd-logind[2000]: New session 5 of user core. Jan 17 12:01:56.813242 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:01:56.943275 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:01:56.944129 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:56.960561 sudo[2303]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:56.984206 sshd[2300]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:56.990173 systemd[1]: sshd@4-172.31.20.160:22-139.178.68.195:43868.service: Deactivated successfully. Jan 17 12:01:56.993229 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:01:56.996456 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:01:56.998631 systemd-logind[2000]: Removed session 5. Jan 17 12:01:57.025434 systemd[1]: Started sshd@5-172.31.20.160:22-139.178.68.195:43870.service - OpenSSH per-connection server daemon (139.178.68.195:43870). 
Jan 17 12:01:57.199965 sshd[2308]: Accepted publickey for core from 139.178.68.195 port 43870 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:57.202509 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:57.210772 systemd-logind[2000]: New session 6 of user core. Jan 17 12:01:57.231173 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:01:57.334365 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:01:57.335090 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:57.341312 sudo[2312]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:57.351537 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:01:57.352183 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:57.378397 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:57.381523 auditctl[2315]: No rules Jan 17 12:01:57.382252 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:01:57.382604 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:57.388882 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:01:57.438642 augenrules[2333]: No rules Jan 17 12:01:57.442113 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:01:57.444290 sudo[2311]: pam_unix(sudo:session): session closed for user root Jan 17 12:01:57.467231 sshd[2308]: pam_unix(sshd:session): session closed for user core Jan 17 12:01:57.474639 systemd[1]: sshd@5-172.31.20.160:22-139.178.68.195:43870.service: Deactivated successfully. Jan 17 12:01:57.477729 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:01:57.480384 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:01:57.482265 systemd-logind[2000]: Removed session 6. Jan 17 12:01:57.509677 systemd[1]: Started sshd@6-172.31.20.160:22-139.178.68.195:43874.service - OpenSSH per-connection server daemon (139.178.68.195:43874). Jan 17 12:01:57.676222 sshd[2341]: Accepted publickey for core from 139.178.68.195 port 43874 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:01:57.678763 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:01:57.688234 systemd-logind[2000]: New session 7 of user core. Jan 17 12:01:57.697250 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:01:57.802340 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:01:57.803600 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:01:58.401184 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:01:58.415835 (dockerd)[2361]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:01:58.932317 dockerd[2361]: time="2025-01-17T12:01:58.932185833Z" level=info msg="Starting up" Jan 17 12:01:59.178679 dockerd[2361]: time="2025-01-17T12:01:59.178618326Z" level=info msg="Loading containers: start." 
Jan 17 12:01:59.383085 kernel: Initializing XFRM netlink socket Jan 17 12:01:59.457466 (udev-worker)[2386]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:01:59.562846 systemd-networkd[1839]: docker0: Link UP Jan 17 12:01:59.589541 dockerd[2361]: time="2025-01-17T12:01:59.589379912Z" level=info msg="Loading containers: done." Jan 17 12:01:59.616739 dockerd[2361]: time="2025-01-17T12:01:59.616648364Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:01:59.616957 dockerd[2361]: time="2025-01-17T12:01:59.616804028Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:01:59.617991 dockerd[2361]: time="2025-01-17T12:01:59.617031968Z" level=info msg="Daemon has completed initialization" Jan 17 12:01:59.618075 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3389625937-merged.mount: Deactivated successfully. Jan 17 12:01:59.678143 dockerd[2361]: time="2025-01-17T12:01:59.677266473Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:01:59.677706 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:02:00.852648 containerd[2014]: time="2025-01-17T12:02:00.852563026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 17 12:02:01.517683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount465523255.mount: Deactivated successfully. Jan 17 12:02:02.179077 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:02:02.192250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:02.494232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:02.496411 (kubelet)[2563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:02.579777 kubelet[2563]: E0117 12:02:02.579623 2563 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:02.586837 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:02.587267 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:02:03.987995 containerd[2014]: time="2025-01-17T12:02:03.987760358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:03.990351 containerd[2014]: time="2025-01-17T12:02:03.990266438Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935" Jan 17 12:02:03.991986 containerd[2014]: time="2025-01-17T12:02:03.991887326Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:03.997731 containerd[2014]: time="2025-01-17T12:02:03.997639970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:04.000436 containerd[2014]: time="2025-01-17T12:02:04.000149194Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 3.147509788s" Jan 17 12:02:04.000436 containerd[2014]: time="2025-01-17T12:02:04.000215806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\"" Jan 17 12:02:04.039456 containerd[2014]: time="2025-01-17T12:02:04.039141166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 17 12:02:07.062000 containerd[2014]: time="2025-01-17T12:02:07.061571461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:07.063841 containerd[2014]: time="2025-01-17T12:02:07.063775249Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561" Jan 17 12:02:07.065528 containerd[2014]: time="2025-01-17T12:02:07.065411677Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:07.072078 containerd[2014]: time="2025-01-17T12:02:07.071952517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:07.076187 containerd[2014]: time="2025-01-17T12:02:07.074378005Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 3.035170779s" Jan 17 12:02:07.076187 containerd[2014]: time="2025-01-17T12:02:07.074441389Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\"" Jan 17 12:02:07.116393 
containerd[2014]: time="2025-01-17T12:02:07.116316914Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 17 12:02:08.832044 containerd[2014]: time="2025-01-17T12:02:08.831965418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:08.834948 containerd[2014]: time="2025-01-17T12:02:08.834529722Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338" Jan 17 12:02:08.836178 containerd[2014]: time="2025-01-17T12:02:08.836106570Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:08.843032 containerd[2014]: time="2025-01-17T12:02:08.842962290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:08.845481 containerd[2014]: time="2025-01-17T12:02:08.844962906Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.728573344s" Jan 17 12:02:08.845481 containerd[2014]: time="2025-01-17T12:02:08.845022570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\"" Jan 17 12:02:08.882731 containerd[2014]: time="2025-01-17T12:02:08.882635118Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 17 12:02:10.308060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount978781528.mount: Deactivated successfully. 
Jan 17 12:02:10.817293 containerd[2014]: time="2025-01-17T12:02:10.817219592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:10.818557 containerd[2014]: time="2025-01-17T12:02:10.818416640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712" Jan 17 12:02:10.821010 containerd[2014]: time="2025-01-17T12:02:10.820771628Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:10.826314 containerd[2014]: time="2025-01-17T12:02:10.826211588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:10.828441 containerd[2014]: time="2025-01-17T12:02:10.828078236Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.94538289s" Jan 17 12:02:10.828441 containerd[2014]: time="2025-01-17T12:02:10.828150128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\"" Jan 17 12:02:10.870961 containerd[2014]: time="2025-01-17T12:02:10.870877136Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:02:11.529735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount869469383.mount: Deactivated successfully. 
Jan 17 12:02:12.617394 containerd[2014]: time="2025-01-17T12:02:12.617313777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:12.620484 containerd[2014]: time="2025-01-17T12:02:12.620428665Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jan 17 12:02:12.622068 containerd[2014]: time="2025-01-17T12:02:12.621978777Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:12.630793 containerd[2014]: time="2025-01-17T12:02:12.630675045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:12.633486 containerd[2014]: time="2025-01-17T12:02:12.633391245Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.762421649s" Jan 17 12:02:12.633846 containerd[2014]: time="2025-01-17T12:02:12.633669429Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jan 17 12:02:12.671270 containerd[2014]: time="2025-01-17T12:02:12.671207205Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:02:12.679104 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:02:12.688295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:13.000604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:13.019665 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:13.105860 kubelet[2667]: E0117 12:02:13.105726 2667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:13.110643 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:13.111059 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:13.197012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2254391652.mount: Deactivated successfully. 
Jan 17 12:02:13.206197 containerd[2014]: time="2025-01-17T12:02:13.206121392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:13.207872 containerd[2014]: time="2025-01-17T12:02:13.207788012Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jan 17 12:02:13.210567 containerd[2014]: time="2025-01-17T12:02:13.210481448Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:13.215987 containerd[2014]: time="2025-01-17T12:02:13.215895260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:13.218042 containerd[2014]: time="2025-01-17T12:02:13.217815908Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 546.537927ms" Jan 17 12:02:13.218042 containerd[2014]: time="2025-01-17T12:02:13.217871816Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jan 17 12:02:13.256229 containerd[2014]: time="2025-01-17T12:02:13.255433976Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 17 12:02:13.823638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258136688.mount: Deactivated successfully. Jan 17 12:02:16.920721 containerd[2014]: time="2025-01-17T12:02:16.920622122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:16.923210 containerd[2014]: time="2025-01-17T12:02:16.923139194Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jan 17 12:02:16.924524 containerd[2014]: time="2025-01-17T12:02:16.924407390Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:16.931383 containerd[2014]: time="2025-01-17T12:02:16.931256258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:02:16.934572 containerd[2014]: time="2025-01-17T12:02:16.934328462Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 3.678832014s" Jan 17 12:02:16.934572 containerd[2014]: time="2025-01-17T12:02:16.934406378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jan 17 12:02:19.231816 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
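The pull records above report both bytes read and elapsed time, so approximate pull throughput can be read straight off the log; for example, the etcd:3.5.12-0 pull reported 66191472 bytes read and completed in about 3.68 s:

    # Throughput implied by the etcd pull entries above (values copied from the log).
    bytes_read = 66_191_472
    seconds = 3.678832014
    print(f"{bytes_read / seconds / 2**20:.1f} MiB/s")  # roughly 17 MiB/s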
Jan 17 12:02:23.179845 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:02:23.191086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:23.482310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:23.492839 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:02:23.582626 kubelet[2796]: E0117 12:02:23.582567 2796 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:02:23.587406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:02:23.588775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:02:25.830947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:25.844394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:25.888053 systemd[1]: Reloading requested from client PID 2810 ('systemctl') (unit session-7.scope)... Jan 17 12:02:25.888093 systemd[1]: Reloading... Jan 17 12:02:26.145952 zram_generator::config[2859]: No configuration found. Jan 17 12:02:26.363459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:26.534238 systemd[1]: Reloading finished in 645 ms. Jan 17 12:02:26.638501 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:02:26.638697 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:02:26.639215 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:26.647506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:26.923741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:26.944484 (kubelet)[2913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:27.017123 kubelet[2913]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:27.017838 kubelet[2913]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:27.017838 kubelet[2913]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
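In the entries that follow, the kubelet (now running, with the deprecated-flag warnings above) repeatedly logs "dial tcp 172.31.20.160:6443: connect: connection refused": nothing is listening on the API server port yet, which is normal while the control plane on this node is still being brought up. A plain TCP connect reproduces the same reachability check; the host and port here are simply the ones named in the log:

    import socket

    def api_reachable(host="172.31.20.160", port=6443, timeout=2.0):
        # Mirrors the kubelet's failing dial: True only once something
        # is actually listening on the API server port.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(api_reachable())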
Jan 17 12:02:27.019421 kubelet[2913]: I0117 12:02:27.019330 2913 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:28.334459 kubelet[2913]: I0117 12:02:28.334322 2913 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:02:28.334459 kubelet[2913]: I0117 12:02:28.334387 2913 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:28.335430 kubelet[2913]: I0117 12:02:28.335382 2913 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:02:28.374174 kubelet[2913]: E0117 12:02:28.374112 2913 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.160:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.374947 kubelet[2913]: I0117 12:02:28.374873 2913 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:28.389936 kubelet[2913]: I0117 12:02:28.388828 2913 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:02:28.389936 kubelet[2913]: I0117 12:02:28.389362 2913 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:28.389936 kubelet[2913]: I0117 12:02:28.389402 2913 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:28.389936 kubelet[2913]: I0117 12:02:28.389695 2913 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:28.390311 kubelet[2913]: I0117 12:02:28.389713 2913 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:28.390445 kubelet[2913]: I0117 12:02:28.390422 2913 state_mem.go:36] "Initialized new in-memory state 
store" Jan 17 12:02:28.392079 kubelet[2913]: I0117 12:02:28.392050 2913 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:02:28.392233 kubelet[2913]: I0117 12:02:28.392212 2913 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:28.392394 kubelet[2913]: I0117 12:02:28.392375 2913 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:28.392519 kubelet[2913]: I0117 12:02:28.392498 2913 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:28.395415 kubelet[2913]: W0117 12:02:28.395322 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-160&limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.395415 kubelet[2913]: E0117 12:02:28.395419 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-160&limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.396548 kubelet[2913]: I0117 12:02:28.396495 2913 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:28.396919 kubelet[2913]: I0117 12:02:28.396867 2913 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:28.397036 kubelet[2913]: W0117 12:02:28.397004 2913 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:02:28.398072 kubelet[2913]: I0117 12:02:28.398026 2913 server.go:1264] "Started kubelet" Jan 17 12:02:28.398319 kubelet[2913]: W0117 12:02:28.398244 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.398390 kubelet[2913]: E0117 12:02:28.398332 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.405939 kubelet[2913]: E0117 12:02:28.405718 2913 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.160:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.160:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-160.181b7934ab241a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-160,UID:ip-172-31-20-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-160,},FirstTimestamp:2025-01-17 12:02:28.397988435 +0000 UTC m=+1.447239536,LastTimestamp:2025-01-17 12:02:28.397988435 +0000 UTC m=+1.447239536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-160,}" Jan 17 12:02:28.406473 kubelet[2913]: I0117 12:02:28.406355 2913 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 
12:02:28.408496 kubelet[2913]: I0117 12:02:28.407129 2913 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:28.408496 kubelet[2913]: I0117 12:02:28.407226 2913 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:28.409270 kubelet[2913]: I0117 12:02:28.409239 2913 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:28.409734 kubelet[2913]: I0117 12:02:28.409689 2913 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:02:28.419635 kubelet[2913]: I0117 12:02:28.419559 2913 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:28.421006 kubelet[2913]: I0117 12:02:28.420478 2913 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:02:28.423022 kubelet[2913]: I0117 12:02:28.422958 2913 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:02:28.427880 kubelet[2913]: E0117 12:02:28.427810 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": dial tcp 172.31.20.160:6443: connect: connection refused" interval="200ms" Jan 17 12:02:28.428532 kubelet[2913]: W0117 12:02:28.428052 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.428532 kubelet[2913]: E0117 12:02:28.428473 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.429773 kubelet[2913]: E0117 12:02:28.429699 2913 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:02:28.430899 kubelet[2913]: I0117 12:02:28.430198 2913 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:28.430899 kubelet[2913]: I0117 12:02:28.430224 2913 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:28.430899 kubelet[2913]: I0117 12:02:28.430382 2913 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:28.455451 kubelet[2913]: I0117 12:02:28.455374 2913 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:28.460888 kubelet[2913]: I0117 12:02:28.460832 2913 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 17 12:02:28.461600 kubelet[2913]: I0117 12:02:28.461556 2913 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:28.461695 kubelet[2913]: I0117 12:02:28.461612 2913 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:02:28.461752 kubelet[2913]: E0117 12:02:28.461679 2913 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:28.466708 kubelet[2913]: W0117 12:02:28.466652 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.467269 kubelet[2913]: E0117 12:02:28.467238 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:28.472431 kubelet[2913]: I0117 12:02:28.472385 2913 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:28.472956 kubelet[2913]: I0117 12:02:28.472564 2913 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:28.472956 kubelet[2913]: I0117 12:02:28.472599 2913 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:28.476114 kubelet[2913]: I0117 12:02:28.476078 2913 policy_none.go:49] "None policy: Start" Jan 17 12:02:28.477631 kubelet[2913]: I0117 12:02:28.477547 2913 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:28.477746 kubelet[2913]: I0117 12:02:28.477640 2913 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:28.491266 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 12:02:28.506520 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 12:02:28.514326 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 17 12:02:28.522587 kubelet[2913]: I0117 12:02:28.522537 2913 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:28.523261 kubelet[2913]: I0117 12:02:28.522841 2913 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:02:28.523261 kubelet[2913]: I0117 12:02:28.523042 2913 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:28.526123 kubelet[2913]: I0117 12:02:28.525804 2913 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:28.526616 kubelet[2913]: E0117 12:02:28.526585 2913 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-160\" not found" Jan 17 12:02:28.526865 kubelet[2913]: E0117 12:02:28.526373 2913 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.160:6443/api/v1/nodes\": dial tcp 172.31.20.160:6443: connect: connection refused" node="ip-172-31-20-160" Jan 17 12:02:28.562067 kubelet[2913]: I0117 12:02:28.561973 2913 topology_manager.go:215] "Topology Admit Handler" podUID="d0b066f7fe9463c237dadfec6d28c427" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-160" Jan 17 12:02:28.564599 kubelet[2913]: I0117 12:02:28.564084 2913 topology_manager.go:215] "Topology Admit Handler" podUID="ca1e98446d978df7daa22270cdfead68" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.566791 kubelet[2913]: I0117 12:02:28.566729 2913 topology_manager.go:215] "Topology Admit Handler" podUID="ff4478faa8b30275192c52633c446719" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-160" Jan 17 12:02:28.584274 systemd[1]: Created slice kubepods-burstable-podd0b066f7fe9463c237dadfec6d28c427.slice - libcontainer container kubepods-burstable-podd0b066f7fe9463c237dadfec6d28c427.slice. Jan 17 12:02:28.607601 systemd[1]: Created slice kubepods-burstable-podca1e98446d978df7daa22270cdfead68.slice - libcontainer container kubepods-burstable-podca1e98446d978df7daa22270cdfead68.slice. Jan 17 12:02:28.618235 systemd[1]: Created slice kubepods-burstable-podff4478faa8b30275192c52633c446719.slice - libcontainer container kubepods-burstable-podff4478faa8b30275192c52633c446719.slice. 
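
The nodeConfig dumped a few entries back lists the default hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, imagefs.available < 15%, nodefs/imagefs.inodesFree < 5%), which is what the eviction manager above is trying to evaluate when it complains about missing node info. A sketch of how such a threshold is typically compared, where a Quantity is an absolute amount and a Percentage is a fraction of capacity; illustrative only, not the kubelet's eviction manager:

package main

import "fmt"

type threshold struct {
	signal     string
	quantity   int64   // absolute bytes; 0 means "use percentage"
	percentage float64 // fraction of capacity, e.g. 0.10
}

func breached(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percentage: 0.10}    // 10%

	fmt.Println(breached(memory, 80<<20, 4<<30))  // true: 80Mi available is below 100Mi
	fmt.Println(breached(nodefs, 5<<30, 100<<30)) // true: 5Gi is below 10% of 100Gi
}
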
Jan 17 12:02:28.624165 kubelet[2913]: I0117 12:02:28.624004 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:28.624165 kubelet[2913]: I0117 12:02:28.624080 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.624165 kubelet[2913]: I0117 12:02:28.624128 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.624165 kubelet[2913]: I0117 12:02:28.624167 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.624612 kubelet[2913]: I0117 12:02:28.624214 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.624612 kubelet[2913]: I0117 12:02:28.624253 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff4478faa8b30275192c52633c446719-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-160\" (UID: \"ff4478faa8b30275192c52633c446719\") " pod="kube-system/kube-scheduler-ip-172-31-20-160" Jan 17 12:02:28.624612 kubelet[2913]: I0117 12:02:28.624291 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-ca-certs\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:28.624612 kubelet[2913]: I0117 12:02:28.624326 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:28.624612 kubelet[2913]: I0117 12:02:28.624363 2913 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:28.628479 kubelet[2913]: E0117 12:02:28.628405 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": dial tcp 172.31.20.160:6443: connect: connection refused" interval="400ms" Jan 17 12:02:28.729847 kubelet[2913]: I0117 12:02:28.729709 2913 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:28.730746 kubelet[2913]: E0117 12:02:28.730666 2913 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.160:6443/api/v1/nodes\": dial tcp 172.31.20.160:6443: connect: connection refused" node="ip-172-31-20-160" Jan 17 12:02:28.903107 containerd[2014]: time="2025-01-17T12:02:28.902767754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-160,Uid:d0b066f7fe9463c237dadfec6d28c427,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:28.916094 containerd[2014]: time="2025-01-17T12:02:28.916019282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-160,Uid:ca1e98446d978df7daa22270cdfead68,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:28.924293 containerd[2014]: time="2025-01-17T12:02:28.924170354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-160,Uid:ff4478faa8b30275192c52633c446719,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:29.029525 kubelet[2913]: E0117 12:02:29.029450 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": dial tcp 172.31.20.160:6443: connect: connection refused" interval="800ms" Jan 17 12:02:29.133838 kubelet[2913]: I0117 12:02:29.133770 2913 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:29.134545 kubelet[2913]: E0117 12:02:29.134479 2913 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.160:6443/api/v1/nodes\": dial tcp 172.31.20.160:6443: connect: connection refused" node="ip-172-31-20-160" Jan 17 12:02:29.476927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2296580378.mount: Deactivated successfully. 
Jan 17 12:02:29.496527 containerd[2014]: time="2025-01-17T12:02:29.496407865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:29.498840 containerd[2014]: time="2025-01-17T12:02:29.498760777Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:29.500991 containerd[2014]: time="2025-01-17T12:02:29.500802373Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 12:02:29.503394 containerd[2014]: time="2025-01-17T12:02:29.503310625Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:29.505207 containerd[2014]: time="2025-01-17T12:02:29.505141741Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:29.511998 containerd[2014]: time="2025-01-17T12:02:29.511585609Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:29.513982 containerd[2014]: time="2025-01-17T12:02:29.513861481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:02:29.524251 containerd[2014]: time="2025-01-17T12:02:29.524094445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:02:29.528173 containerd[2014]: time="2025-01-17T12:02:29.527367997Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 611.210475ms" Jan 17 12:02:29.531657 containerd[2014]: time="2025-01-17T12:02:29.531522889Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 628.629867ms" Jan 17 12:02:29.533568 containerd[2014]: time="2025-01-17T12:02:29.533445349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 609.154683ms" Jan 17 12:02:29.572065 kubelet[2913]: W0117 12:02:29.571865 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.572678 kubelet[2913]: E0117 
12:02:29.572076 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.160:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.670600 kubelet[2913]: W0117 12:02:29.670471 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-160&limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.670600 kubelet[2913]: E0117 12:02:29.670566 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.160:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-160&limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.729759 kubelet[2913]: W0117 12:02:29.728454 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.729759 kubelet[2913]: E0117 12:02:29.728530 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.160:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.738513 containerd[2014]: time="2025-01-17T12:02:29.738003278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.738513 containerd[2014]: time="2025-01-17T12:02:29.738091550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.738513 containerd[2014]: time="2025-01-17T12:02:29.738149738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.738513 containerd[2014]: time="2025-01-17T12:02:29.738319346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.750169 containerd[2014]: time="2025-01-17T12:02:29.749994530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.753274058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.753392378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.753429230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.753593426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.750112106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.750654170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.753775 containerd[2014]: time="2025-01-17T12:02:29.751955918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:29.785783 systemd[1]: Started cri-containerd-8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a.scope - libcontainer container 8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a. Jan 17 12:02:29.819260 systemd[1]: Started cri-containerd-043d2b0ddb02cdaebea82f0d0b0c9974aa5283403678997b3aaca6acef3cdc17.scope - libcontainer container 043d2b0ddb02cdaebea82f0d0b0c9974aa5283403678997b3aaca6acef3cdc17. Jan 17 12:02:29.823514 systemd[1]: Started cri-containerd-98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134.scope - libcontainer container 98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134. Jan 17 12:02:29.831325 kubelet[2913]: E0117 12:02:29.831257 2913 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": dial tcp 172.31.20.160:6443: connect: connection refused" interval="1.6s" Jan 17 12:02:29.849371 kubelet[2913]: W0117 12:02:29.849283 2913 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.849485 kubelet[2913]: E0117 12:02:29.849379 2913 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.160:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.160:6443: connect: connection refused Jan 17 12:02:29.932785 containerd[2014]: time="2025-01-17T12:02:29.932593659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-160,Uid:ca1e98446d978df7daa22270cdfead68,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a\"" Jan 17 12:02:29.939260 kubelet[2913]: I0117 12:02:29.938337 2913 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:29.939260 kubelet[2913]: E0117 12:02:29.938854 2913 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.160:6443/api/v1/nodes\": dial tcp 172.31.20.160:6443: connect: connection refused" node="ip-172-31-20-160" Jan 17 12:02:29.945243 containerd[2014]: time="2025-01-17T12:02:29.944189331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-160,Uid:d0b066f7fe9463c237dadfec6d28c427,Namespace:kube-system,Attempt:0,} returns sandbox id \"043d2b0ddb02cdaebea82f0d0b0c9974aa5283403678997b3aaca6acef3cdc17\"" Jan 17 12:02:29.948195 containerd[2014]: time="2025-01-17T12:02:29.947734479Z" level=info msg="CreateContainer within sandbox 
\"8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:02:29.953335 containerd[2014]: time="2025-01-17T12:02:29.952804599Z" level=info msg="CreateContainer within sandbox \"043d2b0ddb02cdaebea82f0d0b0c9974aa5283403678997b3aaca6acef3cdc17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:02:29.954834 containerd[2014]: time="2025-01-17T12:02:29.954742671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-160,Uid:ff4478faa8b30275192c52633c446719,Namespace:kube-system,Attempt:0,} returns sandbox id \"98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134\"" Jan 17 12:02:29.961103 containerd[2014]: time="2025-01-17T12:02:29.961034559Z" level=info msg="CreateContainer within sandbox \"98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:02:30.013767 containerd[2014]: time="2025-01-17T12:02:30.013593995Z" level=info msg="CreateContainer within sandbox \"8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4\"" Jan 17 12:02:30.016563 containerd[2014]: time="2025-01-17T12:02:30.016306559Z" level=info msg="CreateContainer within sandbox \"98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac\"" Jan 17 12:02:30.016784 containerd[2014]: time="2025-01-17T12:02:30.016708139Z" level=info msg="StartContainer for \"3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4\"" Jan 17 12:02:30.023108 containerd[2014]: time="2025-01-17T12:02:30.023031203Z" level=info msg="CreateContainer within sandbox \"043d2b0ddb02cdaebea82f0d0b0c9974aa5283403678997b3aaca6acef3cdc17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"80c34c56972d0e2c6973be4426fc3e763292a7c86fdfe2b4ffaf5b32df048159\"" Jan 17 12:02:30.024030 containerd[2014]: time="2025-01-17T12:02:30.023964407Z" level=info msg="StartContainer for \"c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac\"" Jan 17 12:02:30.035878 containerd[2014]: time="2025-01-17T12:02:30.035102507Z" level=info msg="StartContainer for \"80c34c56972d0e2c6973be4426fc3e763292a7c86fdfe2b4ffaf5b32df048159\"" Jan 17 12:02:30.075342 systemd[1]: Started cri-containerd-3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4.scope - libcontainer container 3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4. Jan 17 12:02:30.115715 systemd[1]: Started cri-containerd-c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac.scope - libcontainer container c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac. Jan 17 12:02:30.133230 systemd[1]: Started cri-containerd-80c34c56972d0e2c6973be4426fc3e763292a7c86fdfe2b4ffaf5b32df048159.scope - libcontainer container 80c34c56972d0e2c6973be4426fc3e763292a7c86fdfe2b4ffaf5b32df048159. 
Jan 17 12:02:30.209139 containerd[2014]: time="2025-01-17T12:02:30.208446648Z" level=info msg="StartContainer for \"3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4\" returns successfully" Jan 17 12:02:30.270958 containerd[2014]: time="2025-01-17T12:02:30.270741817Z" level=info msg="StartContainer for \"c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac\" returns successfully" Jan 17 12:02:30.295253 containerd[2014]: time="2025-01-17T12:02:30.295176805Z" level=info msg="StartContainer for \"80c34c56972d0e2c6973be4426fc3e763292a7c86fdfe2b4ffaf5b32df048159\" returns successfully" Jan 17 12:02:31.544413 kubelet[2913]: I0117 12:02:31.544360 2913 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:34.146963 update_engine[2001]: I20250117 12:02:34.145969 2001 update_attempter.cc:509] Updating boot flags... Jan 17 12:02:34.318097 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3205) Jan 17 12:02:34.761978 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3207) Jan 17 12:02:35.255056 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3207) Jan 17 12:02:36.004281 kubelet[2913]: E0117 12:02:36.004202 2913 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-160\" not found" node="ip-172-31-20-160" Jan 17 12:02:36.059054 kubelet[2913]: I0117 12:02:36.058724 2913 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-160" Jan 17 12:02:36.093589 kubelet[2913]: E0117 12:02:36.093422 2913 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-20-160.181b7934ab241a53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-160,UID:ip-172-31-20-160,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-160,},FirstTimestamp:2025-01-17 12:02:28.397988435 +0000 UTC m=+1.447239536,LastTimestamp:2025-01-17 12:02:28.397988435 +0000 UTC m=+1.447239536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-160,}" Jan 17 12:02:36.400318 kubelet[2913]: I0117 12:02:36.400139 2913 apiserver.go:52] "Watching apiserver" Jan 17 12:02:36.422333 kubelet[2913]: I0117 12:02:36.421893 2913 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:02:37.908381 systemd[1]: Reloading requested from client PID 3459 ('systemctl') (unit session-7.scope)... Jan 17 12:02:37.908408 systemd[1]: Reloading... Jan 17 12:02:38.084017 zram_generator::config[3506]: No configuration found. Jan 17 12:02:38.308872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:02:38.512851 systemd[1]: Reloading finished in 603 ms. Jan 17 12:02:38.617448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:38.636983 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:02:38.637685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
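
The repeated "connection refused" registration errors resolve themselves in the entries above: once the kube-apiserver container the kubelet itself just started is up and listening on 172.31.20.160:6443, the node registers successfully at 12:02:36. A minimal "wait until the endpoint answers" probe in the same spirit; the /healthz path, the timings, and the skipped certificate verification are assumptions for the sketch, not kubelet behaviour:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://172.31.20.160:6443/healthz" // address from the log; /healthz chosen for the sketch
	client := &http.Client{
		Timeout: 3 * time.Second,
		// The bootstrap serving cert is not trusted by this throwaway probe,
		// so verification is skipped here only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Println("apiserver reachable:", resp.Status)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err) // e.g. "connect: connection refused"
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for the apiserver")
}
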
Jan 17 12:02:38.637768 systemd[1]: kubelet.service: Consumed 2.251s CPU time, 114.5M memory peak, 0B memory swap peak. Jan 17 12:02:38.653147 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:02:39.005848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:02:39.020515 (kubelet)[3563]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:02:39.128361 kubelet[3563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:39.128361 kubelet[3563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:02:39.128361 kubelet[3563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:02:39.128901 kubelet[3563]: I0117 12:02:39.128474 3563 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:02:39.137630 kubelet[3563]: I0117 12:02:39.137566 3563 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 17 12:02:39.137630 kubelet[3563]: I0117 12:02:39.137613 3563 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:02:39.138958 kubelet[3563]: I0117 12:02:39.138144 3563 server.go:927] "Client rotation is on, will bootstrap in background" Jan 17 12:02:39.140801 kubelet[3563]: I0117 12:02:39.140747 3563 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:02:39.144494 kubelet[3563]: I0117 12:02:39.143702 3563 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:02:39.157622 kubelet[3563]: I0117 12:02:39.157573 3563 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:02:39.158087 kubelet[3563]: I0117 12:02:39.158034 3563 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:02:39.158698 kubelet[3563]: I0117 12:02:39.158092 3563 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-160","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:02:39.158698 kubelet[3563]: I0117 12:02:39.158413 3563 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:02:39.158698 kubelet[3563]: I0117 12:02:39.158435 3563 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:02:39.158698 kubelet[3563]: I0117 12:02:39.158495 3563 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:39.158698 kubelet[3563]: I0117 12:02:39.158678 3563 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:02:39.161093 kubelet[3563]: I0117 12:02:39.158703 3563 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:02:39.161093 kubelet[3563]: I0117 12:02:39.158754 3563 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:02:39.161093 kubelet[3563]: I0117 12:02:39.158795 3563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:02:39.172459 kubelet[3563]: I0117 12:02:39.164442 3563 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:02:39.172459 kubelet[3563]: I0117 12:02:39.164775 3563 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:02:39.172459 kubelet[3563]: I0117 12:02:39.166046 3563 server.go:1264] "Started kubelet" Jan 17 12:02:39.172459 kubelet[3563]: I0117 12:02:39.171363 3563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:02:39.189875 kubelet[3563]: I0117 12:02:39.189796 3563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:02:39.192146 kubelet[3563]: I0117 12:02:39.191950 3563 ratelimit.go:55] "Setting rate 
limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:02:39.199593 kubelet[3563]: I0117 12:02:39.199541 3563 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:02:39.204546 kubelet[3563]: I0117 12:02:39.202613 3563 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:02:39.204546 kubelet[3563]: I0117 12:02:39.202891 3563 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:02:39.228292 kubelet[3563]: I0117 12:02:39.227049 3563 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:02:39.228292 kubelet[3563]: I0117 12:02:39.227258 3563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:02:39.245937 kubelet[3563]: I0117 12:02:39.244334 3563 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:02:39.251107 kubelet[3563]: I0117 12:02:39.251068 3563 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:02:39.254948 kubelet[3563]: I0117 12:02:39.251851 3563 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:02:39.265024 kubelet[3563]: I0117 12:02:39.263737 3563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:02:39.271626 kubelet[3563]: I0117 12:02:39.271486 3563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:02:39.272071 kubelet[3563]: I0117 12:02:39.271954 3563 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:02:39.272476 kubelet[3563]: I0117 12:02:39.272346 3563 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:02:39.274172 kubelet[3563]: E0117 12:02:39.274124 3563 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:02:39.332613 kubelet[3563]: I0117 12:02:39.331525 3563 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-160" Jan 17 12:02:39.352977 kubelet[3563]: I0117 12:02:39.352399 3563 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-160" Jan 17 12:02:39.353580 kubelet[3563]: I0117 12:02:39.353533 3563 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-160" Jan 17 12:02:39.375631 kubelet[3563]: E0117 12:02:39.374434 3563 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:02:39.438154 kubelet[3563]: I0117 12:02:39.438094 3563 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:02:39.439067 kubelet[3563]: I0117 12:02:39.439015 3563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:02:39.439536 kubelet[3563]: I0117 12:02:39.439401 3563 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:02:39.441900 kubelet[3563]: I0117 12:02:39.441640 3563 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:02:39.441900 kubelet[3563]: I0117 12:02:39.441684 3563 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:02:39.441900 kubelet[3563]: I0117 12:02:39.441725 3563 policy_none.go:49] "None policy: Start" Jan 17 12:02:39.445211 kubelet[3563]: I0117 12:02:39.445167 3563 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:02:39.445211 kubelet[3563]: 
I0117 12:02:39.445217 3563 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:02:39.445608 kubelet[3563]: I0117 12:02:39.445574 3563 state_mem.go:75] "Updated machine memory state" Jan 17 12:02:39.458940 kubelet[3563]: I0117 12:02:39.458116 3563 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:02:39.458940 kubelet[3563]: I0117 12:02:39.458394 3563 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:02:39.461225 kubelet[3563]: I0117 12:02:39.461194 3563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:02:39.575242 kubelet[3563]: I0117 12:02:39.575083 3563 topology_manager.go:215] "Topology Admit Handler" podUID="ff4478faa8b30275192c52633c446719" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-160" Jan 17 12:02:39.576952 kubelet[3563]: I0117 12:02:39.575514 3563 topology_manager.go:215] "Topology Admit Handler" podUID="d0b066f7fe9463c237dadfec6d28c427" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-160" Jan 17 12:02:39.576952 kubelet[3563]: I0117 12:02:39.575608 3563 topology_manager.go:215] "Topology Admit Handler" podUID="ca1e98446d978df7daa22270cdfead68" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:39.608521 kubelet[3563]: I0117 12:02:39.608439 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ff4478faa8b30275192c52633c446719-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-160\" (UID: \"ff4478faa8b30275192c52633c446719\") " pod="kube-system/kube-scheduler-ip-172-31-20-160" Jan 17 12:02:39.608521 kubelet[3563]: I0117 12:02:39.608507 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-ca-certs\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:39.609871 kubelet[3563]: I0117 12:02:39.608550 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:39.609871 kubelet[3563]: I0117 12:02:39.608598 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:39.609871 kubelet[3563]: I0117 12:02:39.608682 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:39.609871 kubelet[3563]: I0117 12:02:39.608742 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:39.609871 kubelet[3563]: I0117 12:02:39.608809 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d0b066f7fe9463c237dadfec6d28c427-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-160\" (UID: \"d0b066f7fe9463c237dadfec6d28c427\") " pod="kube-system/kube-apiserver-ip-172-31-20-160" Jan 17 12:02:39.610250 kubelet[3563]: I0117 12:02:39.609998 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:39.610250 kubelet[3563]: I0117 12:02:39.610072 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca1e98446d978df7daa22270cdfead68-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-160\" (UID: \"ca1e98446d978df7daa22270cdfead68\") " pod="kube-system/kube-controller-manager-ip-172-31-20-160" Jan 17 12:02:40.161371 kubelet[3563]: I0117 12:02:40.161006 3563 apiserver.go:52] "Watching apiserver" Jan 17 12:02:40.204148 kubelet[3563]: I0117 12:02:40.203697 3563 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:02:40.493715 kubelet[3563]: I0117 12:02:40.493618 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-160" podStartSLOduration=1.4935957229999999 podStartE2EDuration="1.493595723s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:40.493513595 +0000 UTC m=+1.466282144" watchObservedRunningTime="2025-01-17 12:02:40.493595723 +0000 UTC m=+1.466364260" Jan 17 12:02:40.552171 kubelet[3563]: I0117 12:02:40.551596 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-160" podStartSLOduration=1.5515721999999998 podStartE2EDuration="1.5515722s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:40.5274633 +0000 UTC m=+1.500231837" watchObservedRunningTime="2025-01-17 12:02:40.5515722 +0000 UTC m=+1.524340749" Jan 17 12:02:40.584715 kubelet[3563]: I0117 12:02:40.584280 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-160" podStartSLOduration=1.584259348 podStartE2EDuration="1.584259348s" podCreationTimestamp="2025-01-17 12:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:40.555440364 +0000 UTC m=+1.528208913" watchObservedRunningTime="2025-01-17 12:02:40.584259348 +0000 UTC m=+1.557027981" Jan 17 12:02:46.014894 sudo[2344]: 
pam_unix(sudo:session): session closed for user root Jan 17 12:02:46.038784 sshd[2341]: pam_unix(sshd:session): session closed for user core Jan 17 12:02:46.045134 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:02:46.045522 systemd[1]: sshd@6-172.31.20.160:22-139.178.68.195:43874.service: Deactivated successfully. Jan 17 12:02:46.049876 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:02:46.051012 systemd[1]: session-7.scope: Consumed 12.349s CPU time, 184.5M memory peak, 0B memory swap peak. Jan 17 12:02:46.054117 systemd-logind[2000]: Removed session 7. Jan 17 12:02:51.936884 kubelet[3563]: I0117 12:02:51.936794 3563 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:02:51.938224 containerd[2014]: time="2025-01-17T12:02:51.938131680Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:02:51.940887 kubelet[3563]: I0117 12:02:51.938565 3563 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:02:52.848955 kubelet[3563]: I0117 12:02:52.847015 3563 topology_manager.go:215] "Topology Admit Handler" podUID="988b5112-e82b-4f23-828c-87ae5b2d24ed" podNamespace="kube-system" podName="kube-proxy-v29hn" Jan 17 12:02:52.867082 systemd[1]: Created slice kubepods-besteffort-pod988b5112_e82b_4f23_828c_87ae5b2d24ed.slice - libcontainer container kubepods-besteffort-pod988b5112_e82b_4f23_828c_87ae5b2d24ed.slice. Jan 17 12:02:52.898084 kubelet[3563]: I0117 12:02:52.898010 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/988b5112-e82b-4f23-828c-87ae5b2d24ed-kube-proxy\") pod \"kube-proxy-v29hn\" (UID: \"988b5112-e82b-4f23-828c-87ae5b2d24ed\") " pod="kube-system/kube-proxy-v29hn" Jan 17 12:02:52.898253 kubelet[3563]: I0117 12:02:52.898088 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/988b5112-e82b-4f23-828c-87ae5b2d24ed-xtables-lock\") pod \"kube-proxy-v29hn\" (UID: \"988b5112-e82b-4f23-828c-87ae5b2d24ed\") " pod="kube-system/kube-proxy-v29hn" Jan 17 12:02:52.898253 kubelet[3563]: I0117 12:02:52.898129 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/988b5112-e82b-4f23-828c-87ae5b2d24ed-lib-modules\") pod \"kube-proxy-v29hn\" (UID: \"988b5112-e82b-4f23-828c-87ae5b2d24ed\") " pod="kube-system/kube-proxy-v29hn" Jan 17 12:02:52.898253 kubelet[3563]: I0117 12:02:52.898166 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzgzd\" (UniqueName: \"kubernetes.io/projected/988b5112-e82b-4f23-828c-87ae5b2d24ed-kube-api-access-dzgzd\") pod \"kube-proxy-v29hn\" (UID: \"988b5112-e82b-4f23-828c-87ae5b2d24ed\") " pod="kube-system/kube-proxy-v29hn" Jan 17 12:02:53.078967 kubelet[3563]: I0117 12:02:53.077689 3563 topology_manager.go:215] "Topology Admit Handler" podUID="7459ced1-a7a8-477c-bfde-fab22d4a9b26" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-65wfh" Jan 17 12:02:53.104319 kubelet[3563]: I0117 12:02:53.100456 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/7459ced1-a7a8-477c-bfde-fab22d4a9b26-var-lib-calico\") pod \"tigera-operator-7bc55997bb-65wfh\" (UID: \"7459ced1-a7a8-477c-bfde-fab22d4a9b26\") " pod="tigera-operator/tigera-operator-7bc55997bb-65wfh" Jan 17 12:02:53.104319 kubelet[3563]: I0117 12:02:53.100555 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df4mc\" (UniqueName: \"kubernetes.io/projected/7459ced1-a7a8-477c-bfde-fab22d4a9b26-kube-api-access-df4mc\") pod \"tigera-operator-7bc55997bb-65wfh\" (UID: \"7459ced1-a7a8-477c-bfde-fab22d4a9b26\") " pod="tigera-operator/tigera-operator-7bc55997bb-65wfh" Jan 17 12:02:53.103626 systemd[1]: Created slice kubepods-besteffort-pod7459ced1_a7a8_477c_bfde_fab22d4a9b26.slice - libcontainer container kubepods-besteffort-pod7459ced1_a7a8_477c_bfde_fab22d4a9b26.slice. Jan 17 12:02:53.183586 containerd[2014]: time="2025-01-17T12:02:53.183523306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v29hn,Uid:988b5112-e82b-4f23-828c-87ae5b2d24ed,Namespace:kube-system,Attempt:0,}" Jan 17 12:02:53.246291 containerd[2014]: time="2025-01-17T12:02:53.245695559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:53.246291 containerd[2014]: time="2025-01-17T12:02:53.245800499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:53.246291 containerd[2014]: time="2025-01-17T12:02:53.245856671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:53.246687 containerd[2014]: time="2025-01-17T12:02:53.246333107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:53.294217 systemd[1]: Started cri-containerd-15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f.scope - libcontainer container 15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f. 
Jan 17 12:02:53.345340 containerd[2014]: time="2025-01-17T12:02:53.345221471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v29hn,Uid:988b5112-e82b-4f23-828c-87ae5b2d24ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f\"" Jan 17 12:02:53.353789 containerd[2014]: time="2025-01-17T12:02:53.353610707Z" level=info msg="CreateContainer within sandbox \"15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:02:53.385510 containerd[2014]: time="2025-01-17T12:02:53.385310171Z" level=info msg="CreateContainer within sandbox \"15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d3d7b15e589e2d62f81b86cf7c3969918177fdb3cc41e1571a480aecd9e05c28\"" Jan 17 12:02:53.386973 containerd[2014]: time="2025-01-17T12:02:53.386709479Z" level=info msg="StartContainer for \"d3d7b15e589e2d62f81b86cf7c3969918177fdb3cc41e1571a480aecd9e05c28\"" Jan 17 12:02:53.416098 containerd[2014]: time="2025-01-17T12:02:53.416037504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-65wfh,Uid:7459ced1-a7a8-477c-bfde-fab22d4a9b26,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:02:53.439238 systemd[1]: Started cri-containerd-d3d7b15e589e2d62f81b86cf7c3969918177fdb3cc41e1571a480aecd9e05c28.scope - libcontainer container d3d7b15e589e2d62f81b86cf7c3969918177fdb3cc41e1571a480aecd9e05c28. Jan 17 12:02:53.475530 containerd[2014]: time="2025-01-17T12:02:53.475292760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:02:53.475530 containerd[2014]: time="2025-01-17T12:02:53.475440696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:02:53.475530 containerd[2014]: time="2025-01-17T12:02:53.475480932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:53.477940 containerd[2014]: time="2025-01-17T12:02:53.477404460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:02:53.522263 systemd[1]: Started cri-containerd-189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271.scope - libcontainer container 189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271. Jan 17 12:02:53.526645 containerd[2014]: time="2025-01-17T12:02:53.526296660Z" level=info msg="StartContainer for \"d3d7b15e589e2d62f81b86cf7c3969918177fdb3cc41e1571a480aecd9e05c28\" returns successfully" Jan 17 12:02:53.605999 containerd[2014]: time="2025-01-17T12:02:53.605838613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-65wfh,Uid:7459ced1-a7a8-477c-bfde-fab22d4a9b26,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271\"" Jan 17 12:02:53.611656 containerd[2014]: time="2025-01-17T12:02:53.611300701Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:02:54.031309 systemd[1]: run-containerd-runc-k8s.io-15a2414b5d301f7ae2575e1e9f3484372b9e0dae458d109ad5aeff276ae1ef2f-runc.caxBkU.mount: Deactivated successfully. 
Jan 17 12:02:54.421665 kubelet[3563]: I0117 12:02:54.420967 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v29hn" podStartSLOduration=2.420941353 podStartE2EDuration="2.420941353s" podCreationTimestamp="2025-01-17 12:02:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:02:54.420229345 +0000 UTC m=+15.392997882" watchObservedRunningTime="2025-01-17 12:02:54.420941353 +0000 UTC m=+15.393709902" Jan 17 12:03:00.044241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1828036134.mount: Deactivated successfully. Jan 17 12:03:00.619218 containerd[2014]: time="2025-01-17T12:03:00.619151095Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:00.621360 containerd[2014]: time="2025-01-17T12:03:00.621291079Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126000" Jan 17 12:03:00.623964 containerd[2014]: time="2025-01-17T12:03:00.622225003Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:00.627257 containerd[2014]: time="2025-01-17T12:03:00.627206887Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:00.628965 containerd[2014]: time="2025-01-17T12:03:00.628876399Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 7.017513694s" Jan 17 12:03:00.629121 containerd[2014]: time="2025-01-17T12:03:00.628961251Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Jan 17 12:03:00.635601 containerd[2014]: time="2025-01-17T12:03:00.635510635Z" level=info msg="CreateContainer within sandbox \"189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:03:00.657674 containerd[2014]: time="2025-01-17T12:03:00.657606920Z" level=info msg="CreateContainer within sandbox \"189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141\"" Jan 17 12:03:00.659975 containerd[2014]: time="2025-01-17T12:03:00.659773244Z" level=info msg="StartContainer for \"0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141\"" Jan 17 12:03:00.714531 systemd[1]: Started cri-containerd-0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141.scope - libcontainer container 0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141. 
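Two timings above are worth reading together. The tigera-operator image pull starts at 12:02:53.611 and the "Pulled image" entry reports 19,120,155 bytes in 7.017513694s, about 2.7 MB/s. The pod_startup_latency_tracker line for kube-proxy shows identical podStartSLOduration and podStartE2EDuration because no image had to be pulled (both pull timestamps are the zero time); for the tigera-operator pod in the entry just below, the SLO figure works out to the end-to-end time minus the pull window. A stdlib-only Go sketch with the values copied from these entries; the subtraction relation is read off the log lines themselves, not quoted from kubelet source:

// Sketch: reproducing the pull-throughput and startup-latency figures above
// from the logged values (stdlib only).
package main

import (
	"fmt"
	"time"
)

func main() {
	pull, _ := time.ParseDuration("7.017513694s") // from the "Pulled image" entry
	const imageBytes = 19120155                   // repo-digest size from the same entry
	fmt.Printf("pull throughput ≈ %.2f MB/s\n", float64(imageBytes)/pull.Seconds()/1e6)

	// tigera-operator pod (entry just below): E2E runs from pod creation to the
	// observed running time; SLO excludes firstStartedPulling -> lastFinishedPulling.
	e2e, _ := time.ParseDuration("8.441874411s")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:02:53.610278613Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-01-17T12:03:00.631748467Z")
	fmt.Printf("SLO ≈ %v\n", e2e-lastPull.Sub(firstPull)) // ≈1.4204s, matching podStartSLOduration
}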
Jan 17 12:03:00.775798 containerd[2014]: time="2025-01-17T12:03:00.775612772Z" level=info msg="StartContainer for \"0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141\" returns successfully" Jan 17 12:03:01.442016 kubelet[3563]: I0117 12:03:01.441899 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-65wfh" podStartSLOduration=1.420404545 podStartE2EDuration="8.441874411s" podCreationTimestamp="2025-01-17 12:02:53 +0000 UTC" firstStartedPulling="2025-01-17 12:02:53.610278613 +0000 UTC m=+14.583047138" lastFinishedPulling="2025-01-17 12:03:00.631748467 +0000 UTC m=+21.604517004" observedRunningTime="2025-01-17 12:03:01.440654983 +0000 UTC m=+22.413423544" watchObservedRunningTime="2025-01-17 12:03:01.441874411 +0000 UTC m=+22.414642948" Jan 17 12:03:05.877932 kubelet[3563]: I0117 12:03:05.877850 3563 topology_manager.go:215] "Topology Admit Handler" podUID="793962ef-7079-40cd-aeea-b7575ac7917e" podNamespace="calico-system" podName="calico-typha-5848b55c95-l76ll" Jan 17 12:03:05.894430 systemd[1]: Created slice kubepods-besteffort-pod793962ef_7079_40cd_aeea_b7575ac7917e.slice - libcontainer container kubepods-besteffort-pod793962ef_7079_40cd_aeea_b7575ac7917e.slice. Jan 17 12:03:05.985474 kubelet[3563]: I0117 12:03:05.985115 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/793962ef-7079-40cd-aeea-b7575ac7917e-typha-certs\") pod \"calico-typha-5848b55c95-l76ll\" (UID: \"793962ef-7079-40cd-aeea-b7575ac7917e\") " pod="calico-system/calico-typha-5848b55c95-l76ll" Jan 17 12:03:05.985474 kubelet[3563]: I0117 12:03:05.985287 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/793962ef-7079-40cd-aeea-b7575ac7917e-tigera-ca-bundle\") pod \"calico-typha-5848b55c95-l76ll\" (UID: \"793962ef-7079-40cd-aeea-b7575ac7917e\") " pod="calico-system/calico-typha-5848b55c95-l76ll" Jan 17 12:03:05.985474 kubelet[3563]: I0117 12:03:05.985357 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkd2f\" (UniqueName: \"kubernetes.io/projected/793962ef-7079-40cd-aeea-b7575ac7917e-kube-api-access-kkd2f\") pod \"calico-typha-5848b55c95-l76ll\" (UID: \"793962ef-7079-40cd-aeea-b7575ac7917e\") " pod="calico-system/calico-typha-5848b55c95-l76ll" Jan 17 12:03:06.065545 kubelet[3563]: I0117 12:03:06.064533 3563 topology_manager.go:215] "Topology Admit Handler" podUID="18854e3f-a74b-4c08-8bc3-85fc1a268f27" podNamespace="calico-system" podName="calico-node-kdgsc" Jan 17 12:03:06.087979 kubelet[3563]: I0117 12:03:06.086180 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-lib-modules\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.087979 kubelet[3563]: I0117 12:03:06.086264 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-flexvol-driver-host\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.087979 kubelet[3563]: I0117 
12:03:06.086322 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltdzr\" (UniqueName: \"kubernetes.io/projected/18854e3f-a74b-4c08-8bc3-85fc1a268f27-kube-api-access-ltdzr\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.087979 kubelet[3563]: I0117 12:03:06.086373 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-cni-log-dir\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.087979 kubelet[3563]: I0117 12:03:06.086452 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-var-run-calico\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.088350 kubelet[3563]: I0117 12:03:06.086500 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-var-lib-calico\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.088350 kubelet[3563]: I0117 12:03:06.086546 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-cni-net-dir\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.088350 kubelet[3563]: I0117 12:03:06.086645 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-policysync\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.088350 kubelet[3563]: I0117 12:03:06.086694 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/18854e3f-a74b-4c08-8bc3-85fc1a268f27-node-certs\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.088350 kubelet[3563]: I0117 12:03:06.086740 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-cni-bin-dir\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.090101 kubelet[3563]: I0117 12:03:06.086789 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18854e3f-a74b-4c08-8bc3-85fc1a268f27-xtables-lock\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.090101 kubelet[3563]: I0117 12:03:06.087003 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18854e3f-a74b-4c08-8bc3-85fc1a268f27-tigera-ca-bundle\") pod \"calico-node-kdgsc\" (UID: \"18854e3f-a74b-4c08-8bc3-85fc1a268f27\") " pod="calico-system/calico-node-kdgsc" Jan 17 12:03:06.102292 systemd[1]: Created slice kubepods-besteffort-pod18854e3f_a74b_4c08_8bc3_85fc1a268f27.slice - libcontainer container kubepods-besteffort-pod18854e3f_a74b_4c08_8bc3_85fc1a268f27.slice. Jan 17 12:03:06.190104 kubelet[3563]: E0117 12:03:06.190057 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.190104 kubelet[3563]: W0117 12:03:06.190097 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.190299 kubelet[3563]: E0117 12:03:06.190132 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.191981 kubelet[3563]: E0117 12:03:06.191661 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.191981 kubelet[3563]: W0117 12:03:06.191720 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.191981 kubelet[3563]: E0117 12:03:06.191756 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.192977 kubelet[3563]: E0117 12:03:06.192859 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.192977 kubelet[3563]: W0117 12:03:06.192897 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.193379 kubelet[3563]: E0117 12:03:06.193152 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.193620 kubelet[3563]: E0117 12:03:06.193577 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.193620 kubelet[3563]: W0117 12:03:06.193612 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.193919 kubelet[3563]: E0117 12:03:06.193724 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.195594 kubelet[3563]: E0117 12:03:06.195460 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.195944 kubelet[3563]: W0117 12:03:06.195709 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.197047 kubelet[3563]: E0117 12:03:06.195896 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.197455 kubelet[3563]: E0117 12:03:06.197221 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.197455 kubelet[3563]: W0117 12:03:06.197242 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.197455 kubelet[3563]: E0117 12:03:06.197300 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.197846 kubelet[3563]: E0117 12:03:06.197807 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.198382 kubelet[3563]: W0117 12:03:06.198140 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.198382 kubelet[3563]: E0117 12:03:06.198205 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.200065 kubelet[3563]: E0117 12:03:06.199821 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.200065 kubelet[3563]: W0117 12:03:06.199876 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.200065 kubelet[3563]: E0117 12:03:06.200035 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.201563 kubelet[3563]: E0117 12:03:06.201335 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.201563 kubelet[3563]: W0117 12:03:06.201401 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.201563 kubelet[3563]: E0117 12:03:06.201468 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.203241 containerd[2014]: time="2025-01-17T12:03:06.202343711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848b55c95-l76ll,Uid:793962ef-7079-40cd-aeea-b7575ac7917e,Namespace:calico-system,Attempt:0,}" Jan 17 12:03:06.205379 kubelet[3563]: E0117 12:03:06.204586 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.205379 kubelet[3563]: W0117 12:03:06.204630 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.205825 kubelet[3563]: E0117 12:03:06.205740 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.209358 kubelet[3563]: E0117 12:03:06.209180 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.209358 kubelet[3563]: W0117 12:03:06.209217 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.210067 kubelet[3563]: E0117 12:03:06.209804 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.213478 kubelet[3563]: E0117 12:03:06.213411 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.214232 kubelet[3563]: W0117 12:03:06.214066 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.216225 kubelet[3563]: E0117 12:03:06.215991 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.217121 kubelet[3563]: E0117 12:03:06.216790 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.217121 kubelet[3563]: W0117 12:03:06.216822 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.217121 kubelet[3563]: E0117 12:03:06.217033 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.220195 kubelet[3563]: E0117 12:03:06.218973 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.220195 kubelet[3563]: W0117 12:03:06.219004 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.220195 kubelet[3563]: E0117 12:03:06.220140 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.222244 kubelet[3563]: E0117 12:03:06.222092 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.222244 kubelet[3563]: W0117 12:03:06.222126 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.223317 kubelet[3563]: E0117 12:03:06.222567 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.224151 kubelet[3563]: E0117 12:03:06.223928 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.224151 kubelet[3563]: W0117 12:03:06.223959 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.224151 kubelet[3563]: E0117 12:03:06.224022 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.225297 kubelet[3563]: E0117 12:03:06.224747 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.225297 kubelet[3563]: W0117 12:03:06.224879 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.225297 kubelet[3563]: E0117 12:03:06.225045 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.226200 kubelet[3563]: E0117 12:03:06.226171 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.226487 kubelet[3563]: W0117 12:03:06.226301 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.226754 kubelet[3563]: E0117 12:03:06.226633 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.229261 kubelet[3563]: E0117 12:03:06.229046 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.229261 kubelet[3563]: W0117 12:03:06.229080 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.229967 kubelet[3563]: E0117 12:03:06.229701 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.231104 kubelet[3563]: E0117 12:03:06.230954 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.231104 kubelet[3563]: W0117 12:03:06.230988 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.232173 kubelet[3563]: E0117 12:03:06.231971 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.233119 kubelet[3563]: E0117 12:03:06.233085 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.234096 kubelet[3563]: W0117 12:03:06.233978 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.234273 kubelet[3563]: E0117 12:03:06.234190 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.236709 kubelet[3563]: E0117 12:03:06.236459 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.236709 kubelet[3563]: W0117 12:03:06.236492 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.237497 kubelet[3563]: E0117 12:03:06.237207 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.237899 kubelet[3563]: E0117 12:03:06.237831 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.237899 kubelet[3563]: W0117 12:03:06.237860 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.238464 kubelet[3563]: E0117 12:03:06.238238 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.239070 kubelet[3563]: E0117 12:03:06.239041 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.239355 kubelet[3563]: W0117 12:03:06.239214 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.239823 kubelet[3563]: E0117 12:03:06.239620 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.240265 kubelet[3563]: E0117 12:03:06.240233 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.240721 kubelet[3563]: W0117 12:03:06.240424 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.241152 kubelet[3563]: E0117 12:03:06.240857 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.241631 kubelet[3563]: E0117 12:03:06.241564 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.241832 kubelet[3563]: W0117 12:03:06.241594 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.242093 kubelet[3563]: E0117 12:03:06.241824 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.242695 kubelet[3563]: E0117 12:03:06.242491 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.242695 kubelet[3563]: W0117 12:03:06.242518 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.242695 kubelet[3563]: E0117 12:03:06.242617 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.243374 kubelet[3563]: E0117 12:03:06.243345 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.243602 kubelet[3563]: W0117 12:03:06.243505 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.243602 kubelet[3563]: E0117 12:03:06.243543 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.256093 kubelet[3563]: E0117 12:03:06.256054 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.257584 kubelet[3563]: W0117 12:03:06.257465 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.257584 kubelet[3563]: E0117 12:03:06.257518 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.280379 containerd[2014]: time="2025-01-17T12:03:06.280231259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:06.281175 containerd[2014]: time="2025-01-17T12:03:06.281078759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:06.282536 containerd[2014]: time="2025-01-17T12:03:06.281658407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.283044 containerd[2014]: time="2025-01-17T12:03:06.282956963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.342964 systemd[1]: Started cri-containerd-1551b8b04d5a46e3ad9bad5d5da408b18d04572ae290c99682ed0f244556bf12.scope - libcontainer container 1551b8b04d5a46e3ad9bad5d5da408b18d04572ae290c99682ed0f244556bf12. Jan 17 12:03:06.351568 kubelet[3563]: I0117 12:03:06.351067 3563 topology_manager.go:215] "Topology Admit Handler" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" podNamespace="calico-system" podName="csi-node-driver-4b886" Jan 17 12:03:06.351568 kubelet[3563]: E0117 12:03:06.351505 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:06.374768 kubelet[3563]: E0117 12:03:06.374728 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.375022 kubelet[3563]: W0117 12:03:06.374990 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.375335 kubelet[3563]: E0117 12:03:06.375227 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.377941 kubelet[3563]: E0117 12:03:06.375995 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.377941 kubelet[3563]: W0117 12:03:06.376027 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.377941 kubelet[3563]: E0117 12:03:06.376061 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.378774 kubelet[3563]: E0117 12:03:06.378530 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.378774 kubelet[3563]: W0117 12:03:06.378562 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.378774 kubelet[3563]: E0117 12:03:06.378594 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.379368 kubelet[3563]: E0117 12:03:06.379187 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.379368 kubelet[3563]: W0117 12:03:06.379215 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.379368 kubelet[3563]: E0117 12:03:06.379243 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.380230 kubelet[3563]: E0117 12:03:06.380071 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.380230 kubelet[3563]: W0117 12:03:06.380102 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.380230 kubelet[3563]: E0117 12:03:06.380131 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.381267 kubelet[3563]: E0117 12:03:06.380994 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.381267 kubelet[3563]: W0117 12:03:06.381027 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.381267 kubelet[3563]: E0117 12:03:06.381058 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.383165 kubelet[3563]: E0117 12:03:06.383009 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.383869 kubelet[3563]: W0117 12:03:06.383469 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.383869 kubelet[3563]: E0117 12:03:06.383544 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.385357 kubelet[3563]: E0117 12:03:06.384866 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.385357 kubelet[3563]: W0117 12:03:06.384901 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.385357 kubelet[3563]: E0117 12:03:06.384980 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.387529 kubelet[3563]: E0117 12:03:06.387290 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.387529 kubelet[3563]: W0117 12:03:06.387325 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.387529 kubelet[3563]: E0117 12:03:06.387356 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.388575 kubelet[3563]: E0117 12:03:06.388319 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.388575 kubelet[3563]: W0117 12:03:06.388350 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.388575 kubelet[3563]: E0117 12:03:06.388386 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.390613 kubelet[3563]: E0117 12:03:06.389823 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.390613 kubelet[3563]: W0117 12:03:06.389857 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.390613 kubelet[3563]: E0117 12:03:06.389890 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.391776 kubelet[3563]: E0117 12:03:06.391565 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.391776 kubelet[3563]: W0117 12:03:06.391598 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.391776 kubelet[3563]: E0117 12:03:06.391634 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.393586 kubelet[3563]: E0117 12:03:06.393315 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.393586 kubelet[3563]: W0117 12:03:06.393350 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.393586 kubelet[3563]: E0117 12:03:06.393382 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.396779 kubelet[3563]: E0117 12:03:06.396192 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.396779 kubelet[3563]: W0117 12:03:06.396228 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.396779 kubelet[3563]: E0117 12:03:06.396260 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.398750 kubelet[3563]: E0117 12:03:06.398499 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.398750 kubelet[3563]: W0117 12:03:06.398534 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.398750 kubelet[3563]: E0117 12:03:06.398568 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.401522 kubelet[3563]: E0117 12:03:06.401185 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.401522 kubelet[3563]: W0117 12:03:06.401219 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.401522 kubelet[3563]: E0117 12:03:06.401251 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.402559 kubelet[3563]: E0117 12:03:06.402417 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.402559 kubelet[3563]: W0117 12:03:06.402443 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.402559 kubelet[3563]: E0117 12:03:06.402471 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.404660 kubelet[3563]: E0117 12:03:06.404151 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.404660 kubelet[3563]: W0117 12:03:06.404198 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.404660 kubelet[3563]: E0117 12:03:06.404234 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.406331 kubelet[3563]: E0117 12:03:06.406140 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.406331 kubelet[3563]: W0117 12:03:06.406176 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.406331 kubelet[3563]: E0117 12:03:06.406209 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.407722 kubelet[3563]: E0117 12:03:06.407046 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.407722 kubelet[3563]: W0117 12:03:06.407080 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.407722 kubelet[3563]: E0117 12:03:06.407112 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.409749 kubelet[3563]: E0117 12:03:06.409395 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.409749 kubelet[3563]: W0117 12:03:06.409432 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.409749 kubelet[3563]: E0117 12:03:06.409465 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.409749 kubelet[3563]: I0117 12:03:06.409512 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/baedf7e0-e7c5-4c51-97a9-7d0b1c402648-socket-dir\") pod \"csi-node-driver-4b886\" (UID: \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\") " pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:06.410729 kubelet[3563]: E0117 12:03:06.410247 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.410729 kubelet[3563]: W0117 12:03:06.410275 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.410729 kubelet[3563]: E0117 12:03:06.410306 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.410729 kubelet[3563]: I0117 12:03:06.410346 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/baedf7e0-e7c5-4c51-97a9-7d0b1c402648-registration-dir\") pod \"csi-node-driver-4b886\" (UID: \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\") " pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:06.413183 kubelet[3563]: E0117 12:03:06.412991 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.413183 kubelet[3563]: W0117 12:03:06.413125 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.413695 kubelet[3563]: E0117 12:03:06.413299 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.414228 kubelet[3563]: I0117 12:03:06.414021 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/baedf7e0-e7c5-4c51-97a9-7d0b1c402648-kubelet-dir\") pod \"csi-node-driver-4b886\" (UID: \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\") " pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:06.414546 kubelet[3563]: E0117 12:03:06.414469 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.414546 kubelet[3563]: W0117 12:03:06.414498 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.415120 kubelet[3563]: E0117 12:03:06.414668 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.416282 kubelet[3563]: E0117 12:03:06.415792 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.416282 kubelet[3563]: W0117 12:03:06.415829 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.416282 kubelet[3563]: E0117 12:03:06.415883 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.417348 kubelet[3563]: E0117 12:03:06.417100 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.417348 kubelet[3563]: W0117 12:03:06.417134 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.417348 kubelet[3563]: E0117 12:03:06.417182 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.417348 kubelet[3563]: I0117 12:03:06.417228 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4v7f\" (UniqueName: \"kubernetes.io/projected/baedf7e0-e7c5-4c51-97a9-7d0b1c402648-kube-api-access-c4v7f\") pod \"csi-node-driver-4b886\" (UID: \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\") " pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:06.418543 kubelet[3563]: E0117 12:03:06.418421 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.418543 kubelet[3563]: W0117 12:03:06.418457 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.418543 kubelet[3563]: E0117 12:03:06.418505 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.419464 kubelet[3563]: E0117 12:03:06.419395 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.419464 kubelet[3563]: W0117 12:03:06.419438 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.419823 kubelet[3563]: E0117 12:03:06.419478 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.420339 kubelet[3563]: E0117 12:03:06.420298 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.420339 kubelet[3563]: W0117 12:03:06.420331 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.420580 kubelet[3563]: E0117 12:03:06.420373 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.421534 kubelet[3563]: E0117 12:03:06.421487 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.421534 kubelet[3563]: W0117 12:03:06.421521 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.421693 kubelet[3563]: E0117 12:03:06.421564 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.423028 kubelet[3563]: I0117 12:03:06.422036 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/baedf7e0-e7c5-4c51-97a9-7d0b1c402648-varrun\") pod \"csi-node-driver-4b886\" (UID: \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\") " pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:06.423321 kubelet[3563]: E0117 12:03:06.423215 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.423321 kubelet[3563]: W0117 12:03:06.423247 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.423321 kubelet[3563]: E0117 12:03:06.423287 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.424499 kubelet[3563]: E0117 12:03:06.423899 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.424499 kubelet[3563]: W0117 12:03:06.423951 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.424499 kubelet[3563]: E0117 12:03:06.423976 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.425285 containerd[2014]: time="2025-01-17T12:03:06.425094840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdgsc,Uid:18854e3f-a74b-4c08-8bc3-85fc1a268f27,Namespace:calico-system,Attempt:0,}" Jan 17 12:03:06.427924 kubelet[3563]: E0117 12:03:06.426793 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.427924 kubelet[3563]: W0117 12:03:06.426824 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.427924 kubelet[3563]: E0117 12:03:06.426857 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.429990 kubelet[3563]: E0117 12:03:06.429116 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.430445 kubelet[3563]: W0117 12:03:06.430239 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.430445 kubelet[3563]: E0117 12:03:06.430294 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.435141 kubelet[3563]: E0117 12:03:06.435022 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.435141 kubelet[3563]: W0117 12:03:06.435057 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.435141 kubelet[3563]: E0117 12:03:06.435091 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.493830 containerd[2014]: time="2025-01-17T12:03:06.492480025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:06.493830 containerd[2014]: time="2025-01-17T12:03:06.492583321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:06.493830 containerd[2014]: time="2025-01-17T12:03:06.492610021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.493830 containerd[2014]: time="2025-01-17T12:03:06.493053949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:06.530558 kubelet[3563]: E0117 12:03:06.530235 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.530558 kubelet[3563]: W0117 12:03:06.530272 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.530558 kubelet[3563]: E0117 12:03:06.530307 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.531123 kubelet[3563]: E0117 12:03:06.530882 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.531123 kubelet[3563]: W0117 12:03:06.530938 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.531123 kubelet[3563]: E0117 12:03:06.530982 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.534500 kubelet[3563]: E0117 12:03:06.534431 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.534500 kubelet[3563]: W0117 12:03:06.534471 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.534500 kubelet[3563]: E0117 12:03:06.534521 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.540063 kubelet[3563]: E0117 12:03:06.539990 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.540063 kubelet[3563]: W0117 12:03:06.540051 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.541221 kubelet[3563]: E0117 12:03:06.540866 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.541221 kubelet[3563]: W0117 12:03:06.540899 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.541221 kubelet[3563]: E0117 12:03:06.540962 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.541221 kubelet[3563]: E0117 12:03:06.541001 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.543018 kubelet[3563]: E0117 12:03:06.542285 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.543018 kubelet[3563]: W0117 12:03:06.542339 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.543018 kubelet[3563]: E0117 12:03:06.542533 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.545236 kubelet[3563]: E0117 12:03:06.544587 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.545236 kubelet[3563]: W0117 12:03:06.544621 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.545236 kubelet[3563]: E0117 12:03:06.544694 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.547246 kubelet[3563]: E0117 12:03:06.546625 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.547246 kubelet[3563]: W0117 12:03:06.546660 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.547246 kubelet[3563]: E0117 12:03:06.546702 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.548973 kubelet[3563]: E0117 12:03:06.548219 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.548973 kubelet[3563]: W0117 12:03:06.548691 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.548973 kubelet[3563]: E0117 12:03:06.548789 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.549756 kubelet[3563]: E0117 12:03:06.549512 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.549756 kubelet[3563]: W0117 12:03:06.549543 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.550197 kubelet[3563]: E0117 12:03:06.549972 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.551074 kubelet[3563]: E0117 12:03:06.550780 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.551074 kubelet[3563]: W0117 12:03:06.550819 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.551671 kubelet[3563]: E0117 12:03:06.551638 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.551999 kubelet[3563]: W0117 12:03:06.551803 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.552467 kubelet[3563]: E0117 12:03:06.552401 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.552752 kubelet[3563]: W0117 12:03:06.552583 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.553069 kubelet[3563]: E0117 12:03:06.553001 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.553147 kubelet[3563]: E0117 12:03:06.553070 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.553147 kubelet[3563]: E0117 12:03:06.553121 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.555123 kubelet[3563]: E0117 12:03:06.554829 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.555123 kubelet[3563]: W0117 12:03:06.554861 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.555453 kubelet[3563]: E0117 12:03:06.555394 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.555876 kubelet[3563]: E0117 12:03:06.555611 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.555876 kubelet[3563]: W0117 12:03:06.555635 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.555876 kubelet[3563]: E0117 12:03:06.555695 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.559203 kubelet[3563]: E0117 12:03:06.558278 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.559203 kubelet[3563]: W0117 12:03:06.558312 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.560833 kubelet[3563]: E0117 12:03:06.560016 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.560833 kubelet[3563]: E0117 12:03:06.560166 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.560833 kubelet[3563]: W0117 12:03:06.560184 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.561619 kubelet[3563]: E0117 12:03:06.561409 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.561619 kubelet[3563]: W0117 12:03:06.561441 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.562299 kubelet[3563]: E0117 12:03:06.562066 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.562299 kubelet[3563]: W0117 12:03:06.562092 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.562731 kubelet[3563]: E0117 12:03:06.562694 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.563132 kubelet[3563]: W0117 12:03:06.562827 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.563132 kubelet[3563]: E0117 12:03:06.562863 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.563132 kubelet[3563]: E0117 12:03:06.562960 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.563616 kubelet[3563]: E0117 12:03:06.563576 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.563978 kubelet[3563]: W0117 12:03:06.563716 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.563978 kubelet[3563]: E0117 12:03:06.563751 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.564509 kubelet[3563]: E0117 12:03:06.564465 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.565631 kubelet[3563]: W0117 12:03:06.564639 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.565631 kubelet[3563]: E0117 12:03:06.564676 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.565631 kubelet[3563]: E0117 12:03:06.565058 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.565528 systemd[1]: Started cri-containerd-38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd.scope - libcontainer container 38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd. Jan 17 12:03:06.566976 kubelet[3563]: E0117 12:03:06.566928 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.567448 kubelet[3563]: W0117 12:03:06.567117 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.567448 kubelet[3563]: E0117 12:03:06.567160 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.569858 kubelet[3563]: E0117 12:03:06.569802 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.570412 kubelet[3563]: E0117 12:03:06.570242 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.570815 kubelet[3563]: W0117 12:03:06.570495 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.570815 kubelet[3563]: E0117 12:03:06.570529 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:06.572118 kubelet[3563]: E0117 12:03:06.571975 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.572118 kubelet[3563]: W0117 12:03:06.572021 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.572118 kubelet[3563]: E0117 12:03:06.572055 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.602097 kubelet[3563]: E0117 12:03:06.602048 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:06.602097 kubelet[3563]: W0117 12:03:06.602085 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:06.602270 kubelet[3563]: E0117 12:03:06.602121 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:06.684506 containerd[2014]: time="2025-01-17T12:03:06.684382129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5848b55c95-l76ll,Uid:793962ef-7079-40cd-aeea-b7575ac7917e,Namespace:calico-system,Attempt:0,} returns sandbox id \"1551b8b04d5a46e3ad9bad5d5da408b18d04572ae290c99682ed0f244556bf12\"" Jan 17 12:03:06.701322 containerd[2014]: time="2025-01-17T12:03:06.701173502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:03:06.709113 containerd[2014]: time="2025-01-17T12:03:06.708952850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kdgsc,Uid:18854e3f-a74b-4c08-8bc3-85fc1a268f27,Namespace:calico-system,Attempt:0,} returns sandbox id \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\"" Jan 17 12:03:08.270751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount332963416.mount: Deactivated successfully. 
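[Editor's note] The repeated driver-call.go / plugins.go entries above are kubelet's FlexVolume probe: it finds a plugin directory named nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to execute the uds binary with the init argument, gets empty output because the executable is not present, and then fails to unmarshal that empty output as JSON, so the plugin is skipped. A FlexVolume driver is expected to answer init with a one-line JSON status. The Go sketch below is a hypothetical minimal driver stub illustrating that contract; it is not the actual nodeagent~uds driver referenced in this log, and the field names follow the generally documented FlexVolume status format ("status", "message", "capabilities").

    // Hypothetical minimal FlexVolume driver stub (illustration only, not the
    // nodeagent~uds driver from this log). If such a binary were installed at
    // /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds,
    // kubelet's "init" probe would receive parseable JSON instead of empty output.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            // Report success and advertise that no separate attach step is needed.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        // Any other call is reported as unsupported, so kubelet falls back.
        out, _ := json.Marshal(driverStatus{Status: "Not supported"})
        fmt.Println(string(out))
    }

A stub like this should quiet the probe errors, though as the log shows they are otherwise harmless noise: kubelet simply skips the plugin and continues.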
Jan 17 12:03:08.275016 kubelet[3563]: E0117 12:03:08.274863 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:09.092358 containerd[2014]: time="2025-01-17T12:03:09.092015305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:09.093825 containerd[2014]: time="2025-01-17T12:03:09.093735853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 17 12:03:09.095505 containerd[2014]: time="2025-01-17T12:03:09.095418349Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:09.100614 containerd[2014]: time="2025-01-17T12:03:09.100523509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:09.103187 containerd[2014]: time="2025-01-17T12:03:09.102935629Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.401630943s" Jan 17 12:03:09.103187 containerd[2014]: time="2025-01-17T12:03:09.102996301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:03:09.105745 containerd[2014]: time="2025-01-17T12:03:09.105242317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:03:09.133811 containerd[2014]: time="2025-01-17T12:03:09.133660094Z" level=info msg="CreateContainer within sandbox \"1551b8b04d5a46e3ad9bad5d5da408b18d04572ae290c99682ed0f244556bf12\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:03:09.180711 containerd[2014]: time="2025-01-17T12:03:09.180635546Z" level=info msg="CreateContainer within sandbox \"1551b8b04d5a46e3ad9bad5d5da408b18d04572ae290c99682ed0f244556bf12\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4de10330ca5320f1334e8449a6153bf30b13fdb6a46ba7a1dc941f25d08e0d73\"" Jan 17 12:03:09.183823 containerd[2014]: time="2025-01-17T12:03:09.182191046Z" level=info msg="StartContainer for \"4de10330ca5320f1334e8449a6153bf30b13fdb6a46ba7a1dc941f25d08e0d73\"" Jan 17 12:03:09.247257 systemd[1]: Started cri-containerd-4de10330ca5320f1334e8449a6153bf30b13fdb6a46ba7a1dc941f25d08e0d73.scope - libcontainer container 4de10330ca5320f1334e8449a6153bf30b13fdb6a46ba7a1dc941f25d08e0d73. 
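[Editor's note] The pod_workers.go "network is not ready ... cni plugin not initialized" entry for csi-node-driver-4b886 means kubelet is still reporting NetworkReady=false: pods that need pod networking cannot be synced until a CNI network configuration appears, which is expected to happen once the calico-node pod brought up above installs its config. The Go sketch below is a hypothetical readiness check assuming kubelet's conventional CNI config directory (/etc/cni/net.d); neither the path nor the check is taken from this log.

    // Hypothetical check for CNI configuration (assumption: default kubelet
    // CNI config directory, not confirmed by this log). kubelet keeps logging
    // "cni plugin not initialized" until a config file shows up here.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        confDir := "/etc/cni/net.d" // conventional kubelet CNI config location
        entries, err := os.ReadDir(confDir)
        if err != nil {
            fmt.Printf("no CNI config yet: %v\n", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Printf("CNI config present: %s\n", filepath.Join(confDir, e.Name()))
                found = true
            }
        }
        if !found {
            fmt.Println("CNI config directory exists but is empty; network still not ready")
        }
    }

In this log the condition is transient and repeats only until Calico finishes initializing; the calico-node and calico-typha sandboxes created above are the components that eventually provide that configuration.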
Jan 17 12:03:09.354161 containerd[2014]: time="2025-01-17T12:03:09.354009843Z" level=info msg="StartContainer for \"4de10330ca5320f1334e8449a6153bf30b13fdb6a46ba7a1dc941f25d08e0d73\" returns successfully" Jan 17 12:03:09.531862 kubelet[3563]: E0117 12:03:09.531795 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.533488 kubelet[3563]: W0117 12:03:09.532958 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.533488 kubelet[3563]: E0117 12:03:09.533040 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.536325 kubelet[3563]: E0117 12:03:09.536111 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.536325 kubelet[3563]: W0117 12:03:09.536169 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.536325 kubelet[3563]: E0117 12:03:09.536203 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.537474 kubelet[3563]: E0117 12:03:09.537161 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.537474 kubelet[3563]: W0117 12:03:09.537217 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.537474 kubelet[3563]: E0117 12:03:09.537250 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.538285 kubelet[3563]: E0117 12:03:09.538053 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.538285 kubelet[3563]: W0117 12:03:09.538083 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.538285 kubelet[3563]: E0117 12:03:09.538169 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.539289 kubelet[3563]: E0117 12:03:09.539036 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.539289 kubelet[3563]: W0117 12:03:09.539067 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.539289 kubelet[3563]: E0117 12:03:09.539120 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.540046 kubelet[3563]: E0117 12:03:09.539820 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.540046 kubelet[3563]: W0117 12:03:09.539871 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.540046 kubelet[3563]: E0117 12:03:09.539900 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.541037 kubelet[3563]: E0117 12:03:09.540752 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.541037 kubelet[3563]: W0117 12:03:09.540784 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.541037 kubelet[3563]: E0117 12:03:09.540812 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.542072 kubelet[3563]: E0117 12:03:09.541425 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.542072 kubelet[3563]: W0117 12:03:09.541453 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.542072 kubelet[3563]: E0117 12:03:09.541480 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.542658 kubelet[3563]: E0117 12:03:09.542452 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.542658 kubelet[3563]: W0117 12:03:09.542483 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.542658 kubelet[3563]: E0117 12:03:09.542514 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.544353 kubelet[3563]: E0117 12:03:09.544022 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.544353 kubelet[3563]: W0117 12:03:09.544056 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.545441 kubelet[3563]: E0117 12:03:09.544089 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.546418 kubelet[3563]: E0117 12:03:09.545845 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.546418 kubelet[3563]: W0117 12:03:09.545876 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.546418 kubelet[3563]: E0117 12:03:09.546041 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.547411 kubelet[3563]: E0117 12:03:09.547113 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.547411 kubelet[3563]: W0117 12:03:09.547149 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.547411 kubelet[3563]: E0117 12:03:09.547200 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.548164 kubelet[3563]: E0117 12:03:09.547758 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.548164 kubelet[3563]: W0117 12:03:09.547784 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.548164 kubelet[3563]: E0117 12:03:09.547810 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.550805 kubelet[3563]: E0117 12:03:09.550320 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.550805 kubelet[3563]: W0117 12:03:09.550360 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.550805 kubelet[3563]: E0117 12:03:09.550394 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.551658 kubelet[3563]: E0117 12:03:09.551435 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.551972 kubelet[3563]: W0117 12:03:09.551934 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.552458 kubelet[3563]: E0117 12:03:09.552297 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.573247 kubelet[3563]: E0117 12:03:09.573034 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.573247 kubelet[3563]: W0117 12:03:09.573078 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.573247 kubelet[3563]: E0117 12:03:09.573115 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.574303 kubelet[3563]: E0117 12:03:09.574257 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.574303 kubelet[3563]: W0117 12:03:09.574292 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.574682 kubelet[3563]: E0117 12:03:09.574338 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.576880 kubelet[3563]: E0117 12:03:09.576826 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.576880 kubelet[3563]: W0117 12:03:09.576865 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.577313 kubelet[3563]: E0117 12:03:09.577023 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.579539 kubelet[3563]: E0117 12:03:09.579485 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.579539 kubelet[3563]: W0117 12:03:09.579523 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.581079 kubelet[3563]: E0117 12:03:09.579785 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.581079 kubelet[3563]: E0117 12:03:09.581060 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.581205 kubelet[3563]: W0117 12:03:09.581087 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.581772 kubelet[3563]: E0117 12:03:09.581734 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.583212 kubelet[3563]: E0117 12:03:09.582417 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.583364 kubelet[3563]: W0117 12:03:09.583213 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.583898 kubelet[3563]: E0117 12:03:09.583621 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.584860 kubelet[3563]: E0117 12:03:09.584621 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.584860 kubelet[3563]: W0117 12:03:09.584686 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.584860 kubelet[3563]: E0117 12:03:09.584750 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.585668 kubelet[3563]: E0117 12:03:09.585372 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.585668 kubelet[3563]: W0117 12:03:09.585394 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.585668 kubelet[3563]: E0117 12:03:09.585494 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.586636 kubelet[3563]: E0117 12:03:09.585983 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.586636 kubelet[3563]: W0117 12:03:09.586013 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.586636 kubelet[3563]: E0117 12:03:09.586139 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.587748 kubelet[3563]: E0117 12:03:09.586854 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.587748 kubelet[3563]: W0117 12:03:09.586886 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.587959 kubelet[3563]: E0117 12:03:09.587836 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.587959 kubelet[3563]: W0117 12:03:09.587859 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.588214 kubelet[3563]: E0117 12:03:09.588101 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.591134 kubelet[3563]: E0117 12:03:09.590844 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.591255 kubelet[3563]: W0117 12:03:09.591221 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.591349 kubelet[3563]: E0117 12:03:09.591263 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.596539 kubelet[3563]: E0117 12:03:09.596201 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.596539 kubelet[3563]: W0117 12:03:09.596277 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.596539 kubelet[3563]: E0117 12:03:09.596312 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.597934 kubelet[3563]: E0117 12:03:09.597516 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.597934 kubelet[3563]: W0117 12:03:09.597546 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.597934 kubelet[3563]: E0117 12:03:09.597575 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:09.602182 kubelet[3563]: E0117 12:03:09.601595 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.602182 kubelet[3563]: W0117 12:03:09.601630 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.602182 kubelet[3563]: E0117 12:03:09.601663 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.602667 kubelet[3563]: E0117 12:03:09.602636 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.603416 kubelet[3563]: E0117 12:03:09.603382 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.603614 kubelet[3563]: W0117 12:03:09.603582 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.603734 kubelet[3563]: E0117 12:03:09.603709 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.605370 kubelet[3563]: E0117 12:03:09.605241 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.606153 kubelet[3563]: W0117 12:03:09.605562 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.606153 kubelet[3563]: E0117 12:03:09.605606 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:09.607473 kubelet[3563]: E0117 12:03:09.607433 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:09.608026 kubelet[3563]: W0117 12:03:09.607971 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:09.608324 kubelet[3563]: E0117 12:03:09.608214 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:10.274801 kubelet[3563]: E0117 12:03:10.274638 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:10.461365 containerd[2014]: time="2025-01-17T12:03:10.461221600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:10.463150 containerd[2014]: time="2025-01-17T12:03:10.463059100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 17 12:03:10.465476 containerd[2014]: time="2025-01-17T12:03:10.465392656Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:10.470152 containerd[2014]: time="2025-01-17T12:03:10.470048980Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:10.471681 containerd[2014]: time="2025-01-17T12:03:10.471503488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.366202431s" Jan 17 12:03:10.471681 containerd[2014]: time="2025-01-17T12:03:10.471560620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:03:10.477139 kubelet[3563]: I0117 12:03:10.475930 3563 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:10.478083 containerd[2014]: time="2025-01-17T12:03:10.478031104Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:03:10.513843 containerd[2014]: time="2025-01-17T12:03:10.513786016Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08\"" Jan 17 12:03:10.515448 containerd[2014]: time="2025-01-17T12:03:10.515387909Z" level=info msg="StartContainer for \"1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08\"" Jan 17 12:03:10.561606 kubelet[3563]: E0117 12:03:10.560872 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.561606 kubelet[3563]: W0117 12:03:10.560948 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.561606 
kubelet[3563]: E0117 12:03:10.560984 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:10.565399 kubelet[3563]: E0117 12:03:10.564159 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.565399 kubelet[3563]: W0117 12:03:10.564243 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.565399 kubelet[3563]: E0117 12:03:10.564279 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:10.565681 kubelet[3563]: E0117 12:03:10.565539 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.565736 kubelet[3563]: W0117 12:03:10.565644 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.565736 kubelet[3563]: E0117 12:03:10.565713 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:10.568818 kubelet[3563]: E0117 12:03:10.567244 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.568818 kubelet[3563]: W0117 12:03:10.567308 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.568818 kubelet[3563]: E0117 12:03:10.567346 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:10.568818 kubelet[3563]: E0117 12:03:10.568456 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.568818 kubelet[3563]: W0117 12:03:10.568486 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.568818 kubelet[3563]: E0117 12:03:10.568516 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:03:10.570313 kubelet[3563]: E0117 12:03:10.569232 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.570313 kubelet[3563]: W0117 12:03:10.569306 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.570313 kubelet[3563]: E0117 12:03:10.569335 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:03:10.596272 systemd[1]: Started cri-containerd-1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08.scope - libcontainer container 1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08.
Jan 17 12:03:10.603976 kubelet[3563]: E0117 12:03:10.603539 3563 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:03:10.603976 kubelet[3563]: W0117 12:03:10.603571 3563 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:03:10.603976 kubelet[3563]: E0117 12:03:10.603600 3563 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:03:10.663044 containerd[2014]: time="2025-01-17T12:03:10.662850293Z" level=info msg="StartContainer for \"1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08\" returns successfully" Jan 17 12:03:10.689731 systemd[1]: cri-containerd-1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08.scope: Deactivated successfully. Jan 17 12:03:10.740847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08-rootfs.mount: Deactivated successfully.
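
The burst of "Failed to unmarshal output for command: init" entries above comes from the kubelet's FlexVolume prober: it executes each driver binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument `init` and expects a JSON status object on stdout. Because the nodeagent~uds/uds executable is missing, the call produces no output at all, so the JSON decode fails with "unexpected end of JSON input". The sketch below is a hypothetical minimal driver, not the missing uds binary, showing the kind of response the prober is waiting for.

```go
// Minimal sketch of a FlexVolume driver's "init" handler (hypothetical driver,
// not the nodeagent~uds/uds binary from the log). The kubelet runs the binary
// with "init" and parses its stdout as JSON; an empty stdout is exactly what
// produces "unexpected end of JSON input" above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Any verb this sketch does not implement is reported as unsupported.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```
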
Jan 17 12:03:10.962014 containerd[2014]: time="2025-01-17T12:03:10.961937227Z" level=info msg="shim disconnected" id=1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08 namespace=k8s.io Jan 17 12:03:10.962014 containerd[2014]: time="2025-01-17T12:03:10.962007631Z" level=warning msg="cleaning up after shim disconnected" id=1be5a0e79ea79cfac781a393d59101c157a042066f7946e9d9e80765b7719a08 namespace=k8s.io Jan 17 12:03:10.962377 containerd[2014]: time="2025-01-17T12:03:10.962028739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:11.141439 kubelet[3563]: I0117 12:03:11.139849 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5848b55c95-l76ll" podStartSLOduration=3.735020213 podStartE2EDuration="6.139825924s" podCreationTimestamp="2025-01-17 12:03:05 +0000 UTC" firstStartedPulling="2025-01-17 12:03:06.700073222 +0000 UTC m=+27.672841759" lastFinishedPulling="2025-01-17 12:03:09.104878933 +0000 UTC m=+30.077647470" observedRunningTime="2025-01-17 12:03:09.538836904 +0000 UTC m=+30.511605441" watchObservedRunningTime="2025-01-17 12:03:11.139825924 +0000 UTC m=+32.112594461" Jan 17 12:03:11.483971 containerd[2014]: time="2025-01-17T12:03:11.482966993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:03:12.275036 kubelet[3563]: E0117 12:03:12.274956 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:14.274982 kubelet[3563]: E0117 12:03:14.274802 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:15.138649 containerd[2014]: time="2025-01-17T12:03:15.138399583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.140074 containerd[2014]: time="2025-01-17T12:03:15.139745803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:03:15.141109 containerd[2014]: time="2025-01-17T12:03:15.141015751Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.146789 containerd[2014]: time="2025-01-17T12:03:15.146734484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:15.150379 containerd[2014]: time="2025-01-17T12:03:15.150188228Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.666968527s" Jan 17 12:03:15.150379 containerd[2014]: time="2025-01-17T12:03:15.150244460Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:03:15.155238 containerd[2014]: time="2025-01-17T12:03:15.154475048Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:03:15.177528 containerd[2014]: time="2025-01-17T12:03:15.177460340Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292\"" Jan 17 12:03:15.179619 containerd[2014]: time="2025-01-17T12:03:15.179494616Z" level=info msg="StartContainer for \"6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292\"" Jan 17 12:03:15.183372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089229971.mount: Deactivated successfully. Jan 17 12:03:15.246267 systemd[1]: Started cri-containerd-6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292.scope - libcontainer container 6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292. Jan 17 12:03:15.299368 containerd[2014]: time="2025-01-17T12:03:15.299290508Z" level=info msg="StartContainer for \"6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292\" returns successfully" Jan 17 12:03:16.211726 containerd[2014]: time="2025-01-17T12:03:16.211602717Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:03:16.215672 systemd[1]: cri-containerd-6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292.scope: Deactivated successfully. Jan 17 12:03:16.257064 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292-rootfs.mount: Deactivated successfully. Jan 17 12:03:16.267159 kubelet[3563]: I0117 12:03:16.265936 3563 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:03:16.296705 systemd[1]: Created slice kubepods-besteffort-podbaedf7e0_e7c5_4c51_97a9_7d0b1c402648.slice - libcontainer container kubepods-besteffort-podbaedf7e0_e7c5_4c51_97a9_7d0b1c402648.slice. Jan 17 12:03:16.306698 containerd[2014]: time="2025-01-17T12:03:16.306596517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b886,Uid:baedf7e0-e7c5-4c51-97a9-7d0b1c402648,Namespace:calico-system,Attempt:0,}" Jan 17 12:03:16.341339 kubelet[3563]: I0117 12:03:16.340082 3563 topology_manager.go:215] "Topology Admit Handler" podUID="19beb928-574a-4f64-bc0a-c826b89636c5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sblcc" Jan 17 12:03:16.365745 systemd[1]: Created slice kubepods-burstable-pod19beb928_574a_4f64_bc0a_c826b89636c5.slice - libcontainer container kubepods-burstable-pod19beb928_574a_4f64_bc0a_c826b89636c5.slice. 
Jan 17 12:03:16.383178 kubelet[3563]: I0117 12:03:16.382470 3563 topology_manager.go:215] "Topology Admit Handler" podUID="2d430a54-0d30-40c8-9b81-8b7e74c0d58c" podNamespace="calico-system" podName="calico-kube-controllers-bd8d9fff6-r5hwg" Jan 17 12:03:16.385737 kubelet[3563]: I0117 12:03:16.385385 3563 topology_manager.go:215] "Topology Admit Handler" podUID="bb9ce353-9364-48a8-8b98-45115ba0dad6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-wt5l4" Jan 17 12:03:16.443397 kubelet[3563]: I0117 12:03:16.439787 3563 topology_manager.go:215] "Topology Admit Handler" podUID="e5afc712-523e-4ec9-a9bc-ea9f7fc8548e" podNamespace="calico-apiserver" podName="calico-apiserver-9b75cd897-82x5r" Jan 17 12:03:16.443397 kubelet[3563]: I0117 12:03:16.442391 3563 topology_manager.go:215] "Topology Admit Handler" podUID="3da31367-17d6-48fb-b3e8-b7aa30d9e927" podNamespace="calico-apiserver" podName="calico-apiserver-9b75cd897-2lj2c" Jan 17 12:03:16.449183 kubelet[3563]: I0117 12:03:16.449136 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19beb928-574a-4f64-bc0a-c826b89636c5-config-volume\") pod \"coredns-7db6d8ff4d-sblcc\" (UID: \"19beb928-574a-4f64-bc0a-c826b89636c5\") " pod="kube-system/coredns-7db6d8ff4d-sblcc" Jan 17 12:03:16.459839 kubelet[3563]: I0117 12:03:16.459697 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzvn7\" (UniqueName: \"kubernetes.io/projected/19beb928-574a-4f64-bc0a-c826b89636c5-kube-api-access-bzvn7\") pod \"coredns-7db6d8ff4d-sblcc\" (UID: \"19beb928-574a-4f64-bc0a-c826b89636c5\") " pod="kube-system/coredns-7db6d8ff4d-sblcc" Jan 17 12:03:16.463940 systemd[1]: Created slice kubepods-burstable-podbb9ce353_9364_48a8_8b98_45115ba0dad6.slice - libcontainer container kubepods-burstable-podbb9ce353_9364_48a8_8b98_45115ba0dad6.slice. Jan 17 12:03:16.489517 systemd[1]: Created slice kubepods-besteffort-pod2d430a54_0d30_40c8_9b81_8b7e74c0d58c.slice - libcontainer container kubepods-besteffort-pod2d430a54_0d30_40c8_9b81_8b7e74c0d58c.slice. Jan 17 12:03:16.516320 systemd[1]: Created slice kubepods-besteffort-pod3da31367_17d6_48fb_b3e8_b7aa30d9e927.slice - libcontainer container kubepods-besteffort-pod3da31367_17d6_48fb_b3e8_b7aa30d9e927.slice. Jan 17 12:03:16.533470 systemd[1]: Created slice kubepods-besteffort-pode5afc712_523e_4ec9_a9bc_ea9f7fc8548e.slice - libcontainer container kubepods-besteffort-pode5afc712_523e_4ec9_a9bc_ea9f7fc8548e.slice. 
Jan 17 12:03:16.560673 kubelet[3563]: I0117 12:03:16.560441 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bb9ce353-9364-48a8-8b98-45115ba0dad6-config-volume\") pod \"coredns-7db6d8ff4d-wt5l4\" (UID: \"bb9ce353-9364-48a8-8b98-45115ba0dad6\") " pod="kube-system/coredns-7db6d8ff4d-wt5l4" Jan 17 12:03:16.560673 kubelet[3563]: I0117 12:03:16.560536 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e5afc712-523e-4ec9-a9bc-ea9f7fc8548e-calico-apiserver-certs\") pod \"calico-apiserver-9b75cd897-82x5r\" (UID: \"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e\") " pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" Jan 17 12:03:16.560673 kubelet[3563]: I0117 12:03:16.560622 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9stk5\" (UniqueName: \"kubernetes.io/projected/3da31367-17d6-48fb-b3e8-b7aa30d9e927-kube-api-access-9stk5\") pod \"calico-apiserver-9b75cd897-2lj2c\" (UID: \"3da31367-17d6-48fb-b3e8-b7aa30d9e927\") " pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" Jan 17 12:03:16.560673 kubelet[3563]: I0117 12:03:16.560659 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g795l\" (UniqueName: \"kubernetes.io/projected/e5afc712-523e-4ec9-a9bc-ea9f7fc8548e-kube-api-access-g795l\") pod \"calico-apiserver-9b75cd897-82x5r\" (UID: \"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e\") " pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" Jan 17 12:03:16.561054 kubelet[3563]: I0117 12:03:16.560717 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d430a54-0d30-40c8-9b81-8b7e74c0d58c-tigera-ca-bundle\") pod \"calico-kube-controllers-bd8d9fff6-r5hwg\" (UID: \"2d430a54-0d30-40c8-9b81-8b7e74c0d58c\") " pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" Jan 17 12:03:16.561054 kubelet[3563]: I0117 12:03:16.560765 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3da31367-17d6-48fb-b3e8-b7aa30d9e927-calico-apiserver-certs\") pod \"calico-apiserver-9b75cd897-2lj2c\" (UID: \"3da31367-17d6-48fb-b3e8-b7aa30d9e927\") " pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" Jan 17 12:03:16.561054 kubelet[3563]: I0117 12:03:16.560800 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-669sx\" (UniqueName: \"kubernetes.io/projected/2d430a54-0d30-40c8-9b81-8b7e74c0d58c-kube-api-access-669sx\") pod \"calico-kube-controllers-bd8d9fff6-r5hwg\" (UID: \"2d430a54-0d30-40c8-9b81-8b7e74c0d58c\") " pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" Jan 17 12:03:16.561054 kubelet[3563]: I0117 12:03:16.560838 3563 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hmft\" (UniqueName: \"kubernetes.io/projected/bb9ce353-9364-48a8-8b98-45115ba0dad6-kube-api-access-6hmft\") pod \"coredns-7db6d8ff4d-wt5l4\" (UID: \"bb9ce353-9364-48a8-8b98-45115ba0dad6\") " pod="kube-system/coredns-7db6d8ff4d-wt5l4" Jan 17 12:03:16.704637 containerd[2014]: time="2025-01-17T12:03:16.691226555Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-sblcc,Uid:19beb928-574a-4f64-bc0a-c826b89636c5,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:16.704637 containerd[2014]: time="2025-01-17T12:03:16.695999963Z" level=error msg="Failed to destroy network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:16.704637 containerd[2014]: time="2025-01-17T12:03:16.704383343Z" level=error msg="encountered an error cleaning up failed sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:16.704637 containerd[2014]: time="2025-01-17T12:03:16.704481251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b886,Uid:baedf7e0-e7c5-4c51-97a9-7d0b1c402648,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:16.710195 kubelet[3563]: E0117 12:03:16.707562 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:16.710506 kubelet[3563]: E0117 12:03:16.710458 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:16.715028 kubelet[3563]: E0117 12:03:16.714559 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4b886" Jan 17 12:03:16.715028 kubelet[3563]: E0117 12:03:16.714669 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4b886_calico-system(baedf7e0-e7c5-4c51-97a9-7d0b1c402648)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4b886_calico-system(baedf7e0-e7c5-4c51-97a9-7d0b1c402648)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:16.791656 containerd[2014]: time="2025-01-17T12:03:16.790683660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt5l4,Uid:bb9ce353-9364-48a8-8b98-45115ba0dad6,Namespace:kube-system,Attempt:0,}" Jan 17 12:03:16.809293 containerd[2014]: time="2025-01-17T12:03:16.809193240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd8d9fff6-r5hwg,Uid:2d430a54-0d30-40c8-9b81-8b7e74c0d58c,Namespace:calico-system,Attempt:0,}" Jan 17 12:03:16.828532 containerd[2014]: time="2025-01-17T12:03:16.828176808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-2lj2c,Uid:3da31367-17d6-48fb-b3e8-b7aa30d9e927,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:03:16.842734 containerd[2014]: time="2025-01-17T12:03:16.842653824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-82x5r,Uid:e5afc712-523e-4ec9-a9bc-ea9f7fc8548e,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:03:17.146817 containerd[2014]: time="2025-01-17T12:03:17.146655009Z" level=info msg="shim disconnected" id=6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292 namespace=k8s.io Jan 17 12:03:17.147559 containerd[2014]: time="2025-01-17T12:03:17.147268041Z" level=warning msg="cleaning up after shim disconnected" id=6f5a255ad04a4249869e3378af39925960fcf5fd0d7afd37136d273b73c9c292 namespace=k8s.io Jan 17 12:03:17.147559 containerd[2014]: time="2025-01-17T12:03:17.147319329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:03:17.176480 containerd[2014]: time="2025-01-17T12:03:17.176419834Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:03:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:03:17.316056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48-shm.mount: Deactivated successfully. 
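
The RunPodSandbox failure for csi-node-driver-4b886 above, and the identical failures for the coredns, calico-kube-controllers, and calico-apiserver pods that follow, all share one root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico mounted from the host. Until calico/node comes up, every sandbox add and delete fails with this error. The check below is an illustrative sketch of that gate, not Calico's actual source.

```go
// Illustrative readiness check (not Calico's source) matching the log error:
// the CNI plugin cannot proceed until /var/lib/calico/nodename exists, since
// that file is written by the calico/node container after it starts with
// /var/lib/calico bind-mounted from the host.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func calicoNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf(
			"stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := calicoNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("calico node name:", name)
}
```
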
Jan 17 12:03:17.525745 containerd[2014]: time="2025-01-17T12:03:17.524584871Z" level=error msg="Failed to destroy network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.525745 containerd[2014]: time="2025-01-17T12:03:17.525202835Z" level=error msg="encountered an error cleaning up failed sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.525745 containerd[2014]: time="2025-01-17T12:03:17.525284459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sblcc,Uid:19beb928-574a-4f64-bc0a-c826b89636c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.525745 containerd[2014]: time="2025-01-17T12:03:17.525487559Z" level=error msg="Failed to destroy network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.526826 containerd[2014]: time="2025-01-17T12:03:17.525998687Z" level=error msg="encountered an error cleaning up failed sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.526826 containerd[2014]: time="2025-01-17T12:03:17.526072535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd8d9fff6-r5hwg,Uid:2d430a54-0d30-40c8-9b81-8b7e74c0d58c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.530863 kubelet[3563]: E0117 12:03:17.530657 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.530863 kubelet[3563]: E0117 12:03:17.530742 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" Jan 17 12:03:17.530863 kubelet[3563]: E0117 12:03:17.530777 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" Jan 17 12:03:17.533489 kubelet[3563]: E0117 12:03:17.530848 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bd8d9fff6-r5hwg_calico-system(2d430a54-0d30-40c8-9b81-8b7e74c0d58c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bd8d9fff6-r5hwg_calico-system(2d430a54-0d30-40c8-9b81-8b7e74c0d58c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" podUID="2d430a54-0d30-40c8-9b81-8b7e74c0d58c" Jan 17 12:03:17.533489 kubelet[3563]: E0117 12:03:17.531477 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.533489 kubelet[3563]: E0117 12:03:17.531543 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sblcc" Jan 17 12:03:17.533780 kubelet[3563]: E0117 12:03:17.531575 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sblcc" Jan 17 12:03:17.533780 kubelet[3563]: E0117 12:03:17.531631 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sblcc_kube-system(19beb928-574a-4f64-bc0a-c826b89636c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sblcc_kube-system(19beb928-574a-4f64-bc0a-c826b89636c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sblcc" podUID="19beb928-574a-4f64-bc0a-c826b89636c5" Jan 17 12:03:17.536678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed-shm.mount: Deactivated successfully. Jan 17 12:03:17.541315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1-shm.mount: Deactivated successfully. Jan 17 12:03:17.545233 containerd[2014]: time="2025-01-17T12:03:17.545177867Z" level=error msg="Failed to destroy network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.546551 containerd[2014]: time="2025-01-17T12:03:17.546471647Z" level=error msg="encountered an error cleaning up failed sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.547035 containerd[2014]: time="2025-01-17T12:03:17.546847067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt5l4,Uid:bb9ce353-9364-48a8-8b98-45115ba0dad6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.548983 kubelet[3563]: E0117 12:03:17.548808 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.550276 kubelet[3563]: E0117 12:03:17.548888 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wt5l4" Jan 17 12:03:17.550276 kubelet[3563]: E0117 12:03:17.549245 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-wt5l4" Jan 17 12:03:17.550611 kubelet[3563]: E0117 12:03:17.550550 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-wt5l4_kube-system(bb9ce353-9364-48a8-8b98-45115ba0dad6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-wt5l4_kube-system(bb9ce353-9364-48a8-8b98-45115ba0dad6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wt5l4" podUID="bb9ce353-9364-48a8-8b98-45115ba0dad6" Jan 17 12:03:17.553658 containerd[2014]: time="2025-01-17T12:03:17.551961983Z" level=error msg="Failed to destroy network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.562000 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753-shm.mount: Deactivated successfully. Jan 17 12:03:17.564877 kubelet[3563]: I0117 12:03:17.563316 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:17.567482 containerd[2014]: time="2025-01-17T12:03:17.567419304Z" level=error msg="Failed to destroy network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.569349 containerd[2014]: time="2025-01-17T12:03:17.569288268Z" level=error msg="encountered an error cleaning up failed sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.569812 containerd[2014]: time="2025-01-17T12:03:17.569508708Z" level=error msg="encountered an error cleaning up failed sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.578315 containerd[2014]: time="2025-01-17T12:03:17.576243492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-2lj2c,Uid:3da31367-17d6-48fb-b3e8-b7aa30d9e927,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.578315 containerd[2014]: time="2025-01-17T12:03:17.572982528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-82x5r,Uid:e5afc712-523e-4ec9-a9bc-ea9f7fc8548e,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.578315 containerd[2014]: time="2025-01-17T12:03:17.574976400Z" level=info msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" Jan 17 12:03:17.578315 containerd[2014]: time="2025-01-17T12:03:17.577061484Z" level=info msg="Ensure that sandbox 8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48 in task-service has been cleanup successfully" Jan 17 12:03:17.576330 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a-shm.mount: Deactivated successfully. Jan 17 12:03:17.581996 kubelet[3563]: E0117 12:03:17.581682 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.586036 kubelet[3563]: E0117 12:03:17.582114 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" Jan 17 12:03:17.586036 kubelet[3563]: E0117 12:03:17.582597 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" Jan 17 12:03:17.586036 kubelet[3563]: E0117 12:03:17.582352 3563 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.586036 kubelet[3563]: E0117 12:03:17.582784 3563 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" Jan 17 12:03:17.586401 kubelet[3563]: E0117 12:03:17.582826 3563 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" Jan 17 12:03:17.586401 kubelet[3563]: E0117 12:03:17.582887 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b75cd897-2lj2c_calico-apiserver(3da31367-17d6-48fb-b3e8-b7aa30d9e927)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b75cd897-2lj2c_calico-apiserver(3da31367-17d6-48fb-b3e8-b7aa30d9e927)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" podUID="3da31367-17d6-48fb-b3e8-b7aa30d9e927" Jan 17 12:03:17.586582 kubelet[3563]: E0117 12:03:17.583954 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9b75cd897-82x5r_calico-apiserver(e5afc712-523e-4ec9-a9bc-ea9f7fc8548e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9b75cd897-82x5r_calico-apiserver(e5afc712-523e-4ec9-a9bc-ea9f7fc8548e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" podUID="e5afc712-523e-4ec9-a9bc-ea9f7fc8548e" Jan 17 12:03:17.587641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145-shm.mount: Deactivated successfully. 
Jan 17 12:03:17.605129 containerd[2014]: time="2025-01-17T12:03:17.603489120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:03:17.662172 containerd[2014]: time="2025-01-17T12:03:17.662098296Z" level=error msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" failed" error="failed to destroy network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:17.662750 kubelet[3563]: E0117 12:03:17.662682 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:17.662928 kubelet[3563]: E0117 12:03:17.662812 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48"} Jan 17 12:03:17.663053 kubelet[3563]: E0117 12:03:17.662986 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:17.663195 kubelet[3563]: E0117 12:03:17.663073 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"baedf7e0-e7c5-4c51-97a9-7d0b1c402648\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4b886" podUID="baedf7e0-e7c5-4c51-97a9-7d0b1c402648" Jan 17 12:03:18.601369 kubelet[3563]: I0117 12:03:18.601319 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:18.603004 containerd[2014]: time="2025-01-17T12:03:18.602948581Z" level=info msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" Jan 17 12:03:18.603607 containerd[2014]: time="2025-01-17T12:03:18.603234481Z" level=info msg="Ensure that sandbox 3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a in task-service has been cleanup successfully" Jan 17 12:03:18.607721 kubelet[3563]: I0117 12:03:18.607386 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:18.612068 containerd[2014]: time="2025-01-17T12:03:18.610764997Z" level=info msg="StopPodSandbox 
for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" Jan 17 12:03:18.612068 containerd[2014]: time="2025-01-17T12:03:18.611115277Z" level=info msg="Ensure that sandbox c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed in task-service has been cleanup successfully" Jan 17 12:03:18.616153 kubelet[3563]: I0117 12:03:18.615244 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:18.618681 containerd[2014]: time="2025-01-17T12:03:18.618618757Z" level=info msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" Jan 17 12:03:18.619757 containerd[2014]: time="2025-01-17T12:03:18.619008577Z" level=info msg="Ensure that sandbox 18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753 in task-service has been cleanup successfully" Jan 17 12:03:18.625845 kubelet[3563]: I0117 12:03:18.625435 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:18.631078 containerd[2014]: time="2025-01-17T12:03:18.630043837Z" level=info msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" Jan 17 12:03:18.634797 containerd[2014]: time="2025-01-17T12:03:18.634716325Z" level=info msg="Ensure that sandbox deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145 in task-service has been cleanup successfully" Jan 17 12:03:18.641794 kubelet[3563]: I0117 12:03:18.641735 3563 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:18.645661 containerd[2014]: time="2025-01-17T12:03:18.645595837Z" level=info msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" Jan 17 12:03:18.645966 containerd[2014]: time="2025-01-17T12:03:18.645897109Z" level=info msg="Ensure that sandbox ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1 in task-service has been cleanup successfully" Jan 17 12:03:18.765461 containerd[2014]: time="2025-01-17T12:03:18.765395461Z" level=error msg="StopPodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" failed" error="failed to destroy network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:18.766303 kubelet[3563]: E0117 12:03:18.766000 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:18.766303 kubelet[3563]: E0117 12:03:18.766092 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed"} Jan 17 12:03:18.766303 kubelet[3563]: E0117 12:03:18.766174 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult 
failed" err="failed to \"KillPodSandbox\" for \"2d430a54-0d30-40c8-9b81-8b7e74c0d58c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:18.766303 kubelet[3563]: E0117 12:03:18.766239 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d430a54-0d30-40c8-9b81-8b7e74c0d58c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" podUID="2d430a54-0d30-40c8-9b81-8b7e74c0d58c" Jan 17 12:03:18.774364 containerd[2014]: time="2025-01-17T12:03:18.773443550Z" level=error msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" failed" error="failed to destroy network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:18.774517 kubelet[3563]: E0117 12:03:18.774078 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:18.774517 kubelet[3563]: E0117 12:03:18.774144 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a"} Jan 17 12:03:18.774517 kubelet[3563]: E0117 12:03:18.774207 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:18.774517 kubelet[3563]: E0117 12:03:18.774248 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" 
podUID="e5afc712-523e-4ec9-a9bc-ea9f7fc8548e" Jan 17 12:03:18.790038 containerd[2014]: time="2025-01-17T12:03:18.789971246Z" level=error msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" failed" error="failed to destroy network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:18.790808 kubelet[3563]: E0117 12:03:18.790553 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:18.790808 kubelet[3563]: E0117 12:03:18.790727 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753"} Jan 17 12:03:18.791537 kubelet[3563]: E0117 12:03:18.791096 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bb9ce353-9364-48a8-8b98-45115ba0dad6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:18.791659 containerd[2014]: time="2025-01-17T12:03:18.790947998Z" level=error msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" failed" error="failed to destroy network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:18.792412 kubelet[3563]: E0117 12:03:18.792014 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bb9ce353-9364-48a8-8b98-45115ba0dad6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-wt5l4" podUID="bb9ce353-9364-48a8-8b98-45115ba0dad6" Jan 17 12:03:18.792412 kubelet[3563]: E0117 12:03:18.791950 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:18.792412 
kubelet[3563]: E0117 12:03:18.792149 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145"} Jan 17 12:03:18.792412 kubelet[3563]: E0117 12:03:18.792221 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3da31367-17d6-48fb-b3e8-b7aa30d9e927\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:18.792800 kubelet[3563]: E0117 12:03:18.792327 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3da31367-17d6-48fb-b3e8-b7aa30d9e927\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" podUID="3da31367-17d6-48fb-b3e8-b7aa30d9e927" Jan 17 12:03:18.806166 containerd[2014]: time="2025-01-17T12:03:18.806096402Z" level=error msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" failed" error="failed to destroy network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:03:18.806711 kubelet[3563]: E0117 12:03:18.806458 3563 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:18.806711 kubelet[3563]: E0117 12:03:18.806561 3563 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1"} Jan 17 12:03:18.806711 kubelet[3563]: E0117 12:03:18.806654 3563 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"19beb928-574a-4f64-bc0a-c826b89636c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:03:18.807069 kubelet[3563]: E0117 12:03:18.806728 3563 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"19beb928-574a-4f64-bc0a-c826b89636c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sblcc" podUID="19beb928-574a-4f64-bc0a-c826b89636c5" Jan 17 12:03:24.115458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145900957.mount: Deactivated successfully. Jan 17 12:03:24.190691 containerd[2014]: time="2025-01-17T12:03:24.188596024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:24.190691 containerd[2014]: time="2025-01-17T12:03:24.190386220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:03:24.191585 containerd[2014]: time="2025-01-17T12:03:24.191537596Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:24.198124 containerd[2014]: time="2025-01-17T12:03:24.198042628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:24.200252 containerd[2014]: time="2025-01-17T12:03:24.200189764Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.596515124s" Jan 17 12:03:24.200252 containerd[2014]: time="2025-01-17T12:03:24.200254012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:03:24.242732 containerd[2014]: time="2025-01-17T12:03:24.242369057Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:03:24.269851 containerd[2014]: time="2025-01-17T12:03:24.269206997Z" level=info msg="CreateContainer within sandbox \"38665ea22442508d82a7fe0640c2f0558e313786cfa73b139e3a52cc2e72a1cd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe\"" Jan 17 12:03:24.271462 containerd[2014]: time="2025-01-17T12:03:24.271369169Z" level=info msg="StartContainer for \"25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe\"" Jan 17 12:03:24.319280 systemd[1]: Started cri-containerd-25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe.scope - libcontainer container 25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe. Jan 17 12:03:24.386242 containerd[2014]: time="2025-01-17T12:03:24.384900101Z" level=info msg="StartContainer for \"25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe\" returns successfully" Jan 17 12:03:24.536444 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:03:24.536632 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 17 12:03:25.708581 systemd[1]: run-containerd-runc-k8s.io-25c8828f5f9571ad085bc7372e40d9d7f3ea81a5913b64ae0c16c6bec80266fe-runc.n7Q9Pf.mount: Deactivated successfully. Jan 17 12:03:26.767981 kernel: bpftool[4923]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:03:27.086647 systemd-networkd[1839]: vxlan.calico: Link UP Jan 17 12:03:27.089527 (udev-worker)[4731]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:27.091428 systemd-networkd[1839]: vxlan.calico: Gained carrier Jan 17 12:03:27.168712 (udev-worker)[4729]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:27.170101 (udev-worker)[4732]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:27.510178 systemd[1]: Started sshd@7-172.31.20.160:22-139.178.68.195:43486.service - OpenSSH per-connection server daemon (139.178.68.195:43486). Jan 17 12:03:27.698843 sshd[4965]: Accepted publickey for core from 139.178.68.195 port 43486 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:27.702597 sshd[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:27.717286 systemd-logind[2000]: New session 8 of user core. Jan 17 12:03:27.724242 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:03:27.992610 sshd[4965]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:27.999090 systemd-logind[2000]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:03:28.000184 systemd[1]: sshd@7-172.31.20.160:22-139.178.68.195:43486.service: Deactivated successfully. Jan 17 12:03:28.004018 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:03:28.006118 systemd-logind[2000]: Removed session 8. Jan 17 12:03:28.701413 systemd-networkd[1839]: vxlan.calico: Gained IPv6LL Jan 17 12:03:30.275774 containerd[2014]: time="2025-01-17T12:03:30.275675939Z" level=info msg="StopPodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" Jan 17 12:03:30.415838 kubelet[3563]: I0117 12:03:30.413482 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kdgsc" podStartSLOduration=6.922241205 podStartE2EDuration="24.413460695s" podCreationTimestamp="2025-01-17 12:03:06 +0000 UTC" firstStartedPulling="2025-01-17 12:03:06.710803202 +0000 UTC m=+27.683571739" lastFinishedPulling="2025-01-17 12:03:24.202022692 +0000 UTC m=+45.174791229" observedRunningTime="2025-01-17 12:03:24.765022399 +0000 UTC m=+45.737790924" watchObservedRunningTime="2025-01-17 12:03:30.413460695 +0000 UTC m=+51.386229244" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.414 [INFO][5023] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.416 [INFO][5023] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" iface="eth0" netns="/var/run/netns/cni-e66f7fdb-ed08-3377-58a8-0d7214044426" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.420 [INFO][5023] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" iface="eth0" netns="/var/run/netns/cni-e66f7fdb-ed08-3377-58a8-0d7214044426" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.423 [INFO][5023] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" iface="eth0" netns="/var/run/netns/cni-e66f7fdb-ed08-3377-58a8-0d7214044426" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.423 [INFO][5023] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.423 [INFO][5023] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.473 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.473 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.474 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.486 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.486 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.489 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:30.498864 containerd[2014]: 2025-01-17 12:03:30.495 [INFO][5023] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:30.502602 containerd[2014]: time="2025-01-17T12:03:30.500050008Z" level=info msg="TearDown network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" successfully" Jan 17 12:03:30.502602 containerd[2014]: time="2025-01-17T12:03:30.500105700Z" level=info msg="StopPodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" returns successfully" Jan 17 12:03:30.503811 containerd[2014]: time="2025-01-17T12:03:30.503735592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd8d9fff6-r5hwg,Uid:2d430a54-0d30-40c8-9b81-8b7e74c0d58c,Namespace:calico-system,Attempt:1,}" Jan 17 12:03:30.506075 systemd[1]: run-netns-cni\x2de66f7fdb\x2ded08\x2d3377\x2d58a8\x2d0d7214044426.mount: Deactivated successfully. 
Jan 17 12:03:30.749992 systemd-networkd[1839]: cali36ab48825b3: Link UP Jan 17 12:03:30.751861 systemd-networkd[1839]: cali36ab48825b3: Gained carrier Jan 17 12:03:30.764034 (udev-worker)[5056]: Network interface NamePolicy= disabled on kernel command line. Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.599 [INFO][5037] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0 calico-kube-controllers-bd8d9fff6- calico-system 2d430a54-0d30-40c8-9b81-8b7e74c0d58c 839 0 2025-01-17 12:03:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bd8d9fff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-160 calico-kube-controllers-bd8d9fff6-r5hwg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali36ab48825b3 [] []}} ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.600 [INFO][5037] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.668 [INFO][5048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" HandleID="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.687 [INFO][5048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" HandleID="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-160", "pod":"calico-kube-controllers-bd8d9fff6-r5hwg", "timestamp":"2025-01-17 12:03:30.668497045 +0000 UTC"}, Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.687 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.687 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.687 [INFO][5048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.690 [INFO][5048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.697 [INFO][5048] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.708 [INFO][5048] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.713 [INFO][5048] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.716 [INFO][5048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.717 [INFO][5048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.720 [INFO][5048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.726 [INFO][5048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.736 [INFO][5048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.193/26] block=192.168.118.192/26 handle="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.736 [INFO][5048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.193/26] handle="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" host="ip-172-31-20-160" Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.736 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:30.776352 containerd[2014]: 2025-01-17 12:03:30.736 [INFO][5048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.193/26] IPv6=[] ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" HandleID="k8s-pod-network.18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.741 [INFO][5037] cni-plugin/k8s.go 386: Populated endpoint ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0", GenerateName:"calico-kube-controllers-bd8d9fff6-", Namespace:"calico-system", SelfLink:"", UID:"2d430a54-0d30-40c8-9b81-8b7e74c0d58c", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd8d9fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"calico-kube-controllers-bd8d9fff6-r5hwg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36ab48825b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.741 [INFO][5037] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.193/32] ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.741 [INFO][5037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36ab48825b3 ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.753 [INFO][5037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.753 [INFO][5037] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0", GenerateName:"calico-kube-controllers-bd8d9fff6-", Namespace:"calico-system", SelfLink:"", UID:"2d430a54-0d30-40c8-9b81-8b7e74c0d58c", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd8d9fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f", Pod:"calico-kube-controllers-bd8d9fff6-r5hwg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36ab48825b3", MAC:"8a:93:25:aa:72:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:30.781643 containerd[2014]: 2025-01-17 12:03:30.770 [INFO][5037] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f" Namespace="calico-system" Pod="calico-kube-controllers-bd8d9fff6-r5hwg" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:30.832693 containerd[2014]: time="2025-01-17T12:03:30.832180177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:30.832693 containerd[2014]: time="2025-01-17T12:03:30.832316653Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:30.832693 containerd[2014]: time="2025-01-17T12:03:30.832354309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:30.832693 containerd[2014]: time="2025-01-17T12:03:30.832516321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:30.882315 systemd[1]: Started cri-containerd-18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f.scope - libcontainer container 18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f. 
Jan 17 12:03:30.946376 containerd[2014]: time="2025-01-17T12:03:30.946285070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bd8d9fff6-r5hwg,Uid:2d430a54-0d30-40c8-9b81-8b7e74c0d58c,Namespace:calico-system,Attempt:1,} returns sandbox id \"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f\"" Jan 17 12:03:30.955415 containerd[2014]: time="2025-01-17T12:03:30.955356194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:03:31.276849 containerd[2014]: time="2025-01-17T12:03:31.276366252Z" level=info msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.373 [INFO][5122] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.374 [INFO][5122] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" iface="eth0" netns="/var/run/netns/cni-9b08d0e3-01f7-1e2b-e635-44d1e5a5db89" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.375 [INFO][5122] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" iface="eth0" netns="/var/run/netns/cni-9b08d0e3-01f7-1e2b-e635-44d1e5a5db89" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.377 [INFO][5122] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" iface="eth0" netns="/var/run/netns/cni-9b08d0e3-01f7-1e2b-e635-44d1e5a5db89" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.377 [INFO][5122] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.377 [INFO][5122] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.422 [INFO][5128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.422 [INFO][5128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.422 [INFO][5128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.434 [WARNING][5128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.435 [INFO][5128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.437 [INFO][5128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:31.443823 containerd[2014]: 2025-01-17 12:03:31.440 [INFO][5122] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:31.445264 containerd[2014]: time="2025-01-17T12:03:31.444128772Z" level=info msg="TearDown network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" successfully" Jan 17 12:03:31.445264 containerd[2014]: time="2025-01-17T12:03:31.444169824Z" level=info msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" returns successfully" Jan 17 12:03:31.445970 containerd[2014]: time="2025-01-17T12:03:31.445477464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-2lj2c,Uid:3da31367-17d6-48fb-b3e8-b7aa30d9e927,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:03:31.508894 systemd[1]: run-netns-cni\x2d9b08d0e3\x2d01f7\x2d1e2b\x2de635\x2d44d1e5a5db89.mount: Deactivated successfully. Jan 17 12:03:31.678050 systemd-networkd[1839]: cali90b665bfca7: Link UP Jan 17 12:03:31.679826 systemd-networkd[1839]: cali90b665bfca7: Gained carrier Jan 17 12:03:31.680846 (udev-worker)[5068]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.541 [INFO][5135] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0 calico-apiserver-9b75cd897- calico-apiserver 3da31367-17d6-48fb-b3e8-b7aa30d9e927 850 0 2025-01-17 12:03:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b75cd897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-160 calico-apiserver-9b75cd897-2lj2c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali90b665bfca7 [] []}} ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.541 [INFO][5135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.602 [INFO][5145] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" HandleID="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.620 [INFO][5145] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" HandleID="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b1300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-160", "pod":"calico-apiserver-9b75cd897-2lj2c", "timestamp":"2025-01-17 12:03:31.602167117 +0000 UTC"}, Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.620 [INFO][5145] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.620 [INFO][5145] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.620 [INFO][5145] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.624 [INFO][5145] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.633 [INFO][5145] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.642 [INFO][5145] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.645 [INFO][5145] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.649 [INFO][5145] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.649 [INFO][5145] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.651 [INFO][5145] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9 Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.658 [INFO][5145] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.669 [INFO][5145] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.194/26] block=192.168.118.192/26 handle="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.670 [INFO][5145] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.194/26] handle="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" host="ip-172-31-20-160" Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.670 [INFO][5145] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:31.717800 containerd[2014]: 2025-01-17 12:03:31.670 [INFO][5145] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.194/26] IPv6=[] ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" HandleID="k8s-pod-network.f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.673 [INFO][5135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da31367-17d6-48fb-b3e8-b7aa30d9e927", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"calico-apiserver-9b75cd897-2lj2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b665bfca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.674 [INFO][5135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.194/32] ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.674 [INFO][5135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90b665bfca7 ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.680 [INFO][5135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.681 [INFO][5135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da31367-17d6-48fb-b3e8-b7aa30d9e927", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9", Pod:"calico-apiserver-9b75cd897-2lj2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b665bfca7", MAC:"1a:f5:9d:ad:b6:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:31.721842 containerd[2014]: 2025-01-17 12:03:31.709 [INFO][5135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-2lj2c" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:31.765473 containerd[2014]: time="2025-01-17T12:03:31.765115298Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:31.765473 containerd[2014]: time="2025-01-17T12:03:31.765241526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:31.765473 containerd[2014]: time="2025-01-17T12:03:31.765279410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:31.765473 containerd[2014]: time="2025-01-17T12:03:31.765469742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:31.813288 systemd[1]: Started cri-containerd-f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9.scope - libcontainer container f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9. 
Jan 17 12:03:31.890590 containerd[2014]: time="2025-01-17T12:03:31.890523759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-2lj2c,Uid:3da31367-17d6-48fb-b3e8-b7aa30d9e927,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9\"" Jan 17 12:03:32.284963 containerd[2014]: time="2025-01-17T12:03:32.282797101Z" level=info msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" Jan 17 12:03:32.291411 containerd[2014]: time="2025-01-17T12:03:32.291358093Z" level=info msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" Jan 17 12:03:32.299996 containerd[2014]: time="2025-01-17T12:03:32.291357997Z" level=info msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" Jan 17 12:03:32.307445 containerd[2014]: time="2025-01-17T12:03:32.304990657Z" level=info msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" Jan 17 12:03:32.544131 systemd-networkd[1839]: cali36ab48825b3: Gained IPv6LL Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.528 [INFO][5244] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.532 [INFO][5244] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" iface="eth0" netns="/var/run/netns/cni-075014b2-759b-8836-fc8d-34eba8ebb79e" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.534 [INFO][5244] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" iface="eth0" netns="/var/run/netns/cni-075014b2-759b-8836-fc8d-34eba8ebb79e" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.535 [INFO][5244] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" iface="eth0" netns="/var/run/netns/cni-075014b2-759b-8836-fc8d-34eba8ebb79e" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.535 [INFO][5244] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.535 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.683 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.684 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.684 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.727 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.727 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.751 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:32.765157 containerd[2014]: 2025-01-17 12:03:32.761 [INFO][5244] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:32.769101 containerd[2014]: time="2025-01-17T12:03:32.767098095Z" level=info msg="TearDown network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" successfully" Jan 17 12:03:32.769101 containerd[2014]: time="2025-01-17T12:03:32.767179083Z" level=info msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" returns successfully" Jan 17 12:03:32.773073 containerd[2014]: time="2025-01-17T12:03:32.772442559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b886,Uid:baedf7e0-e7c5-4c51-97a9-7d0b1c402648,Namespace:calico-system,Attempt:1,}" Jan 17 12:03:32.772566 systemd[1]: run-netns-cni\x2d075014b2\x2d759b\x2d8836\x2dfc8d\x2d34eba8ebb79e.mount: Deactivated successfully. Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.594 [INFO][5266] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.594 [INFO][5266] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" iface="eth0" netns="/var/run/netns/cni-a8fc32ba-88ad-b161-80cd-a5b483218bee" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.596 [INFO][5266] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" iface="eth0" netns="/var/run/netns/cni-a8fc32ba-88ad-b161-80cd-a5b483218bee" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.600 [INFO][5266] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" iface="eth0" netns="/var/run/netns/cni-a8fc32ba-88ad-b161-80cd-a5b483218bee" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.600 [INFO][5266] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.600 [INFO][5266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.750 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.752 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.752 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.791 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.792 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.801 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:32.819292 containerd[2014]: 2025-01-17 12:03:32.816 [INFO][5266] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:32.822108 containerd[2014]: time="2025-01-17T12:03:32.820450767Z" level=info msg="TearDown network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" successfully" Jan 17 12:03:32.822108 containerd[2014]: time="2025-01-17T12:03:32.820509039Z" level=info msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" returns successfully" Jan 17 12:03:32.827308 containerd[2014]: time="2025-01-17T12:03:32.824667255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-82x5r,Uid:e5afc712-523e-4ec9-a9bc-ea9f7fc8548e,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:03:32.829969 systemd[1]: run-netns-cni\x2da8fc32ba\x2d88ad\x2db161\x2d80cd\x2da5b483218bee.mount: Deactivated successfully. Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.634 [INFO][5254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.634 [INFO][5254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" iface="eth0" netns="/var/run/netns/cni-b71db02e-9659-14ec-4b45-3ad107a73e6b" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.635 [INFO][5254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" iface="eth0" netns="/var/run/netns/cni-b71db02e-9659-14ec-4b45-3ad107a73e6b" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.636 [INFO][5254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" iface="eth0" netns="/var/run/netns/cni-b71db02e-9659-14ec-4b45-3ad107a73e6b" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.636 [INFO][5254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.636 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.790 [INFO][5289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.791 [INFO][5289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.803 [INFO][5289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.847 [WARNING][5289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.847 [INFO][5289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.851 [INFO][5289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:32.885313 containerd[2014]: 2025-01-17 12:03:32.861 [INFO][5254] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:32.889488 containerd[2014]: time="2025-01-17T12:03:32.885975220Z" level=info msg="TearDown network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" successfully" Jan 17 12:03:32.889488 containerd[2014]: time="2025-01-17T12:03:32.886019656Z" level=info msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" returns successfully" Jan 17 12:03:32.889488 containerd[2014]: time="2025-01-17T12:03:32.887832388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sblcc,Uid:19beb928-574a-4f64-bc0a-c826b89636c5,Namespace:kube-system,Attempt:1,}" Jan 17 12:03:32.898600 systemd[1]: run-netns-cni\x2db71db02e\x2d9659\x2d14ec\x2d4b45\x2d3ad107a73e6b.mount: Deactivated successfully. Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.619 [INFO][5261] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.619 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" iface="eth0" netns="/var/run/netns/cni-f6fb3f45-2c90-e27f-605f-92b78347f953" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.620 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" iface="eth0" netns="/var/run/netns/cni-f6fb3f45-2c90-e27f-605f-92b78347f953" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.621 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" iface="eth0" netns="/var/run/netns/cni-f6fb3f45-2c90-e27f-605f-92b78347f953" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.621 [INFO][5261] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.622 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.839 [INFO][5288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.840 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.851 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.911 [WARNING][5288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.911 [INFO][5288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.917 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:32.942559 containerd[2014]: 2025-01-17 12:03:32.927 [INFO][5261] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:32.944463 containerd[2014]: time="2025-01-17T12:03:32.944291296Z" level=info msg="TearDown network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" successfully" Jan 17 12:03:32.944463 containerd[2014]: time="2025-01-17T12:03:32.944348008Z" level=info msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" returns successfully" Jan 17 12:03:32.948827 containerd[2014]: time="2025-01-17T12:03:32.948566860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt5l4,Uid:bb9ce353-9364-48a8-8b98-45115ba0dad6,Namespace:kube-system,Attempt:1,}" Jan 17 12:03:33.043757 systemd[1]: Started sshd@8-172.31.20.160:22-139.178.68.195:43500.service - OpenSSH per-connection server daemon (139.178.68.195:43500). Jan 17 12:03:33.181901 systemd-networkd[1839]: cali90b665bfca7: Gained IPv6LL Jan 17 12:03:33.248412 sshd[5358]: Accepted publickey for core from 139.178.68.195 port 43500 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:33.254992 sshd[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:33.279671 systemd-logind[2000]: New session 9 of user core. Jan 17 12:03:33.281228 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 17 12:03:33.471084 systemd-networkd[1839]: cali36712dd100c: Link UP Jan 17 12:03:33.473316 systemd-networkd[1839]: cali36712dd100c: Gained carrier Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:32.983 [INFO][5314] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0 csi-node-driver- calico-system baedf7e0-e7c5-4c51-97a9-7d0b1c402648 865 0 2025-01-17 12:03:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-160 csi-node-driver-4b886 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali36712dd100c [] []}} ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:32.984 [INFO][5314] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.220 [INFO][5360] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" HandleID="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.262 [INFO][5360] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" HandleID="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000274120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-160", "pod":"csi-node-driver-4b886", "timestamp":"2025-01-17 12:03:33.219992485 +0000 UTC"}, Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.262 [INFO][5360] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.266 [INFO][5360] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.266 [INFO][5360] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.283 [INFO][5360] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.321 [INFO][5360] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.348 [INFO][5360] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.353 [INFO][5360] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.368 [INFO][5360] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.368 [INFO][5360] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.383 [INFO][5360] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419 Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.426 [INFO][5360] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.444 [INFO][5360] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.195/26] block=192.168.118.192/26 handle="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.444 [INFO][5360] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.195/26] handle="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" host="ip-172-31-20-160" Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.444 [INFO][5360] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:33.541659 containerd[2014]: 2025-01-17 12:03:33.444 [INFO][5360] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.195/26] IPv6=[] ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" HandleID="k8s-pod-network.0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.459 [INFO][5314] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"baedf7e0-e7c5-4c51-97a9-7d0b1c402648", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"csi-node-driver-4b886", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36712dd100c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.459 [INFO][5314] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.195/32] ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.459 [INFO][5314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36712dd100c ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.483 [INFO][5314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.485 [INFO][5314] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" 
Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"baedf7e0-e7c5-4c51-97a9-7d0b1c402648", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419", Pod:"csi-node-driver-4b886", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36712dd100c", MAC:"82:11:5e:46:74:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:33.563377 containerd[2014]: 2025-01-17 12:03:33.534 [INFO][5314] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419" Namespace="calico-system" Pod="csi-node-driver-4b886" WorkloadEndpoint="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:33.655272 systemd-networkd[1839]: cali86d79d49db2: Link UP Jan 17 12:03:33.660677 systemd-networkd[1839]: cali86d79d49db2: Gained carrier Jan 17 12:03:33.722250 containerd[2014]: time="2025-01-17T12:03:33.715085752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:33.722250 containerd[2014]: time="2025-01-17T12:03:33.716079172Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:33.722250 containerd[2014]: time="2025-01-17T12:03:33.716117860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:33.722250 containerd[2014]: time="2025-01-17T12:03:33.719113624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.119 [INFO][5336] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0 coredns-7db6d8ff4d- kube-system 19beb928-574a-4f64-bc0a-c826b89636c5 869 0 2025-01-17 12:02:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-160 coredns-7db6d8ff4d-sblcc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86d79d49db2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.120 [INFO][5336] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.285 [INFO][5374] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" HandleID="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.359 [INFO][5374] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" HandleID="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332300), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-160", "pod":"coredns-7db6d8ff4d-sblcc", "timestamp":"2025-01-17 12:03:33.285333014 +0000 UTC"}, Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.360 [INFO][5374] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.444 [INFO][5374] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.445 [INFO][5374] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.458 [INFO][5374] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.498 [INFO][5374] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.515 [INFO][5374] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.524 [INFO][5374] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.535 [INFO][5374] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.536 [INFO][5374] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.563 [INFO][5374] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.582 [INFO][5374] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.614 [INFO][5374] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.196/26] block=192.168.118.192/26 handle="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.615 [INFO][5374] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.196/26] handle="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" host="ip-172-31-20-160" Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.615 [INFO][5374] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:33.744975 containerd[2014]: 2025-01-17 12:03:33.615 [INFO][5374] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.196/26] IPv6=[] ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" HandleID="k8s-pod-network.4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.624 [INFO][5336] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19beb928-574a-4f64-bc0a-c826b89636c5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"coredns-7db6d8ff4d-sblcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d79d49db2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.626 [INFO][5336] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.196/32] ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.627 [INFO][5336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86d79d49db2 ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.665 [INFO][5336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" 
WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.676 [INFO][5336] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19beb928-574a-4f64-bc0a-c826b89636c5", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a", Pod:"coredns-7db6d8ff4d-sblcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d79d49db2", MAC:"f6:7c:7f:e5:79:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:33.748672 containerd[2014]: 2025-01-17 12:03:33.730 [INFO][5336] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sblcc" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:33.763831 sshd[5358]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:33.796326 systemd[1]: run-netns-cni\x2df6fb3f45\x2d2c90\x2de27f\x2d605f\x2d92b78347f953.mount: Deactivated successfully. Jan 17 12:03:33.802788 systemd[1]: sshd@8-172.31.20.160:22-139.178.68.195:43500.service: Deactivated successfully. Jan 17 12:03:33.810253 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:03:33.824702 systemd-logind[2000]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:03:33.835776 systemd-logind[2000]: Removed session 9. Jan 17 12:03:33.928503 systemd[1]: Started cri-containerd-0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419.scope - libcontainer container 0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419. 
Jan 17 12:03:33.939753 systemd-networkd[1839]: calie2f137a5f48: Link UP Jan 17 12:03:33.948041 systemd-networkd[1839]: calie2f137a5f48: Gained carrier Jan 17 12:03:34.024283 containerd[2014]: time="2025-01-17T12:03:34.013877869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:34.024283 containerd[2014]: time="2025-01-17T12:03:34.016548937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:34.024283 containerd[2014]: time="2025-01-17T12:03:34.016971145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.024283 containerd[2014]: time="2025-01-17T12:03:34.017804929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.066803 systemd-networkd[1839]: cali4ddbdb62e56: Link UP Jan 17 12:03:34.070844 systemd-networkd[1839]: cali4ddbdb62e56: Gained carrier Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.078 [INFO][5322] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0 calico-apiserver-9b75cd897- calico-apiserver e5afc712-523e-4ec9-a9bc-ea9f7fc8548e 866 0 2025-01-17 12:03:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9b75cd897 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-160 calico-apiserver-9b75cd897-82x5r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie2f137a5f48 [] []}} ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.080 [INFO][5322] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.328 [INFO][5370] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" HandleID="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.413 [INFO][5370] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" HandleID="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a3820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-160", "pod":"calico-apiserver-9b75cd897-82x5r", "timestamp":"2025-01-17 12:03:33.32895791 +0000 UTC"}, 
Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.413 [INFO][5370] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.615 [INFO][5370] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.615 [INFO][5370] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.621 [INFO][5370] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.666 [INFO][5370] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.739 [INFO][5370] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.750 [INFO][5370] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.792 [INFO][5370] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.802 [INFO][5370] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.830 [INFO][5370] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839 Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.843 [INFO][5370] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.865 [INFO][5370] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.197/26] block=192.168.118.192/26 handle="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.867 [INFO][5370] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.197/26] handle="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" host="ip-172-31-20-160" Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.868 [INFO][5370] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:34.093333 containerd[2014]: 2025-01-17 12:03:33.868 [INFO][5370] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.197/26] IPv6=[] ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" HandleID="k8s-pod-network.e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:33.889 [INFO][5322] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"calico-apiserver-9b75cd897-82x5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f137a5f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:33.890 [INFO][5322] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.197/32] ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:33.890 [INFO][5322] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2f137a5f48 ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:33.949 [INFO][5322] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:33.966 [INFO][5322] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839", Pod:"calico-apiserver-9b75cd897-82x5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f137a5f48", MAC:"7e:9a:b8:7a:fa:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.095165 containerd[2014]: 2025-01-17 12:03:34.023 [INFO][5322] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839" Namespace="calico-apiserver" Pod="calico-apiserver-9b75cd897-82x5r" WorkloadEndpoint="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:34.140353 systemd[1]: Started cri-containerd-4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a.scope - libcontainer container 4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a. 
Jan 17 12:03:34.167506 containerd[2014]: time="2025-01-17T12:03:34.167234474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4b886,Uid:baedf7e0-e7c5-4c51-97a9-7d0b1c402648,Namespace:calico-system,Attempt:1,} returns sandbox id \"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419\"" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.243 [INFO][5344] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0 coredns-7db6d8ff4d- kube-system bb9ce353-9364-48a8-8b98-45115ba0dad6 867 0 2025-01-17 12:02:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-160 coredns-7db6d8ff4d-wt5l4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4ddbdb62e56 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.243 [INFO][5344] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.422 [INFO][5386] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" HandleID="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.481 [INFO][5386] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" HandleID="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000303b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-160", "pod":"coredns-7db6d8ff4d-wt5l4", "timestamp":"2025-01-17 12:03:33.422837258 +0000 UTC"}, Hostname:"ip-172-31-20-160", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.481 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.870 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.870 [INFO][5386] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-160' Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.876 [INFO][5386] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.891 [INFO][5386] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.931 [INFO][5386] ipam/ipam.go 489: Trying affinity for 192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.944 [INFO][5386] ipam/ipam.go 155: Attempting to load block cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.954 [INFO][5386] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.118.192/26 host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.954 [INFO][5386] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.118.192/26 handle="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.960 [INFO][5386] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3 Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:33.990 [INFO][5386] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.118.192/26 handle="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:34.025 [INFO][5386] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.118.198/26] block=192.168.118.192/26 handle="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:34.025 [INFO][5386] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.118.198/26] handle="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" host="ip-172-31-20-160" Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:34.025 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:03:34.178310 containerd[2014]: 2025-01-17 12:03:34.025 [INFO][5386] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.118.198/26] IPv6=[] ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" HandleID="k8s-pod-network.d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.044 [INFO][5344] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb9ce353-9364-48a8-8b98-45115ba0dad6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"", Pod:"coredns-7db6d8ff4d-wt5l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ddbdb62e56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.044 [INFO][5344] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.118.198/32] ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.044 [INFO][5344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ddbdb62e56 ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.087 [INFO][5344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" 
WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.110 [INFO][5344] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb9ce353-9364-48a8-8b98-45115ba0dad6", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3", Pod:"coredns-7db6d8ff4d-wt5l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ddbdb62e56", MAC:"2a:ed:d7:1b:92:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:34.179441 containerd[2014]: 2025-01-17 12:03:34.152 [INFO][5344] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3" Namespace="kube-system" Pod="coredns-7db6d8ff4d-wt5l4" WorkloadEndpoint="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:34.303692 containerd[2014]: time="2025-01-17T12:03:34.303430323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sblcc,Uid:19beb928-574a-4f64-bc0a-c826b89636c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a\"" Jan 17 12:03:34.319055 containerd[2014]: time="2025-01-17T12:03:34.318685935Z" level=info msg="CreateContainer within sandbox \"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:03:34.332188 containerd[2014]: time="2025-01-17T12:03:34.330423543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:34.332188 containerd[2014]: time="2025-01-17T12:03:34.330543447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:34.332188 containerd[2014]: time="2025-01-17T12:03:34.330582015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.332188 containerd[2014]: time="2025-01-17T12:03:34.330782499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.357802 containerd[2014]: time="2025-01-17T12:03:34.357399567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:03:34.357802 containerd[2014]: time="2025-01-17T12:03:34.357480087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:03:34.357802 containerd[2014]: time="2025-01-17T12:03:34.357505479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.359132 containerd[2014]: time="2025-01-17T12:03:34.358348395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:03:34.359399 containerd[2014]: time="2025-01-17T12:03:34.359328135Z" level=info msg="CreateContainer within sandbox \"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60c5947e910fe71d531547fa21742c67950991cc8424b0520ebeab9ae30fe84a\"" Jan 17 12:03:34.362164 containerd[2014]: time="2025-01-17T12:03:34.361341747Z" level=info msg="StartContainer for \"60c5947e910fe71d531547fa21742c67950991cc8424b0520ebeab9ae30fe84a\"" Jan 17 12:03:34.408726 systemd[1]: Started cri-containerd-e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839.scope - libcontainer container e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839. Jan 17 12:03:34.450377 systemd[1]: Started cri-containerd-d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3.scope - libcontainer container d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3. Jan 17 12:03:34.511503 systemd[1]: Started cri-containerd-60c5947e910fe71d531547fa21742c67950991cc8424b0520ebeab9ae30fe84a.scope - libcontainer container 60c5947e910fe71d531547fa21742c67950991cc8424b0520ebeab9ae30fe84a. 
Jan 17 12:03:34.590401 systemd-networkd[1839]: cali36712dd100c: Gained IPv6LL Jan 17 12:03:34.626161 containerd[2014]: time="2025-01-17T12:03:34.624700732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9b75cd897-82x5r,Uid:e5afc712-523e-4ec9-a9bc-ea9f7fc8548e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839\"" Jan 17 12:03:34.673423 containerd[2014]: time="2025-01-17T12:03:34.673246528Z" level=info msg="StartContainer for \"60c5947e910fe71d531547fa21742c67950991cc8424b0520ebeab9ae30fe84a\" returns successfully" Jan 17 12:03:34.683086 containerd[2014]: time="2025-01-17T12:03:34.682986929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-wt5l4,Uid:bb9ce353-9364-48a8-8b98-45115ba0dad6,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3\"" Jan 17 12:03:34.695641 containerd[2014]: time="2025-01-17T12:03:34.695128685Z" level=info msg="CreateContainer within sandbox \"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:03:34.738749 containerd[2014]: time="2025-01-17T12:03:34.738674501Z" level=info msg="CreateContainer within sandbox \"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19a628dc55178963a8500042c3c015e8c0e9654cc78a95bbd9d0e132a9591b25\"" Jan 17 12:03:34.740814 containerd[2014]: time="2025-01-17T12:03:34.740526797Z" level=info msg="StartContainer for \"19a628dc55178963a8500042c3c015e8c0e9654cc78a95bbd9d0e132a9591b25\"" Jan 17 12:03:34.848493 systemd-networkd[1839]: cali86d79d49db2: Gained IPv6LL Jan 17 12:03:34.882261 systemd[1]: Started cri-containerd-19a628dc55178963a8500042c3c015e8c0e9654cc78a95bbd9d0e132a9591b25.scope - libcontainer container 19a628dc55178963a8500042c3c015e8c0e9654cc78a95bbd9d0e132a9591b25. 
Jan 17 12:03:35.024222 kubelet[3563]: I0117 12:03:35.021893 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sblcc" podStartSLOduration=42.02187077 podStartE2EDuration="42.02187077s" podCreationTimestamp="2025-01-17 12:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:34.979678806 +0000 UTC m=+55.952447355" watchObservedRunningTime="2025-01-17 12:03:35.02187077 +0000 UTC m=+55.994639319" Jan 17 12:03:35.028827 containerd[2014]: time="2025-01-17T12:03:35.028619234Z" level=info msg="StartContainer for \"19a628dc55178963a8500042c3c015e8c0e9654cc78a95bbd9d0e132a9591b25\" returns successfully" Jan 17 12:03:35.421967 systemd-networkd[1839]: cali4ddbdb62e56: Gained IPv6LL Jan 17 12:03:35.742045 systemd-networkd[1839]: calie2f137a5f48: Gained IPv6LL Jan 17 12:03:35.915596 containerd[2014]: time="2025-01-17T12:03:35.915517939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:35.919654 containerd[2014]: time="2025-01-17T12:03:35.919086487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:03:35.924721 containerd[2014]: time="2025-01-17T12:03:35.922096915Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:35.931441 containerd[2014]: time="2025-01-17T12:03:35.931383043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:35.938442 containerd[2014]: time="2025-01-17T12:03:35.938268559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 4.982840713s" Jan 17 12:03:35.939558 containerd[2014]: time="2025-01-17T12:03:35.939030775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:03:35.947622 containerd[2014]: time="2025-01-17T12:03:35.947548951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:03:35.990042 containerd[2014]: time="2025-01-17T12:03:35.988512451Z" level=info msg="CreateContainer within sandbox \"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:03:36.032783 containerd[2014]: time="2025-01-17T12:03:36.032029635Z" level=info msg="CreateContainer within sandbox \"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7\"" Jan 17 12:03:36.037123 containerd[2014]: time="2025-01-17T12:03:36.034900263Z" level=info msg="StartContainer for 
\"72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7\"" Jan 17 12:03:36.122529 systemd[1]: Started cri-containerd-72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7.scope - libcontainer container 72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7. Jan 17 12:03:36.259809 containerd[2014]: time="2025-01-17T12:03:36.258552928Z" level=info msg="StartContainer for \"72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7\" returns successfully" Jan 17 12:03:36.810135 kubelet[3563]: I0117 12:03:36.810003 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-wt5l4" podStartSLOduration=43.809979463 podStartE2EDuration="43.809979463s" podCreationTimestamp="2025-01-17 12:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:03:35.997761343 +0000 UTC m=+56.970529916" watchObservedRunningTime="2025-01-17 12:03:36.809979463 +0000 UTC m=+57.782748060" Jan 17 12:03:37.005214 kubelet[3563]: I0117 12:03:37.003554 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-bd8d9fff6-r5hwg" podStartSLOduration=26.005713007 podStartE2EDuration="31.003531928s" podCreationTimestamp="2025-01-17 12:03:06 +0000 UTC" firstStartedPulling="2025-01-17 12:03:30.948771098 +0000 UTC m=+51.921539635" lastFinishedPulling="2025-01-17 12:03:35.946590019 +0000 UTC m=+56.919358556" observedRunningTime="2025-01-17 12:03:37.00032398 +0000 UTC m=+57.973092529" watchObservedRunningTime="2025-01-17 12:03:37.003531928 +0000 UTC m=+57.976300465" Jan 17 12:03:37.031407 systemd[1]: run-containerd-runc-k8s.io-72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7-runc.qlbbzY.mount: Deactivated successfully. 
Jan 17 12:03:38.420218 ntpd[1991]: Listen normally on 6 vxlan.calico 192.168.118.192:123 Jan 17 12:03:38.420598 ntpd[1991]: Listen normally on 7 vxlan.calico [fe80::646e:e1ff:fe78:ec4d%4]:123 Jan 17 12:03:38.420699 ntpd[1991]: Listen normally on 8 cali36ab48825b3 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 12:03:38.420772 ntpd[1991]: Listen normally on 9 cali90b665bfca7 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 17 12:03:38.420840 ntpd[1991]: Listen normally on 10 cali36712dd100c [fe80::ecee:eeff:feee:eeee%9]:123 Jan 17 12:03:38.420945 ntpd[1991]: Listen normally on 11 cali86d79d49db2 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 17 12:03:38.421021 ntpd[1991]: Listen normally on 12 calie2f137a5f48 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 12:03:38.421096 ntpd[1991]: Listen normally on 13 cali4ddbdb62e56 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 12:03:38.809480 systemd[1]: Started sshd@9-172.31.20.160:22-139.178.68.195:46550.service - OpenSSH per-connection server daemon (139.178.68.195:46550).
Jan 17 12:03:39.042700 sshd[5785]: Accepted publickey for core from 139.178.68.195 port 46550 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:39.045477 containerd[2014]: time="2025-01-17T12:03:39.045410742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:39.049475 containerd[2014]: time="2025-01-17T12:03:39.049264266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:03:39.050406 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:39.053686 containerd[2014]: time="2025-01-17T12:03:39.050877630Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:39.067839 containerd[2014]: time="2025-01-17T12:03:39.067580250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:39.070648 containerd[2014]: time="2025-01-17T12:03:39.070283862Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.122663787s" Jan 17 12:03:39.070648 containerd[2014]: time="2025-01-17T12:03:39.070356618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:03:39.072673 systemd-logind[2000]: New session 10 of user core. Jan 17 12:03:39.080315 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:03:39.081764 containerd[2014]: time="2025-01-17T12:03:39.077607474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:03:39.084342 containerd[2014]: time="2025-01-17T12:03:39.083887398Z" level=info msg="CreateContainer within sandbox \"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:03:39.126649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1597038480.mount: Deactivated successfully. Jan 17 12:03:39.132026 containerd[2014]: time="2025-01-17T12:03:39.131811679Z" level=info msg="CreateContainer within sandbox \"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6a81e072a628652582cb2dc80bdb0271db2b497bfb95da084921db6f5d2adc44\"" Jan 17 12:03:39.133017 containerd[2014]: time="2025-01-17T12:03:39.132744043Z" level=info msg="StartContainer for \"6a81e072a628652582cb2dc80bdb0271db2b497bfb95da084921db6f5d2adc44\"" Jan 17 12:03:39.243240 systemd[1]: Started cri-containerd-6a81e072a628652582cb2dc80bdb0271db2b497bfb95da084921db6f5d2adc44.scope - libcontainer container 6a81e072a628652582cb2dc80bdb0271db2b497bfb95da084921db6f5d2adc44. 
Jan 17 12:03:39.264009 containerd[2014]: time="2025-01-17T12:03:39.260868055Z" level=info msg="StopPodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" Jan 17 12:03:39.479741 containerd[2014]: time="2025-01-17T12:03:39.477835604Z" level=info msg="StartContainer for \"6a81e072a628652582cb2dc80bdb0271db2b497bfb95da084921db6f5d2adc44\" returns successfully" Jan 17 12:03:39.537589 sshd[5785]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.402 [WARNING][5836] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0", GenerateName:"calico-kube-controllers-bd8d9fff6-", Namespace:"calico-system", SelfLink:"", UID:"2d430a54-0d30-40c8-9b81-8b7e74c0d58c", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd8d9fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f", Pod:"calico-kube-controllers-bd8d9fff6-r5hwg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36ab48825b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.403 [INFO][5836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.403 [INFO][5836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" iface="eth0" netns="" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.403 [INFO][5836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.403 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.492 [INFO][5845] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.492 [INFO][5845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.492 [INFO][5845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.525 [WARNING][5845] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.525 [INFO][5845] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.530 [INFO][5845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:39.542165 containerd[2014]: 2025-01-17 12:03:39.533 [INFO][5836] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.544164 containerd[2014]: time="2025-01-17T12:03:39.542220285Z" level=info msg="TearDown network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" successfully" Jan 17 12:03:39.544164 containerd[2014]: time="2025-01-17T12:03:39.542259045Z" level=info msg="StopPodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" returns successfully" Jan 17 12:03:39.546897 containerd[2014]: time="2025-01-17T12:03:39.546816801Z" level=info msg="RemovePodSandbox for \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" Jan 17 12:03:39.547254 containerd[2014]: time="2025-01-17T12:03:39.547123209Z" level=info msg="Forcibly stopping sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\"" Jan 17 12:03:39.556297 systemd[1]: sshd@9-172.31.20.160:22-139.178.68.195:46550.service: Deactivated successfully. Jan 17 12:03:39.567364 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:03:39.572477 systemd-logind[2000]: Session 10 logged out. Waiting for processes to exit. 
Jan 17 12:03:39.605536 systemd[1]: Started sshd@10-172.31.20.160:22-139.178.68.195:46554.service - OpenSSH per-connection server daemon (139.178.68.195:46554). Jan 17 12:03:39.610740 systemd-logind[2000]: Removed session 10. Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.713 [WARNING][5877] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0", GenerateName:"calico-kube-controllers-bd8d9fff6-", Namespace:"calico-system", SelfLink:"", UID:"2d430a54-0d30-40c8-9b81-8b7e74c0d58c", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bd8d9fff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"18be1e9d55c2041510d4d2a54b1690c33395949875601bc6e4df843633a1323f", Pod:"calico-kube-controllers-bd8d9fff6-r5hwg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.118.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali36ab48825b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.713 [INFO][5877] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.713 [INFO][5877] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" iface="eth0" netns="" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.713 [INFO][5877] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.713 [INFO][5877] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.767 [INFO][5890] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.768 [INFO][5890] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.769 [INFO][5890] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.792 [WARNING][5890] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.792 [INFO][5890] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" HandleID="k8s-pod-network.c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Workload="ip--172--31--20--160-k8s-calico--kube--controllers--bd8d9fff6--r5hwg-eth0" Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.797 [INFO][5890] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:39.809961 containerd[2014]: 2025-01-17 12:03:39.800 [INFO][5877] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed" Jan 17 12:03:39.809961 containerd[2014]: time="2025-01-17T12:03:39.808392574Z" level=info msg="TearDown network for sandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" successfully" Jan 17 12:03:39.818180 sshd[5882]: Accepted publickey for core from 139.178.68.195 port 46554 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:39.821736 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:39.828581 containerd[2014]: time="2025-01-17T12:03:39.828509854Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:39.829015 containerd[2014]: time="2025-01-17T12:03:39.828848830Z" level=info msg="RemovePodSandbox \"c2028ab7a428e57ea2f6043a5f26bbc1a7dc517ee04b22b3ecde0d13c42221ed\" returns successfully" Jan 17 12:03:39.830821 containerd[2014]: time="2025-01-17T12:03:39.830256322Z" level=info msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" Jan 17 12:03:39.841473 systemd-logind[2000]: New session 11 of user core. Jan 17 12:03:39.844632 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:39.956 [WARNING][5910] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"baedf7e0-e7c5-4c51-97a9-7d0b1c402648", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419", Pod:"csi-node-driver-4b886", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36712dd100c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:39.958 [INFO][5910] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:39.958 [INFO][5910] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" iface="eth0" netns="" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:39.958 [INFO][5910] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:39.958 [INFO][5910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.140 [INFO][5920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.140 [INFO][5920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.140 [INFO][5920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.168 [WARNING][5920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.169 [INFO][5920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.173 [INFO][5920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:40.191638 containerd[2014]: 2025-01-17 12:03:40.182 [INFO][5910] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.191638 containerd[2014]: time="2025-01-17T12:03:40.191449652Z" level=info msg="TearDown network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" successfully" Jan 17 12:03:40.191638 containerd[2014]: time="2025-01-17T12:03:40.191487896Z" level=info msg="StopPodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" returns successfully" Jan 17 12:03:40.196125 containerd[2014]: time="2025-01-17T12:03:40.193692740Z" level=info msg="RemovePodSandbox for \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" Jan 17 12:03:40.196125 containerd[2014]: time="2025-01-17T12:03:40.193743968Z" level=info msg="Forcibly stopping sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\"" Jan 17 12:03:40.441299 sshd[5882]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:40.450621 systemd[1]: sshd@10-172.31.20.160:22-139.178.68.195:46554.service: Deactivated successfully. Jan 17 12:03:40.459350 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:03:40.488264 systemd-logind[2000]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:03:40.503170 systemd[1]: Started sshd@11-172.31.20.160:22-139.178.68.195:46566.service - OpenSSH per-connection server daemon (139.178.68.195:46566). Jan 17 12:03:40.506516 systemd-logind[2000]: Removed session 11. Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.359 [WARNING][5942] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"baedf7e0-e7c5-4c51-97a9-7d0b1c402648", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419", Pod:"csi-node-driver-4b886", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.118.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali36712dd100c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.359 [INFO][5942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.359 [INFO][5942] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" iface="eth0" netns="" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.359 [INFO][5942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.359 [INFO][5942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.474 [INFO][5948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.474 [INFO][5948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.474 [INFO][5948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.551 [WARNING][5948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.551 [INFO][5948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" HandleID="k8s-pod-network.8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Workload="ip--172--31--20--160-k8s-csi--node--driver--4b886-eth0" Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.555 [INFO][5948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:40.568971 containerd[2014]: 2025-01-17 12:03:40.559 [INFO][5942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48" Jan 17 12:03:40.569894 containerd[2014]: time="2025-01-17T12:03:40.568994410Z" level=info msg="TearDown network for sandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" successfully" Jan 17 12:03:40.576834 containerd[2014]: time="2025-01-17T12:03:40.575453866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:40.576834 containerd[2014]: time="2025-01-17T12:03:40.575561134Z" level=info msg="RemovePodSandbox \"8ed3b972080b5eeb74e7fdebe79bc9a10340a01d9872bbd4733c3be34905ba48\" returns successfully" Jan 17 12:03:40.576834 containerd[2014]: time="2025-01-17T12:03:40.576357094Z" level=info msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" Jan 17 12:03:40.729057 sshd[5958]: Accepted publickey for core from 139.178.68.195 port 46566 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:40.737894 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:40.755366 systemd-logind[2000]: New session 12 of user core. Jan 17 12:03:40.761754 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.708 [WARNING][5972] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19beb928-574a-4f64-bc0a-c826b89636c5", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a", Pod:"coredns-7db6d8ff4d-sblcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d79d49db2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.710 [INFO][5972] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.710 [INFO][5972] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" iface="eth0" netns="" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.710 [INFO][5972] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.710 [INFO][5972] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.802 [INFO][5984] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.802 [INFO][5984] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.802 [INFO][5984] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.825 [WARNING][5984] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.825 [INFO][5984] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.829 [INFO][5984] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:40.841225 containerd[2014]: 2025-01-17 12:03:40.837 [INFO][5972] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:40.841225 containerd[2014]: time="2025-01-17T12:03:40.841180475Z" level=info msg="TearDown network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" successfully" Jan 17 12:03:40.841225 containerd[2014]: time="2025-01-17T12:03:40.841222367Z" level=info msg="StopPodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" returns successfully" Jan 17 12:03:40.844661 containerd[2014]: time="2025-01-17T12:03:40.842178587Z" level=info msg="RemovePodSandbox for \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" Jan 17 12:03:40.844661 containerd[2014]: time="2025-01-17T12:03:40.842228483Z" level=info msg="Forcibly stopping sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\"" Jan 17 12:03:41.082961 containerd[2014]: time="2025-01-17T12:03:41.081702056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:41.085980 containerd[2014]: time="2025-01-17T12:03:41.085775900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:03:41.092205 containerd[2014]: time="2025-01-17T12:03:41.089896424Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:41.113937 containerd[2014]: time="2025-01-17T12:03:41.112423928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:41.125366 containerd[2014]: time="2025-01-17T12:03:41.125139681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.047254719s" Jan 17 12:03:41.125366 containerd[2014]: time="2025-01-17T12:03:41.125226837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 
12:03:41.133329 containerd[2014]: time="2025-01-17T12:03:41.132672513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:03:41.136742 containerd[2014]: time="2025-01-17T12:03:41.136385673Z" level=info msg="CreateContainer within sandbox \"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:03:41.187243 containerd[2014]: time="2025-01-17T12:03:41.187184037Z" level=info msg="CreateContainer within sandbox \"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ca868eeee315d47d8fd4b6da19b108166e0f7d72290d90c30581e00514df4a34\"" Jan 17 12:03:41.192152 sshd[5958]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:41.197843 containerd[2014]: time="2025-01-17T12:03:41.194615877Z" level=info msg="StartContainer for \"ca868eeee315d47d8fd4b6da19b108166e0f7d72290d90c30581e00514df4a34\"" Jan 17 12:03:41.207531 systemd[1]: sshd@11-172.31.20.160:22-139.178.68.195:46566.service: Deactivated successfully. Jan 17 12:03:41.214789 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:03:41.220446 systemd-logind[2000]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:03:41.224388 systemd-logind[2000]: Removed session 12. Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.114 [WARNING][6007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"19beb928-574a-4f64-bc0a-c826b89636c5", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"4b229941674b71662da9a9bee54f7b84427aeb4aa9ff43dc118ceb4ab101091a", Pod:"coredns-7db6d8ff4d-sblcc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86d79d49db2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.115 [INFO][6007] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.115 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" iface="eth0" netns="" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.115 [INFO][6007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.115 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.248 [INFO][6017] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.248 [INFO][6017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.248 [INFO][6017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.275 [WARNING][6017] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.275 [INFO][6017] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" HandleID="k8s-pod-network.ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--sblcc-eth0" Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.282 [INFO][6017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:41.319301 containerd[2014]: 2025-01-17 12:03:41.310 [INFO][6007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1" Jan 17 12:03:41.320180 containerd[2014]: time="2025-01-17T12:03:41.319352518Z" level=info msg="TearDown network for sandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" successfully" Jan 17 12:03:41.334332 containerd[2014]: time="2025-01-17T12:03:41.334150954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 12:03:41.334332 containerd[2014]: time="2025-01-17T12:03:41.334274266Z" level=info msg="RemovePodSandbox \"ac9e8c88ef2101f843d903d7866bc9faa1734098f5d9de0dccbcb32cd3145bc1\" returns successfully" Jan 17 12:03:41.337500 containerd[2014]: time="2025-01-17T12:03:41.336438166Z" level=info msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" Jan 17 12:03:41.348284 systemd[1]: Started cri-containerd-ca868eeee315d47d8fd4b6da19b108166e0f7d72290d90c30581e00514df4a34.scope - libcontainer container ca868eeee315d47d8fd4b6da19b108166e0f7d72290d90c30581e00514df4a34. Jan 17 12:03:41.466363 containerd[2014]: time="2025-01-17T12:03:41.466288102Z" level=info msg="StartContainer for \"ca868eeee315d47d8fd4b6da19b108166e0f7d72290d90c30581e00514df4a34\" returns successfully" Jan 17 12:03:41.567938 containerd[2014]: time="2025-01-17T12:03:41.563697191Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:41.567938 containerd[2014]: time="2025-01-17T12:03:41.566407091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:03:41.570057 containerd[2014]: time="2025-01-17T12:03:41.569885795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 437.153558ms" Jan 17 12:03:41.570057 containerd[2014]: time="2025-01-17T12:03:41.569985719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:03:41.574452 containerd[2014]: time="2025-01-17T12:03:41.574387055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:03:41.577248 containerd[2014]: time="2025-01-17T12:03:41.577177463Z" level=info msg="CreateContainer within sandbox \"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.505 [WARNING][6060] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da31367-17d6-48fb-b3e8-b7aa30d9e927", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9", Pod:"calico-apiserver-9b75cd897-2lj2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b665bfca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.506 [INFO][6060] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.506 [INFO][6060] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" iface="eth0" netns="" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.506 [INFO][6060] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.506 [INFO][6060] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.558 [INFO][6075] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.558 [INFO][6075] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.558 [INFO][6075] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.582 [WARNING][6075] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.582 [INFO][6075] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.586 [INFO][6075] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:41.600760 containerd[2014]: 2025-01-17 12:03:41.590 [INFO][6060] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.600760 containerd[2014]: time="2025-01-17T12:03:41.600724331Z" level=info msg="TearDown network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" successfully" Jan 17 12:03:41.604309 containerd[2014]: time="2025-01-17T12:03:41.600778979Z" level=info msg="StopPodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" returns successfully" Jan 17 12:03:41.604309 containerd[2014]: time="2025-01-17T12:03:41.602481647Z" level=info msg="RemovePodSandbox for \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" Jan 17 12:03:41.604309 containerd[2014]: time="2025-01-17T12:03:41.602533163Z" level=info msg="Forcibly stopping sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\"" Jan 17 12:03:41.616702 containerd[2014]: time="2025-01-17T12:03:41.616616339Z" level=info msg="CreateContainer within sandbox \"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ddafbe25ee4f500da111dbe4fbb8d0418a87217430f9c0e9b2993162dba2c260\"" Jan 17 12:03:41.621991 containerd[2014]: time="2025-01-17T12:03:41.621874631Z" level=info msg="StartContainer for \"ddafbe25ee4f500da111dbe4fbb8d0418a87217430f9c0e9b2993162dba2c260\"" Jan 17 12:03:41.728236 systemd[1]: Started cri-containerd-ddafbe25ee4f500da111dbe4fbb8d0418a87217430f9c0e9b2993162dba2c260.scope - libcontainer container ddafbe25ee4f500da111dbe4fbb8d0418a87217430f9c0e9b2993162dba2c260. Jan 17 12:03:41.870026 containerd[2014]: time="2025-01-17T12:03:41.868799532Z" level=info msg="StartContainer for \"ddafbe25ee4f500da111dbe4fbb8d0418a87217430f9c0e9b2993162dba2c260\" returns successfully" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.755 [WARNING][6094] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"3da31367-17d6-48fb-b3e8-b7aa30d9e927", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"f1a042ed22ac75419acaa0b484247b6d993e0efd16eefc1cd89066fa1b9280e9", Pod:"calico-apiserver-9b75cd897-2lj2c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali90b665bfca7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.755 [INFO][6094] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.755 [INFO][6094] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" iface="eth0" netns="" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.755 [INFO][6094] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.755 [INFO][6094] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.839 [INFO][6120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.839 [INFO][6120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.839 [INFO][6120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.858 [WARNING][6120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.858 [INFO][6120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" HandleID="k8s-pod-network.deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--2lj2c-eth0" Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.865 [INFO][6120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:41.875702 containerd[2014]: 2025-01-17 12:03:41.870 [INFO][6094] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145" Jan 17 12:03:41.875702 containerd[2014]: time="2025-01-17T12:03:41.875586540Z" level=info msg="TearDown network for sandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" successfully" Jan 17 12:03:41.885766 containerd[2014]: time="2025-01-17T12:03:41.885524892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:41.885766 containerd[2014]: time="2025-01-17T12:03:41.885625356Z" level=info msg="RemovePodSandbox \"deb1f1c9c1c7cea2b11e2118404c9d5ced50547b872a68b9d67295cc67768145\" returns successfully" Jan 17 12:03:41.886294 containerd[2014]: time="2025-01-17T12:03:41.886255536Z" level=info msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" Jan 17 12:03:42.088666 kubelet[3563]: I0117 12:03:42.088549 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b75cd897-2lj2c" podStartSLOduration=30.905335854 podStartE2EDuration="38.088504473s" podCreationTimestamp="2025-01-17 12:03:04 +0000 UTC" firstStartedPulling="2025-01-17 12:03:31.893386515 +0000 UTC m=+52.866155052" lastFinishedPulling="2025-01-17 12:03:39.076555134 +0000 UTC m=+60.049323671" observedRunningTime="2025-01-17 12:03:40.036086767 +0000 UTC m=+61.008855328" watchObservedRunningTime="2025-01-17 12:03:42.088504473 +0000 UTC m=+63.061273010" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:41.994 [WARNING][6153] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e", ResourceVersion:"889", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839", Pod:"calico-apiserver-9b75cd897-82x5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f137a5f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:41.995 [INFO][6153] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:41.995 [INFO][6153] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" iface="eth0" netns="" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:41.995 [INFO][6153] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:41.995 [INFO][6153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.079 [INFO][6160] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.079 [INFO][6160] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.079 [INFO][6160] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.105 [WARNING][6160] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.105 [INFO][6160] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.108 [INFO][6160] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:42.119803 containerd[2014]: 2025-01-17 12:03:42.114 [INFO][6153] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.120754 containerd[2014]: time="2025-01-17T12:03:42.119884809Z" level=info msg="TearDown network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" successfully" Jan 17 12:03:42.120754 containerd[2014]: time="2025-01-17T12:03:42.119998857Z" level=info msg="StopPodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" returns successfully" Jan 17 12:03:42.125459 containerd[2014]: time="2025-01-17T12:03:42.125227690Z" level=info msg="RemovePodSandbox for \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" Jan 17 12:03:42.125610 containerd[2014]: time="2025-01-17T12:03:42.125481118Z" level=info msg="Forcibly stopping sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\"" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.256 [WARNING][6182] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0", GenerateName:"calico-apiserver-9b75cd897-", Namespace:"calico-apiserver", SelfLink:"", UID:"e5afc712-523e-4ec9-a9bc-ea9f7fc8548e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 3, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9b75cd897", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"e6ec82b4f30696383018fcb15ab4420e3d739946c42ccd1f88f4bbb2051c6839", Pod:"calico-apiserver-9b75cd897-82x5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.118.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie2f137a5f48", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.256 [INFO][6182] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.256 [INFO][6182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" iface="eth0" netns="" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.256 [INFO][6182] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.256 [INFO][6182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.334 [INFO][6189] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.334 [INFO][6189] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.334 [INFO][6189] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.354 [WARNING][6189] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.354 [INFO][6189] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" HandleID="k8s-pod-network.3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Workload="ip--172--31--20--160-k8s-calico--apiserver--9b75cd897--82x5r-eth0" Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.357 [INFO][6189] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:42.370082 containerd[2014]: 2025-01-17 12:03:42.363 [INFO][6182] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a" Jan 17 12:03:42.371409 containerd[2014]: time="2025-01-17T12:03:42.370160327Z" level=info msg="TearDown network for sandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" successfully" Jan 17 12:03:42.381463 containerd[2014]: time="2025-01-17T12:03:42.381386351Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:42.381617 containerd[2014]: time="2025-01-17T12:03:42.381492215Z" level=info msg="RemovePodSandbox \"3cd12f82d27e77763de91db2af8468f3f6195b21b088152b3f20d513f766378a\" returns successfully" Jan 17 12:03:42.383403 containerd[2014]: time="2025-01-17T12:03:42.383344151Z" level=info msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.494 [WARNING][6207] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb9ce353-9364-48a8-8b98-45115ba0dad6", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3", Pod:"coredns-7db6d8ff4d-wt5l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ddbdb62e56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.494 [INFO][6207] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.495 [INFO][6207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" iface="eth0" netns="" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.495 [INFO][6207] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.495 [INFO][6207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.557 [INFO][6213] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.557 [INFO][6213] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.557 [INFO][6213] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.577 [WARNING][6213] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.578 [INFO][6213] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.581 [INFO][6213] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:42.589594 containerd[2014]: 2025-01-17 12:03:42.587 [INFO][6207] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.589594 containerd[2014]: time="2025-01-17T12:03:42.589425792Z" level=info msg="TearDown network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" successfully" Jan 17 12:03:42.589594 containerd[2014]: time="2025-01-17T12:03:42.589463184Z" level=info msg="StopPodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" returns successfully" Jan 17 12:03:42.592637 containerd[2014]: time="2025-01-17T12:03:42.591443508Z" level=info msg="RemovePodSandbox for \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" Jan 17 12:03:42.592637 containerd[2014]: time="2025-01-17T12:03:42.591496956Z" level=info msg="Forcibly stopping sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\"" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.714 [WARNING][6232] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"bb9ce353-9364-48a8-8b98-45115ba0dad6", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 2, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-160", ContainerID:"d9d2401cd97c610c0ed549d57ce66e4b39ab6ac01c098777dc7e3ab3b15e1ae3", Pod:"coredns-7db6d8ff4d-wt5l4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.118.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4ddbdb62e56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.715 [INFO][6232] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.715 [INFO][6232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" iface="eth0" netns="" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.715 [INFO][6232] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.715 [INFO][6232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.785 [INFO][6238] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.785 [INFO][6238] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.785 [INFO][6238] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.801 [WARNING][6238] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.801 [INFO][6238] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" HandleID="k8s-pod-network.18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Workload="ip--172--31--20--160-k8s-coredns--7db6d8ff4d--wt5l4-eth0" Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.804 [INFO][6238] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:03:42.812454 containerd[2014]: 2025-01-17 12:03:42.808 [INFO][6232] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753" Jan 17 12:03:42.815369 containerd[2014]: time="2025-01-17T12:03:42.814039813Z" level=info msg="TearDown network for sandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" successfully" Jan 17 12:03:42.823262 containerd[2014]: time="2025-01-17T12:03:42.823142149Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:03:42.823552 containerd[2014]: time="2025-01-17T12:03:42.823512001Z" level=info msg="RemovePodSandbox \"18d5a01425edf61e92c13f06f2bde6d7a2179b3fd4ba88e58704bdcc4ed7f753\" returns successfully" Jan 17 12:03:43.069093 kubelet[3563]: I0117 12:03:43.066989 3563 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:03:43.302225 containerd[2014]: time="2025-01-17T12:03:43.301727651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:43.306631 containerd[2014]: time="2025-01-17T12:03:43.305625455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:03:43.309275 containerd[2014]: time="2025-01-17T12:03:43.309098771Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:43.321078 containerd[2014]: time="2025-01-17T12:03:43.319644563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:03:43.327422 containerd[2014]: time="2025-01-17T12:03:43.325708307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.751252816s" Jan 17 12:03:43.327422 containerd[2014]: 
time="2025-01-17T12:03:43.326198747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:03:43.334682 containerd[2014]: time="2025-01-17T12:03:43.334454208Z" level=info msg="CreateContainer within sandbox \"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:03:43.370130 containerd[2014]: time="2025-01-17T12:03:43.370048860Z" level=info msg="CreateContainer within sandbox \"0dff31a443878acc37e38044859c264e849379e951290b2959eba890a0161419\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"144622cfa002e82ddeda9cc1b1f867f7bc709de417a83556f9ea41baa32fb31c\"" Jan 17 12:03:43.373937 containerd[2014]: time="2025-01-17T12:03:43.373855884Z" level=info msg="StartContainer for \"144622cfa002e82ddeda9cc1b1f867f7bc709de417a83556f9ea41baa32fb31c\"" Jan 17 12:03:43.479367 systemd[1]: Started cri-containerd-144622cfa002e82ddeda9cc1b1f867f7bc709de417a83556f9ea41baa32fb31c.scope - libcontainer container 144622cfa002e82ddeda9cc1b1f867f7bc709de417a83556f9ea41baa32fb31c. Jan 17 12:03:43.589008 containerd[2014]: time="2025-01-17T12:03:43.588705889Z" level=info msg="StartContainer for \"144622cfa002e82ddeda9cc1b1f867f7bc709de417a83556f9ea41baa32fb31c\" returns successfully" Jan 17 12:03:43.832149 kubelet[3563]: I0117 12:03:43.832063 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-9b75cd897-82x5r" podStartSLOduration=32.890631319 podStartE2EDuration="39.832039598s" podCreationTimestamp="2025-01-17 12:03:04 +0000 UTC" firstStartedPulling="2025-01-17 12:03:34.631797964 +0000 UTC m=+55.604566501" lastFinishedPulling="2025-01-17 12:03:41.573206243 +0000 UTC m=+62.545974780" observedRunningTime="2025-01-17 12:03:42.092290785 +0000 UTC m=+63.065059346" watchObservedRunningTime="2025-01-17 12:03:43.832039598 +0000 UTC m=+64.804808135" Jan 17 12:03:44.503799 kubelet[3563]: I0117 12:03:44.503762 3563 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:03:44.504378 kubelet[3563]: I0117 12:03:44.504059 3563 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:03:44.634029 kubelet[3563]: I0117 12:03:44.633323 3563 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4b886" podStartSLOduration=29.488874965 podStartE2EDuration="38.633092666s" podCreationTimestamp="2025-01-17 12:03:06 +0000 UTC" firstStartedPulling="2025-01-17 12:03:34.184982378 +0000 UTC m=+55.157750903" lastFinishedPulling="2025-01-17 12:03:43.329200079 +0000 UTC m=+64.301968604" observedRunningTime="2025-01-17 12:03:44.160939968 +0000 UTC m=+65.133708529" watchObservedRunningTime="2025-01-17 12:03:44.633092666 +0000 UTC m=+65.605861203" Jan 17 12:03:46.230487 systemd[1]: Started sshd@12-172.31.20.160:22-139.178.68.195:50610.service - OpenSSH per-connection server daemon (139.178.68.195:50610). 
Jan 17 12:03:46.411064 sshd[6316]: Accepted publickey for core from 139.178.68.195 port 50610 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:46.414421 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:46.423127 systemd-logind[2000]: New session 13 of user core. Jan 17 12:03:46.430179 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:03:46.686171 sshd[6316]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:46.695208 systemd[1]: sshd@12-172.31.20.160:22-139.178.68.195:50610.service: Deactivated successfully. Jan 17 12:03:46.699469 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:03:46.702730 systemd-logind[2000]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:03:46.707901 systemd-logind[2000]: Removed session 13. Jan 17 12:03:51.724510 systemd[1]: Started sshd@13-172.31.20.160:22-139.178.68.195:50614.service - OpenSSH per-connection server daemon (139.178.68.195:50614). Jan 17 12:03:51.913671 sshd[6361]: Accepted publickey for core from 139.178.68.195 port 50614 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:51.917061 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:51.926592 systemd-logind[2000]: New session 14 of user core. Jan 17 12:03:51.934178 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:03:52.232873 sshd[6361]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:52.240767 systemd[1]: sshd@13-172.31.20.160:22-139.178.68.195:50614.service: Deactivated successfully. Jan 17 12:03:52.245458 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:03:52.247256 systemd-logind[2000]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:03:52.249128 systemd-logind[2000]: Removed session 14. Jan 17 12:03:57.273451 systemd[1]: Started sshd@14-172.31.20.160:22-139.178.68.195:45078.service - OpenSSH per-connection server daemon (139.178.68.195:45078). Jan 17 12:03:57.451164 sshd[6379]: Accepted publickey for core from 139.178.68.195 port 45078 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:03:57.454033 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:03:57.463657 systemd-logind[2000]: New session 15 of user core. Jan 17 12:03:57.470178 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:03:57.716352 sshd[6379]: pam_unix(sshd:session): session closed for user core Jan 17 12:03:57.723135 systemd-logind[2000]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:03:57.723689 systemd[1]: sshd@14-172.31.20.160:22-139.178.68.195:45078.service: Deactivated successfully. Jan 17 12:03:57.727592 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:03:57.732205 systemd-logind[2000]: Removed session 15. Jan 17 12:03:59.384655 kubelet[3563]: I0117 12:03:59.384192 3563 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:04:02.753433 systemd[1]: Started sshd@15-172.31.20.160:22-139.178.68.195:45090.service - OpenSSH per-connection server daemon (139.178.68.195:45090). 
Jan 17 12:04:02.931688 sshd[6394]: Accepted publickey for core from 139.178.68.195 port 45090 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:02.934877 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:02.943027 systemd-logind[2000]: New session 16 of user core. Jan 17 12:04:02.950186 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:04:03.194145 systemd[1]: run-containerd-runc-k8s.io-72d80427c61f89cdfbc591eac1cb48dfa50b021a14e52052451d38e07b59d3c7-runc.MhsI2V.mount: Deactivated successfully. Jan 17 12:04:03.289277 sshd[6394]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:03.304272 systemd[1]: sshd@15-172.31.20.160:22-139.178.68.195:45090.service: Deactivated successfully. Jan 17 12:04:03.311517 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:04:03.314781 systemd-logind[2000]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:04:03.343666 systemd[1]: Started sshd@16-172.31.20.160:22-139.178.68.195:45106.service - OpenSSH per-connection server daemon (139.178.68.195:45106). Jan 17 12:04:03.349287 systemd-logind[2000]: Removed session 16. Jan 17 12:04:03.554634 sshd[6427]: Accepted publickey for core from 139.178.68.195 port 45106 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:03.557279 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:03.573280 systemd-logind[2000]: New session 17 of user core. Jan 17 12:04:03.578278 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:04:04.111726 sshd[6427]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:04.119211 systemd[1]: sshd@16-172.31.20.160:22-139.178.68.195:45106.service: Deactivated successfully. Jan 17 12:04:04.123779 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:04:04.125601 systemd-logind[2000]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:04:04.127808 systemd-logind[2000]: Removed session 17. Jan 17 12:04:04.148458 systemd[1]: Started sshd@17-172.31.20.160:22-139.178.68.195:45108.service - OpenSSH per-connection server daemon (139.178.68.195:45108). Jan 17 12:04:04.323537 sshd[6439]: Accepted publickey for core from 139.178.68.195 port 45108 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:04.326205 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:04.338855 systemd-logind[2000]: New session 18 of user core. Jan 17 12:04:04.345224 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:04:08.180847 sshd[6439]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:08.188880 systemd[1]: sshd@17-172.31.20.160:22-139.178.68.195:45108.service: Deactivated successfully. Jan 17 12:04:08.199149 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:04:08.202252 systemd[1]: session-18.scope: Consumed 1.005s CPU time. Jan 17 12:04:08.205342 systemd-logind[2000]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:04:08.240832 systemd[1]: Started sshd@18-172.31.20.160:22-139.178.68.195:43972.service - OpenSSH per-connection server daemon (139.178.68.195:43972). Jan 17 12:04:08.244400 systemd-logind[2000]: Removed session 18. 
Jan 17 12:04:08.443760 sshd[6463]: Accepted publickey for core from 139.178.68.195 port 43972 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:08.446324 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:08.455138 systemd-logind[2000]: New session 19 of user core. Jan 17 12:04:08.461181 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:04:09.038983 sshd[6463]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:09.049341 systemd[1]: sshd@18-172.31.20.160:22-139.178.68.195:43972.service: Deactivated successfully. Jan 17 12:04:09.056400 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:04:09.061268 systemd-logind[2000]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:04:09.086447 systemd[1]: Started sshd@19-172.31.20.160:22-139.178.68.195:43980.service - OpenSSH per-connection server daemon (139.178.68.195:43980). Jan 17 12:04:09.090026 systemd-logind[2000]: Removed session 19. Jan 17 12:04:09.276074 sshd[6474]: Accepted publickey for core from 139.178.68.195 port 43980 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:09.279399 sshd[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:09.289862 systemd-logind[2000]: New session 20 of user core. Jan 17 12:04:09.301203 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:04:09.566057 sshd[6474]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:09.573079 systemd[1]: sshd@19-172.31.20.160:22-139.178.68.195:43980.service: Deactivated successfully. Jan 17 12:04:09.579244 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:04:09.582749 systemd-logind[2000]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:04:09.586146 systemd-logind[2000]: Removed session 20. Jan 17 12:04:14.608722 systemd[1]: Started sshd@20-172.31.20.160:22-139.178.68.195:43986.service - OpenSSH per-connection server daemon (139.178.68.195:43986). Jan 17 12:04:14.811098 sshd[6510]: Accepted publickey for core from 139.178.68.195 port 43986 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:14.813843 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:14.825379 systemd-logind[2000]: New session 21 of user core. Jan 17 12:04:14.831702 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:04:15.116594 sshd[6510]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:15.123552 systemd[1]: sshd@20-172.31.20.160:22-139.178.68.195:43986.service: Deactivated successfully. Jan 17 12:04:15.127725 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:04:15.134142 systemd-logind[2000]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:04:15.137158 systemd-logind[2000]: Removed session 21. Jan 17 12:04:20.155455 systemd[1]: Started sshd@21-172.31.20.160:22-139.178.68.195:52434.service - OpenSSH per-connection server daemon (139.178.68.195:52434). Jan 17 12:04:20.330130 sshd[6546]: Accepted publickey for core from 139.178.68.195 port 52434 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:20.332945 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:20.341065 systemd-logind[2000]: New session 22 of user core. Jan 17 12:04:20.355173 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 17 12:04:20.594746 sshd[6546]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:20.600647 systemd[1]: sshd@21-172.31.20.160:22-139.178.68.195:52434.service: Deactivated successfully. Jan 17 12:04:20.604389 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:04:20.609242 systemd-logind[2000]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:04:20.611688 systemd-logind[2000]: Removed session 22. Jan 17 12:04:25.635457 systemd[1]: Started sshd@22-172.31.20.160:22-139.178.68.195:47618.service - OpenSSH per-connection server daemon (139.178.68.195:47618). Jan 17 12:04:25.809750 sshd[6561]: Accepted publickey for core from 139.178.68.195 port 47618 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:25.812516 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:25.820554 systemd-logind[2000]: New session 23 of user core. Jan 17 12:04:25.832183 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:04:26.082712 sshd[6561]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:26.089231 systemd[1]: sshd@22-172.31.20.160:22-139.178.68.195:47618.service: Deactivated successfully. Jan 17 12:04:26.093825 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:04:26.095731 systemd-logind[2000]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:04:26.098693 systemd-logind[2000]: Removed session 23. Jan 17 12:04:31.125509 systemd[1]: Started sshd@23-172.31.20.160:22-139.178.68.195:47626.service - OpenSSH per-connection server daemon (139.178.68.195:47626). Jan 17 12:04:31.303087 sshd[6574]: Accepted publickey for core from 139.178.68.195 port 47626 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:31.305953 sshd[6574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:31.315050 systemd-logind[2000]: New session 24 of user core. Jan 17 12:04:31.321229 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 12:04:31.567369 sshd[6574]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:31.575450 systemd[1]: sshd@23-172.31.20.160:22-139.178.68.195:47626.service: Deactivated successfully. Jan 17 12:04:31.579088 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:04:31.581554 systemd-logind[2000]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:04:31.584280 systemd-logind[2000]: Removed session 24. Jan 17 12:04:36.610493 systemd[1]: Started sshd@24-172.31.20.160:22-139.178.68.195:40726.service - OpenSSH per-connection server daemon (139.178.68.195:40726). Jan 17 12:04:36.797854 sshd[6586]: Accepted publickey for core from 139.178.68.195 port 40726 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:36.800695 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:36.809371 systemd-logind[2000]: New session 25 of user core. Jan 17 12:04:36.816196 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:04:37.063261 sshd[6586]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:37.069893 systemd[1]: sshd@24-172.31.20.160:22-139.178.68.195:40726.service: Deactivated successfully. Jan 17 12:04:37.074216 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:04:37.075818 systemd-logind[2000]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:04:37.079493 systemd-logind[2000]: Removed session 25. 
Jan 17 12:04:42.104352 systemd[1]: Started sshd@25-172.31.20.160:22-139.178.68.195:40734.service - OpenSSH per-connection server daemon (139.178.68.195:40734). Jan 17 12:04:42.281855 sshd[6600]: Accepted publickey for core from 139.178.68.195 port 40734 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:42.284712 sshd[6600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:42.292268 systemd-logind[2000]: New session 26 of user core. Jan 17 12:04:42.300189 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:04:42.539780 sshd[6600]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:42.547275 systemd-logind[2000]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:04:42.549052 systemd[1]: sshd@25-172.31.20.160:22-139.178.68.195:40734.service: Deactivated successfully. Jan 17 12:04:42.553311 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:04:42.555677 systemd-logind[2000]: Removed session 26. Jan 17 12:04:47.584412 systemd[1]: Started sshd@26-172.31.20.160:22-139.178.68.195:56980.service - OpenSSH per-connection server daemon (139.178.68.195:56980). Jan 17 12:04:47.771516 sshd[6653]: Accepted publickey for core from 139.178.68.195 port 56980 ssh2: RSA SHA256:Zqklpn1BD7cif5BxEt+bbixuKLYffvJBAg0qCUQaM3k Jan 17 12:04:47.774370 sshd[6653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:04:47.783718 systemd-logind[2000]: New session 27 of user core. Jan 17 12:04:47.787188 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:04:48.037381 sshd[6653]: pam_unix(sshd:session): session closed for user core Jan 17 12:04:48.044050 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 12:04:48.046465 systemd[1]: sshd@26-172.31.20.160:22-139.178.68.195:56980.service: Deactivated successfully. Jan 17 12:04:48.051616 systemd-logind[2000]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:04:48.053566 systemd-logind[2000]: Removed session 27. Jan 17 12:05:01.968866 systemd[1]: cri-containerd-0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141.scope: Deactivated successfully. Jan 17 12:05:01.969437 systemd[1]: cri-containerd-0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141.scope: Consumed 6.288s CPU time. Jan 17 12:05:02.006239 containerd[2014]: time="2025-01-17T12:05:02.006137090Z" level=info msg="shim disconnected" id=0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141 namespace=k8s.io Jan 17 12:05:02.006239 containerd[2014]: time="2025-01-17T12:05:02.006230918Z" level=warning msg="cleaning up after shim disconnected" id=0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141 namespace=k8s.io Jan 17 12:05:02.006855 containerd[2014]: time="2025-01-17T12:05:02.006252794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:05:02.013693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141-rootfs.mount: Deactivated successfully. 
Jan 17 12:05:02.325859 kubelet[3563]: I0117 12:05:02.325701 3563 scope.go:117] "RemoveContainer" containerID="0f8b53da1de9488882387dd53015709ef1321e55bb4f890538dbb8ff48d72141" Jan 17 12:05:02.333802 containerd[2014]: time="2025-01-17T12:05:02.333724408Z" level=info msg="CreateContainer within sandbox \"189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 12:05:02.365032 containerd[2014]: time="2025-01-17T12:05:02.364957648Z" level=info msg="CreateContainer within sandbox \"189c86a95f7f710df0cded733c0cb3783783a562f83e429cb17c3373d2e1b271\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fabbaa6dbdeff0650064df509126c27be5385b1b92024249ee086cf416860541\"" Jan 17 12:05:02.366406 containerd[2014]: time="2025-01-17T12:05:02.366355000Z" level=info msg="StartContainer for \"fabbaa6dbdeff0650064df509126c27be5385b1b92024249ee086cf416860541\"" Jan 17 12:05:02.427242 systemd[1]: Started cri-containerd-fabbaa6dbdeff0650064df509126c27be5385b1b92024249ee086cf416860541.scope - libcontainer container fabbaa6dbdeff0650064df509126c27be5385b1b92024249ee086cf416860541. Jan 17 12:05:02.475326 containerd[2014]: time="2025-01-17T12:05:02.475245593Z" level=info msg="StartContainer for \"fabbaa6dbdeff0650064df509126c27be5385b1b92024249ee086cf416860541\" returns successfully" Jan 17 12:05:02.577858 kubelet[3563]: E0117 12:05:02.576391 3563 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": context deadline exceeded" Jan 17 12:05:03.174147 systemd[1]: cri-containerd-3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4.scope: Deactivated successfully. Jan 17 12:05:03.175141 systemd[1]: cri-containerd-3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4.scope: Consumed 5.651s CPU time, 22.3M memory peak, 0B memory swap peak. Jan 17 12:05:03.226074 containerd[2014]: time="2025-01-17T12:05:03.225699460Z" level=info msg="shim disconnected" id=3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4 namespace=k8s.io Jan 17 12:05:03.226074 containerd[2014]: time="2025-01-17T12:05:03.225791764Z" level=warning msg="cleaning up after shim disconnected" id=3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4 namespace=k8s.io Jan 17 12:05:03.226074 containerd[2014]: time="2025-01-17T12:05:03.225813148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:05:03.232079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4-rootfs.mount: Deactivated successfully. 
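The block above is kubelet replacing a crashed container inside its existing sandbox: the tigera-operator shim disconnects, the old container ID is removed, and a new container is created in the same sandbox with Attempt:1; the kube-controller-manager container goes through the same cycle immediately after. From the API side this shows up as an incremented restartCount on the pod. A hedged sketch using the official Kubernetes Python client (kubeconfig access and list permissions assumed):

from kubernetes import client, config

# Assumes a reachable kubeconfig; inside a pod, config.load_incluster_config() would be used instead.
config.load_kube_config()
v1 = client.CoreV1Api()

# List containers that kubelet has restarted at least once (e.g. tigera-operator after the crash above).
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for cs in (pod.status.container_statuses or []):
        if cs.restart_count > 0:
            print(f"{pod.metadata.namespace}/{pod.metadata.name} {cs.name}: restarts={cs.restart_count}")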
Jan 17 12:05:03.329727 kubelet[3563]: I0117 12:05:03.329585 3563 scope.go:117] "RemoveContainer" containerID="3a442cf027c34fe39a58237b6c5462877d36487cffeb42567222b2845cf78ed4"
Jan 17 12:05:03.335878 containerd[2014]: time="2025-01-17T12:05:03.335780705Z" level=info msg="CreateContainer within sandbox \"8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 12:05:03.365811 containerd[2014]: time="2025-01-17T12:05:03.365242049Z" level=info msg="CreateContainer within sandbox \"8ed741057ed8a013564c175e433ced89eee567bcf09278072388a0815ef1fc2a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"3d84e97bacb1fbe51095f56ec4c6ea9b4ba5042a7531e9bc9878bca5f9001708\""
Jan 17 12:05:03.366112 containerd[2014]: time="2025-01-17T12:05:03.366056753Z" level=info msg="StartContainer for \"3d84e97bacb1fbe51095f56ec4c6ea9b4ba5042a7531e9bc9878bca5f9001708\""
Jan 17 12:05:03.415237 systemd[1]: Started cri-containerd-3d84e97bacb1fbe51095f56ec4c6ea9b4ba5042a7531e9bc9878bca5f9001708.scope - libcontainer container 3d84e97bacb1fbe51095f56ec4c6ea9b4ba5042a7531e9bc9878bca5f9001708.
Jan 17 12:05:03.483675 containerd[2014]: time="2025-01-17T12:05:03.483598722Z" level=info msg="StartContainer for \"3d84e97bacb1fbe51095f56ec4c6ea9b4ba5042a7531e9bc9878bca5f9001708\" returns successfully"
Jan 17 12:05:07.093185 systemd[1]: cri-containerd-c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac.scope: Deactivated successfully.
Jan 17 12:05:07.093692 systemd[1]: cri-containerd-c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac.scope: Consumed 2.444s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 17 12:05:07.136017 containerd[2014]: time="2025-01-17T12:05:07.135879452Z" level=info msg="shim disconnected" id=c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac namespace=k8s.io
Jan 17 12:05:07.139223 containerd[2014]: time="2025-01-17T12:05:07.136562636Z" level=warning msg="cleaning up after shim disconnected" id=c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac namespace=k8s.io
Jan 17 12:05:07.139223 containerd[2014]: time="2025-01-17T12:05:07.136609496Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 12:05:07.138793 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac-rootfs.mount: Deactivated successfully.
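[Editor's note] The repeated pattern above (scope deactivated, shim disconnected, kubelet "RemoveContainer", CreateContainer with Attempt:1, StartContainer) is the kubelet restarting an exited container, here kube-controller-manager after tigera-operator. A small sketch for watching those task exits directly from containerd's event stream is below; it assumes the containerd 1.x Go client and the same socket path as before, and it only prints event topics rather than decoding payloads.

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket path
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Subscribe to all events and filter in code, to avoid depending on
        // the filter-expression syntax.
        envelopes, errs := client.Subscribe(ctx)
        for {
            select {
            case env := <-envelopes:
                // "/tasks/exit" is the event that precedes the
                // "shim disconnected" cleanup seen in the journal.
                if env.Topic == "/tasks/exit" {
                    fmt.Printf("%s %s %s\n", env.Timestamp, env.Namespace, env.Topic)
                }
            case err := <-errs:
                log.Fatal(err)
            }
        }
    }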
Jan 17 12:05:07.163374 containerd[2014]: time="2025-01-17T12:05:07.163267460Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:05:07Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 17 12:05:07.349119 kubelet[3563]: I0117 12:05:07.348990 3563 scope.go:117] "RemoveContainer" containerID="c5a37e474e1b03fcebbdf145bd664d0f8692fc5dc59288b63ad8ced4370501ac"
Jan 17 12:05:07.353026 containerd[2014]: time="2025-01-17T12:05:07.352838505Z" level=info msg="CreateContainer within sandbox \"98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 12:05:07.379249 containerd[2014]: time="2025-01-17T12:05:07.378837129Z" level=info msg="CreateContainer within sandbox \"98fbf1db043882d46a4a9e871abbcb2fb718ae9844df5f022b00c4e8ad19a134\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9d69d3965b4787f5b33f4a9e9e0a67300f741593034ed84608227d71a9242ffb\""
Jan 17 12:05:07.379748 containerd[2014]: time="2025-01-17T12:05:07.379693797Z" level=info msg="StartContainer for \"9d69d3965b4787f5b33f4a9e9e0a67300f741593034ed84608227d71a9242ffb\""
Jan 17 12:05:07.441233 systemd[1]: Started cri-containerd-9d69d3965b4787f5b33f4a9e9e0a67300f741593034ed84608227d71a9242ffb.scope - libcontainer container 9d69d3965b4787f5b33f4a9e9e0a67300f741593034ed84608227d71a9242ffb.
Jan 17 12:05:07.508245 containerd[2014]: time="2025-01-17T12:05:07.508172386Z" level=info msg="StartContainer for \"9d69d3965b4787f5b33f4a9e9e0a67300f741593034ed84608227d71a9242ffb\" returns successfully"
Jan 17 12:05:12.577523 kubelet[3563]: E0117 12:05:12.577439 3563 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 17 12:05:22.578789 kubelet[3563]: E0117 12:05:22.578709 3563 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.160:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-160?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
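[Editor's note] Between 12:05:02 and 12:05:22 the same lease PUT fails three times while tigera-operator, kube-controller-manager, and kube-scheduler are each restarted once (Attempt:1). A throwaway sketch for tallying those two signals from a saved copy of this journal is below; it only matches the literal strings that appear in the entries above and reads the log from standard input.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        leaseFailures := 0
        restarts := 0

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long

        for sc.Scan() {
            line := sc.Text()
            if strings.Contains(line, `"Failed to update lease"`) {
                leaseFailures++
            }
            // Each kubelet restart of a container logs a RemoveContainer entry.
            if strings.Contains(line, `"RemoveContainer" containerID=`) {
                restarts++
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }

        fmt.Printf("lease update failures: %d\n", leaseFailures)
        fmt.Printf("container restarts:    %d\n", restarts)
    }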