Sep 4 17:09:45.260357 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:09:45.260405 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:09:45.260432 kernel: KASLR disabled due to lack of seed
Sep 4 17:09:45.260449 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:09:45.260466 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:09:45.260482 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:09:45.260501 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:09:45.260517 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:09:45.260534 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:09:45.260551 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:09:45.260573 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:09:45.260589 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:09:45.260634 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:09:45.260652 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:09:45.260671 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:09:45.260695 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:09:45.260713 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:09:45.260729 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:09:45.260746 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:09:45.260763 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:09:45.260779 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:09:45.260797 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:09:45.260815 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:09:45.260834 kernel: Zone ranges:
Sep 4 17:09:45.260852 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:09:45.260869 kernel: DMA32 empty
Sep 4 17:09:45.260893 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:09:45.260911 kernel: Movable zone start for each node
Sep 4 17:09:45.260928 kernel: Early memory node ranges
Sep 4 17:09:45.260945 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:09:45.260962 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:09:45.260979 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:09:45.260996 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:09:45.261013 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:09:45.261030 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:09:45.261048 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:09:45.261065 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:09:45.261082 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:09:45.261105 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:09:45.261123 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:09:45.261148 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:09:45.261166 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:09:45.261184 kernel: psci: Trusted OS migration not required
Sep 4 17:09:45.261208 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:09:45.261226 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:09:45.261245 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:09:45.261263 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:09:45.261281 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:09:45.261299 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:09:45.261317 kernel: CPU features: detected: Spectre-v2
Sep 4 17:09:45.261335 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:09:45.261353 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:09:45.261372 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:09:45.261391 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:09:45.261414 kernel: alternatives: applying boot alternatives
Sep 4 17:09:45.261435 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:09:45.261455 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:09:45.261473 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:09:45.261491 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:09:45.261508 kernel: Fallback order for Node 0: 0
Sep 4 17:09:45.261526 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:09:45.261543 kernel: Policy zone: Normal
Sep 4 17:09:45.261561 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:09:45.261578 kernel: software IO TLB: area num 2.
Sep 4 17:09:45.265139 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:09:45.265186 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Sep 4 17:09:45.265205 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:09:45.265223 kernel: trace event string verifier disabled
Sep 4 17:09:45.265241 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:09:45.265261 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:09:45.265279 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:09:45.265298 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:09:45.265316 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:09:45.265334 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:09:45.265352 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:09:45.265370 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:09:45.265392 kernel: GICv3: 96 SPIs implemented
Sep 4 17:09:45.265410 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:09:45.265429 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:09:45.265446 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:09:45.265465 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:09:45.265482 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:09:45.265501 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:09:45.265520 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:09:45.265538 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:09:45.265556 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:09:45.265574 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:09:45.265627 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:09:45.265659 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:09:45.265678 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:09:45.265696 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:09:45.265714 kernel: Console: colour dummy device 80x25
Sep 4 17:09:45.265733 kernel: printk: console [tty1] enabled
Sep 4 17:09:45.265751 kernel: ACPI: Core revision 20230628
Sep 4 17:09:45.265770 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:09:45.265788 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:09:45.265807 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:09:45.265825 kernel: SELinux: Initializing.
Sep 4 17:09:45.265849 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:09:45.265867 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:09:45.265887 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:09:45.265905 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:09:45.265924 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:09:45.265942 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:09:45.265961 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:09:45.265980 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:09:45.265998 kernel: Remapping and enabling EFI services.
Sep 4 17:09:45.266021 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:09:45.266039 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:09:45.266057 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:09:45.266075 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:09:45.266094 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:09:45.266111 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:09:45.266129 kernel: SMP: Total of 2 processors activated.
Sep 4 17:09:45.266148 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:09:45.266166 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:09:45.266188 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:09:45.266206 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:09:45.266235 kernel: alternatives: applying system-wide alternatives
Sep 4 17:09:45.266259 kernel: devtmpfs: initialized
Sep 4 17:09:45.266277 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:09:45.266296 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:09:45.266315 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:09:45.266333 kernel: SMBIOS 3.0.0 present.
Sep 4 17:09:45.266352 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:09:45.266376 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:09:45.266395 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:09:45.266414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:09:45.266433 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:09:45.266452 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:09:45.266471 kernel: audit: type=2000 audit(0.306:1): state=initialized audit_enabled=0 res=1
Sep 4 17:09:45.266490 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:09:45.266514 kernel: cpuidle: using governor menu
Sep 4 17:09:45.266533 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:09:45.266552 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:09:45.266572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:09:45.266614 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:09:45.266640 kernel: Modules: 17600 pages in range for non-PLT usage
Sep 4 17:09:45.266660 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:09:45.266679 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:09:45.266698 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:09:45.266724 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:09:45.266743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:09:45.266762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:09:45.266782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:09:45.266801 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:09:45.266821 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:09:45.266840 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:09:45.266860 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:09:45.266879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:09:45.266903 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:09:45.266923 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:09:45.266942 kernel: ACPI: Interpreter enabled
Sep 4 17:09:45.266961 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:09:45.266980 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:09:45.267000 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:09:45.267353 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:09:45.269470 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:09:45.273991 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:09:45.274216 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:09:45.274443 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:09:45.274470 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:09:45.274490 kernel: acpiphp: Slot [1] registered
Sep 4 17:09:45.274509 kernel: acpiphp: Slot [2] registered
Sep 4 17:09:45.274529 kernel: acpiphp: Slot [3] registered
Sep 4 17:09:45.274548 kernel: acpiphp: Slot [4] registered
Sep 4 17:09:45.274566 kernel: acpiphp: Slot [5] registered
Sep 4 17:09:45.274634 kernel: acpiphp: Slot [6] registered
Sep 4 17:09:45.274661 kernel: acpiphp: Slot [7] registered
Sep 4 17:09:45.274680 kernel: acpiphp: Slot [8] registered
Sep 4 17:09:45.274700 kernel: acpiphp: Slot [9] registered
Sep 4 17:09:45.274720 kernel: acpiphp: Slot [10] registered
Sep 4 17:09:45.274739 kernel: acpiphp: Slot [11] registered
Sep 4 17:09:45.274759 kernel: acpiphp: Slot [12] registered
Sep 4 17:09:45.274778 kernel: acpiphp: Slot [13] registered
Sep 4 17:09:45.274797 kernel: acpiphp: Slot [14] registered
Sep 4 17:09:45.274825 kernel: acpiphp: Slot [15] registered
Sep 4 17:09:45.274846 kernel: acpiphp: Slot [16] registered
Sep 4 17:09:45.274865 kernel: acpiphp: Slot [17] registered
Sep 4 17:09:45.274884 kernel: acpiphp: Slot [18] registered
Sep 4 17:09:45.274905 kernel: acpiphp: Slot [19] registered
Sep 4 17:09:45.274924 kernel: acpiphp: Slot [20] registered
Sep 4 17:09:45.274944 kernel: acpiphp: Slot [21] registered
Sep 4 17:09:45.274963 kernel: acpiphp: Slot [22] registered
Sep 4 17:09:45.274982 kernel: acpiphp: Slot [23] registered
Sep 4 17:09:45.275002 kernel: acpiphp: Slot [24] registered
Sep 4 17:09:45.275026 kernel: acpiphp: Slot [25] registered
Sep 4 17:09:45.275045 kernel: acpiphp: Slot [26] registered
Sep 4 17:09:45.275064 kernel: acpiphp: Slot [27] registered
Sep 4 17:09:45.275083 kernel: acpiphp: Slot [28] registered
Sep 4 17:09:45.275102 kernel: acpiphp: Slot [29] registered
Sep 4 17:09:45.275121 kernel: acpiphp: Slot [30] registered
Sep 4 17:09:45.275141 kernel: acpiphp: Slot [31] registered
Sep 4 17:09:45.275160 kernel: PCI host bridge to bus 0000:00
Sep 4 17:09:45.275452 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:09:45.276190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:09:45.276459 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:09:45.276769 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:09:45.277041 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:09:45.277276 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:09:45.277499 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:09:45.279959 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:09:45.280193 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:09:45.280430 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:09:45.280690 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:09:45.280922 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:09:45.281136 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:09:45.281360 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:09:45.281587 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:09:45.284580 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:09:45.284874 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:09:45.285113 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:09:45.285346 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:09:45.285573 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:09:45.287461 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:09:45.287725 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:09:45.287919 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:09:45.287946 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:09:45.287966 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:09:45.287986 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:09:45.288005 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:09:45.288025 kernel: iommu: Default domain type: Translated
Sep 4 17:09:45.288046 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:09:45.288075 kernel: efivars: Registered efivars operations
Sep 4 17:09:45.288094 kernel: vgaarb: loaded
Sep 4 17:09:45.288114 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:09:45.288134 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:09:45.288153 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:09:45.288172 kernel: pnp: PnP ACPI init
Sep 4 17:09:45.288444 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:09:45.288478 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:09:45.288505 kernel: NET: Registered PF_INET protocol family
Sep 4 17:09:45.288525 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:09:45.288545 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:09:45.288564 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:09:45.288583 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:09:45.288643 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:09:45.288666 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:09:45.288685 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:09:45.288704 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:09:45.288731 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:09:45.288751 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:09:45.288770 kernel: kvm [1]: HYP mode not available
Sep 4 17:09:45.288789 kernel: Initialise system trusted keyrings
Sep 4 17:09:45.288808 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:09:45.288827 kernel: Key type asymmetric registered
Sep 4 17:09:45.288846 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:09:45.288865 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:09:45.288884 kernel: io scheduler mq-deadline registered
Sep 4 17:09:45.288908 kernel: io scheduler kyber registered
Sep 4 17:09:45.288927 kernel: io scheduler bfq registered
Sep 4 17:09:45.289172 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:09:45.289202 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:09:45.289222 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:09:45.289241 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:09:45.289260 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:09:45.289279 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:09:45.289306 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:09:45.289520 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:09:45.289547 kernel: printk: console [ttyS0] disabled
Sep 4 17:09:45.289567 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:09:45.289586 kernel: printk: console [ttyS0] enabled
Sep 4 17:09:45.292280 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:09:45.292317 kernel: thunder_xcv, ver 1.0
Sep 4 17:09:45.292338 kernel: thunder_bgx, ver 1.0
Sep 4 17:09:45.292359 kernel: nicpf, ver 1.0
Sep 4 17:09:45.292379 kernel: nicvf, ver 1.0
Sep 4 17:09:45.292717 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:09:45.292957 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:09:44 UTC (1725469784)
Sep 4 17:09:45.292993 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:09:45.293013 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:09:45.293032 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:09:45.293052 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:09:45.293071 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:09:45.293090 kernel: Segment Routing with IPv6
Sep 4 17:09:45.293121 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:09:45.293142 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:09:45.293162 kernel: Key type dns_resolver registered
Sep 4 17:09:45.293181 kernel: registered taskstats version 1
Sep 4 17:09:45.293201 kernel: Loading compiled-in X.509 certificates
Sep 4 17:09:45.293221 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:09:45.293240 kernel: Key type .fscrypt registered
Sep 4 17:09:45.293258 kernel: Key type fscrypt-provisioning registered
Sep 4 17:09:45.293277 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:09:45.293302 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:09:45.293322 kernel: ima: No architecture policies found
Sep 4 17:09:45.293341 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:09:45.293360 kernel: clk: Disabling unused clocks
Sep 4 17:09:45.293379 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:09:45.293398 kernel: Run /init as init process
Sep 4 17:09:45.293417 kernel: with arguments:
Sep 4 17:09:45.293454 kernel: /init
Sep 4 17:09:45.293479 kernel: with environment:
Sep 4 17:09:45.293505 kernel: HOME=/
Sep 4 17:09:45.293525 kernel: TERM=linux
Sep 4 17:09:45.293543 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:09:45.293567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:09:45.293628 systemd[1]: Detected virtualization amazon.
Sep 4 17:09:45.293656 systemd[1]: Detected architecture arm64.
Sep 4 17:09:45.293676 systemd[1]: Running in initrd.
Sep 4 17:09:45.293696 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:09:45.293722 systemd[1]: Hostname set to .
Sep 4 17:09:45.293743 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:09:45.293763 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:09:45.293783 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:09:45.293804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:09:45.293826 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:09:45.293847 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:09:45.293872 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:09:45.293893 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:09:45.293917 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:09:45.293938 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:09:45.293958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:09:45.293979 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:09:45.293999 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:09:45.294024 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:09:45.294045 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:09:45.294065 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:09:45.294085 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:09:45.294105 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:09:45.294126 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:09:45.294146 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:09:45.294167 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:09:45.294187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:09:45.294212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:09:45.294232 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:09:45.294252 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:09:45.294273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:09:45.294293 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:09:45.294314 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:09:45.294335 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:09:45.294360 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:09:45.294387 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:45.294408 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:09:45.294429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:09:45.294449 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:09:45.294521 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 17:09:45.294571 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:09:45.294622 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:09:45.294648 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:45.294668 kernel: Bridge firewalling registered
Sep 4 17:09:45.294694 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:45.294715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:09:45.294736 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:09:45.294757 systemd-journald[250]: Journal started
Sep 4 17:09:45.294796 systemd-journald[250]: Runtime Journal (/run/log/journal/ec297292763b592a2d1c76824a2b38ca) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:09:45.236903 systemd-modules-load[251]: Inserted module 'overlay'
Sep 4 17:09:45.274677 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 4 17:09:45.309698 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:09:45.324333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:09:45.324412 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:09:45.348401 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:45.352285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:09:45.365950 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:09:45.385365 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:09:45.392182 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:09:45.412807 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:09:45.425264 dracut-cmdline[280]: dracut-dracut-053
Sep 4 17:09:45.429905 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:09:45.439250 dracut-cmdline[280]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:09:45.515576 systemd-resolved[293]: Positive Trust Anchors:
Sep 4 17:09:45.515631 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:09:45.515694 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:09:45.607633 kernel: SCSI subsystem initialized
Sep 4 17:09:45.614622 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:09:45.627819 kernel: iscsi: registered transport (tcp)
Sep 4 17:09:45.651144 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:09:45.651215 kernel: QLogic iSCSI HBA Driver
Sep 4 17:09:45.742648 kernel: random: crng init done
Sep 4 17:09:45.742871 systemd-resolved[293]: Defaulting to hostname 'linux'.
Sep 4 17:09:45.746391 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:09:45.750432 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:45.773234 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:09:45.786007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:09:45.822499 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:09:45.822646 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:09:45.824642 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:09:45.893691 kernel: raid6: neonx8 gen() 6706 MB/s
Sep 4 17:09:45.910661 kernel: raid6: neonx4 gen() 6512 MB/s
Sep 4 17:09:45.927647 kernel: raid6: neonx2 gen() 5408 MB/s
Sep 4 17:09:45.944653 kernel: raid6: neonx1 gen() 3931 MB/s
Sep 4 17:09:45.961651 kernel: raid6: int64x8 gen() 3796 MB/s
Sep 4 17:09:45.978654 kernel: raid6: int64x4 gen() 3694 MB/s
Sep 4 17:09:45.995648 kernel: raid6: int64x2 gen() 3578 MB/s
Sep 4 17:09:46.013541 kernel: raid6: int64x1 gen() 2740 MB/s
Sep 4 17:09:46.013643 kernel: raid6: using algorithm neonx8 gen() 6706 MB/s
Sep 4 17:09:46.031438 kernel: raid6: .... xor() 4825 MB/s, rmw enabled
Sep 4 17:09:46.031526 kernel: raid6: using neon recovery algorithm
Sep 4 17:09:46.039652 kernel: xor: measuring software checksum speed
Sep 4 17:09:46.041630 kernel: 8regs : 11093 MB/sec
Sep 4 17:09:46.043639 kernel: 32regs : 11998 MB/sec
Sep 4 17:09:46.045825 kernel: arm64_neon : 9647 MB/sec
Sep 4 17:09:46.045899 kernel: xor: using function: 32regs (11998 MB/sec)
Sep 4 17:09:46.133650 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:09:46.155437 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:09:46.175905 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:09:46.224103 systemd-udevd[468]: Using default interface naming scheme 'v255'.
Sep 4 17:09:46.233890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:09:46.261979 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:09:46.293629 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Sep 4 17:09:46.360484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:09:46.371921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:09:46.504578 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:09:46.518284 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:09:46.567427 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:09:46.584942 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:09:46.588270 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:09:46.609449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:09:46.620933 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:09:46.672028 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:09:46.725644 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:09:46.725740 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:09:46.738240 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:09:46.738670 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:09:46.766670 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b9:49:44:60:df
Sep 4 17:09:46.772099 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:09:46.784477 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:09:46.786891 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:46.796194 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:46.810818 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:09:46.810878 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:09:46.798471 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:09:46.798806 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:46.818911 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:46.825191 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:09:46.836142 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:09:46.836224 kernel: GPT:9289727 != 16777215
Sep 4 17:09:46.836272 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:09:46.836302 kernel: GPT:9289727 != 16777215
Sep 4 17:09:46.836695 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:09:46.836742 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:46.839161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:46.869128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:46.882068 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:09:46.924612 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:46.991196 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (513)
Sep 4 17:09:46.991294 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (540)
Sep 4 17:09:47.003658 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:09:47.126300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:09:47.126659 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:09:47.153722 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:09:47.170040 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:09:47.186066 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:09:47.206655 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:47.207029 disk-uuid[658]: Primary Header is updated.
Sep 4 17:09:47.207029 disk-uuid[658]: Secondary Entries is updated.
Sep 4 17:09:47.207029 disk-uuid[658]: Secondary Header is updated.
Sep 4 17:09:47.237696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:48.248650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:09:48.249570 disk-uuid[660]: The operation has completed successfully.
Sep 4 17:09:48.437312 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:09:48.439374 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:09:48.472907 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:09:48.492099 sh[1004]: Success
Sep 4 17:09:48.521646 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:09:48.639810 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:09:48.661859 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:09:48.675273 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:09:48.696950 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:09:48.697035 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:48.698839 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:09:48.698943 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:09:48.700148 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:09:48.770655 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:09:48.829373 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:09:48.832959 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:09:48.851040 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:09:48.858952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:09:48.883041 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:48.883116 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:48.883153 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:48.891948 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:48.909230 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:09:48.912632 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:48.935566 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:09:48.953055 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:09:49.068829 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:09:49.085973 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:09:49.141308 systemd-networkd[1208]: lo: Link UP
Sep 4 17:09:49.141332 systemd-networkd[1208]: lo: Gained carrier
Sep 4 17:09:49.144966 systemd-networkd[1208]: Enumeration completed
Sep 4 17:09:49.145799 systemd-networkd[1208]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:49.145806 systemd-networkd[1208]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:09:49.146930 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:09:49.159819 systemd[1]: Reached target network.target - Network.
Sep 4 17:09:49.160497 systemd-networkd[1208]: eth0: Link UP
Sep 4 17:09:49.160506 systemd-networkd[1208]: eth0: Gained carrier
Sep 4 17:09:49.160524 systemd-networkd[1208]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:49.192733 systemd-networkd[1208]: eth0: DHCPv4 address 172.31.21.183/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:09:49.377433 ignition[1125]: Ignition 2.18.0
Sep 4 17:09:49.377458 ignition[1125]: Stage: fetch-offline
Sep 4 17:09:49.378111 ignition[1125]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:49.378138 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:49.379810 ignition[1125]: Ignition finished successfully
Sep 4 17:09:49.384146 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:09:49.398933 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:09:49.431458 ignition[1218]: Ignition 2.18.0
Sep 4 17:09:49.431484 ignition[1218]: Stage: fetch
Sep 4 17:09:49.432979 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:49.433014 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:49.433157 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:49.474782 ignition[1218]: PUT result: OK
Sep 4 17:09:49.478145 ignition[1218]: parsed url from cmdline: ""
Sep 4 17:09:49.478164 ignition[1218]: no config URL provided
Sep 4 17:09:49.478180 ignition[1218]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:09:49.478208 ignition[1218]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:09:49.478258 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:49.481533 ignition[1218]: PUT result: OK
Sep 4 17:09:49.482081 ignition[1218]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:09:49.488190 ignition[1218]: GET result: OK
Sep 4 17:09:49.488426 ignition[1218]: parsing config with SHA512: f5cf82f2091fe918d5a4052b13f5ac3dd0f59da51c5e6eb5b6947629066e0ae66e2e8f66b113864481fac29255abe7d20df68320fceacb9c9f618139f0d25e95
Sep 4 17:09:49.501310 unknown[1218]: fetched base config from "system"
Sep 4 17:09:49.501985 ignition[1218]: fetch: fetch complete
Sep 4 17:09:49.501327 unknown[1218]: fetched base config from "system"
Sep 4 17:09:49.501997 ignition[1218]: fetch: fetch passed
Sep 4 17:09:49.501341 unknown[1218]: fetched user config from "aws"
Sep 4 17:09:49.502087 ignition[1218]: Ignition finished successfully
Sep 4 17:09:49.516300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:09:49.539036 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:09:49.564960 ignition[1225]: Ignition 2.18.0
Sep 4 17:09:49.565482 ignition[1225]: Stage: kargs
Sep 4 17:09:49.566158 ignition[1225]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:49.566183 ignition[1225]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:49.566366 ignition[1225]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:49.570577 ignition[1225]: PUT result: OK
Sep 4 17:09:49.579246 ignition[1225]: kargs: kargs passed
Sep 4 17:09:49.579346 ignition[1225]: Ignition finished successfully
Sep 4 17:09:49.585576 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:09:49.592893 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:09:49.632466 ignition[1232]: Ignition 2.18.0
Sep 4 17:09:49.632494 ignition[1232]: Stage: disks
Sep 4 17:09:49.633152 ignition[1232]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:49.633177 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:49.634268 ignition[1232]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:49.639700 ignition[1232]: PUT result: OK
Sep 4 17:09:49.646347 ignition[1232]: disks: disks passed
Sep 4 17:09:49.646535 ignition[1232]: Ignition finished successfully
Sep 4 17:09:49.651697 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:09:49.656699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:09:49.661437 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:09:49.663911 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:09:49.672162 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:09:49.675032 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:09:49.684871 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:09:49.736719 systemd-fsck[1241]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:09:49.745954 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:09:49.763013 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:09:49.839620 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:09:49.840698 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:09:49.843805 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:09:49.865828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:09:49.872798 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:09:49.881126 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:09:49.881225 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:09:49.881276 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:09:49.900837 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1260)
Sep 4 17:09:49.905375 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:49.905432 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:49.905459 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:49.913637 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:49.914646 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:09:49.927907 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:09:49.935642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:09:50.330801 systemd-networkd[1208]: eth0: Gained IPv6LL
Sep 4 17:09:50.341196 initrd-setup-root[1285]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:09:50.361550 initrd-setup-root[1292]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:09:50.382375 initrd-setup-root[1299]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:09:50.392832 initrd-setup-root[1306]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:09:50.743082 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:09:50.763904 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:09:50.773904 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:09:50.787412 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:09:50.791678 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:50.835022 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:09:50.846297 ignition[1373]: INFO : Ignition 2.18.0
Sep 4 17:09:50.846297 ignition[1373]: INFO : Stage: mount
Sep 4 17:09:50.850016 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:50.850016 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:50.850016 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:50.850016 ignition[1373]: INFO : PUT result: OK
Sep 4 17:09:50.872356 ignition[1373]: INFO : mount: mount passed
Sep 4 17:09:50.874036 ignition[1373]: INFO : Ignition finished successfully
Sep 4 17:09:50.876396 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:09:50.884858 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:09:50.923025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:09:50.940637 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1386)
Sep 4 17:09:50.946641 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:09:50.946737 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:09:50.946767 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:09:50.951648 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:09:50.956013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:09:50.993459 ignition[1403]: INFO : Ignition 2.18.0
Sep 4 17:09:50.993459 ignition[1403]: INFO : Stage: files
Sep 4 17:09:50.998191 ignition[1403]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:50.998191 ignition[1403]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:50.998191 ignition[1403]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:50.998191 ignition[1403]: INFO : PUT result: OK
Sep 4 17:09:51.009468 ignition[1403]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:09:51.012428 ignition[1403]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:09:51.012428 ignition[1403]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:09:51.045291 ignition[1403]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:09:51.048173 ignition[1403]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:09:51.051250 unknown[1403]: wrote ssh authorized keys file for user: core
Sep 4 17:09:51.053473 ignition[1403]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:09:51.066801 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:09:51.066801 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:09:51.122105 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:09:51.195696 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:09:51.195696 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:09:51.195696 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:09:51.195696 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:09:51.209250 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Sep 4 17:09:51.547121 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:09:52.339573 ignition[1403]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Sep 4 17:09:52.339573 ignition[1403]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:09:52.354629 ignition[1403]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:09:52.358471 ignition[1403]: INFO : files: files passed
Sep 4 17:09:52.358471 ignition[1403]: INFO : Ignition finished successfully
Sep 4 17:09:52.362514 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:09:52.385727 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:09:52.392962 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:09:52.409231 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:09:52.409455 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:09:52.430261 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:52.430261 initrd-setup-root-after-ignition[1432]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:52.439118 initrd-setup-root-after-ignition[1436]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:09:52.444882 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:09:52.451871 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:09:52.466140 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:09:52.531096 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:09:52.532989 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:09:52.538176 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:09:52.543852 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:09:52.546088 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:09:52.559043 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:09:52.590115 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:09:52.613119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:09:52.640891 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:52.644157 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:09:52.649190 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:09:52.650309 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:09:52.650551 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:09:52.660483 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:09:52.663247 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:09:52.668319 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:09:52.672707 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:09:52.675342 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:09:52.677941 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:09:52.680161 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:09:52.682734 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:09:52.684875 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:09:52.687093 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:09:52.688898 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:09:52.689139 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:09:52.691914 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:09:52.714174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:09:52.716558 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:09:52.720896 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:09:52.723629 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:09:52.723894 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:09:52.732308 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:09:52.732557 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:09:52.739165 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:09:52.739386 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:09:52.758121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:09:52.762843 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:09:52.763204 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:09:52.775056 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:09:52.779510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:09:52.780116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:09:52.786105 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:09:52.786343 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:09:52.806817 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:09:52.809246 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:09:52.824978 ignition[1456]: INFO : Ignition 2.18.0
Sep 4 17:09:52.824978 ignition[1456]: INFO : Stage: umount
Sep 4 17:09:52.829570 ignition[1456]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:09:52.829570 ignition[1456]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:09:52.829570 ignition[1456]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:09:52.836710 ignition[1456]: INFO : PUT result: OK
Sep 4 17:09:52.844015 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:09:52.847176 ignition[1456]: INFO : umount: umount passed
Sep 4 17:09:52.847176 ignition[1456]: INFO : Ignition finished successfully
Sep 4 17:09:52.852786 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:09:52.854024 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:09:52.860158 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:09:52.861998 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:09:52.866911 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:09:52.867034 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:09:52.869437 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:09:52.869528 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:09:52.870228 systemd[1]: Stopped target network.target - Network.
Sep 4 17:09:52.870515 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:09:52.870790 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:09:52.880070 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:09:52.882037 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:09:52.893777 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:09:52.896240 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:09:52.897946 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:09:52.899784 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:09:52.899872 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:09:52.904886 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:09:52.904974 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:09:52.907428 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:09:52.907537 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:09:52.911238 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:09:52.911335 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:09:52.913731 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:09:52.917688 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:09:52.919679 systemd-networkd[1208]: eth0: DHCPv6 lease lost
Sep 4 17:09:52.921820 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:09:52.922016 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:09:52.924699 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:09:52.924875 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:09:52.936082 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:09:52.936358 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:09:52.959980 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:09:52.961701 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:09:52.968867 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:09:52.968989 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:09:52.981800 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:09:52.985325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:09:52.985525 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:09:52.990438 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:09:52.990553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:09:52.995238 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:09:52.995355 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:09:53.004330 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:09:53.004441 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:09:53.007537 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:09:53.062344 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:09:53.062678 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:09:53.070272 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:09:53.070425 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:09:53.071729 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:09:53.071802 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:09:53.071938 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:09:53.072018 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:09:53.076122 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:09:53.076234 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:09:53.077344 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:09:53.077431 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:09:53.099090 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:09:53.118991 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:09:53.119328 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:09:53.127541 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:09:53.127697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:53.130843 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:09:53.131497 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:09:53.142852 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:09:53.143041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:09:53.151190 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:09:53.168107 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:09:53.187111 systemd[1]: Switching root.
Sep 4 17:09:53.222652 systemd-journald[250]: Journal stopped
Sep 4 17:09:56.352796 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:09:56.352945 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:09:56.352992 kernel: SELinux: policy capability open_perms=1
Sep 4 17:09:56.353035 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:09:56.353076 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:09:56.353116 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:09:56.353159 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:09:56.353190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:09:56.353221 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:09:56.353253 kernel: audit: type=1403 audit(1725469794.349:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:09:56.353289 systemd[1]: Successfully loaded SELinux policy in 55.695ms.
Sep 4 17:09:56.353329 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.601ms.
Sep 4 17:09:56.353369 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:09:56.353403 systemd[1]: Detected virtualization amazon.
Sep 4 17:09:56.353442 systemd[1]: Detected architecture arm64.
Sep 4 17:09:56.353483 systemd[1]: Detected first boot.
Sep 4 17:09:56.353516 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:09:56.353548 zram_generator::config[1499]: No configuration found.
Sep 4 17:09:56.353588 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:09:56.359192 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:09:56.359233 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:09:56.359271 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:09:56.359314 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:09:56.359349 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:09:56.359380 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:09:56.359411 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:09:56.359444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:09:56.359477 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:09:56.359510 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:09:56.359544 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:09:56.359575 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:09:56.359740 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:09:56.359782 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:09:56.359820 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:09:56.359855 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:09:56.359891 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:09:56.359924 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:09:56.359958 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:09:56.359992 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:09:56.360024 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:09:56.360065 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:09:56.360097 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:09:56.360132 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:09:56.360166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:09:56.360219 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:09:56.360259 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:09:56.360291 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:09:56.360328 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:09:56.360362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:09:56.360393 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:09:56.360424 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:09:56.360457 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:09:56.360488 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:09:56.360520 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:09:56.360551 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:09:56.360580 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:09:56.360656 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:09:56.360689 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:09:56.360723 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:09:56.360756 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:09:56.360790 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:09:56.360830 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:09:56.360862 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:09:56.360896 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:09:56.360927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:09:56.360964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:09:56.360995 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:09:56.361026 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:09:56.361055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:09:56.361089 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:09:56.361123 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:09:56.361153 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:09:56.361183 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:09:56.361219 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:09:56.361250 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:09:56.361282 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:09:56.361314 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:09:56.361344 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:09:56.361374 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:09:56.361406 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:09:56.361436 systemd[1]: Stopped verity-setup.service.
Sep 4 17:09:56.361467 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:09:56.361503 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:09:56.361534 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:09:56.361564 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:09:56.366362 kernel: fuse: init (API version 7.39)
Sep 4 17:09:56.366451 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:09:56.366580 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:09:56.366653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:09:56.366689 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:09:56.366721 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:09:56.366751 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:09:56.366782 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:09:56.366818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:09:56.366849 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:09:56.370734 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:09:56.370785 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:09:56.370821 kernel: loop: module loaded
Sep 4 17:09:56.370853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:09:56.370884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:09:56.370914 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:09:56.370945 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:09:56.370984 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:09:56.371017 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:09:56.371106 systemd-journald[1580]: Collecting audit messages is disabled.
Sep 4 17:09:56.371169 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:09:56.371201 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:09:56.371231 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:09:56.371266 systemd-journald[1580]: Journal started
Sep 4 17:09:56.371317 systemd-journald[1580]: Runtime Journal (/run/log/journal/ec297292763b592a2d1c76824a2b38ca) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:09:55.634249 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:09:55.725695 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:09:55.726776 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:09:56.380349 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:09:56.389846 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:09:56.413680 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:09:56.427628 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:09:56.427734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:09:56.446899 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:09:56.447014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:09:56.451654 kernel: ACPI: bus type drm_connector registered
Sep 4 17:09:56.465949 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:09:56.466079 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:09:56.488639 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:09:56.508060 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:09:56.517816 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:09:56.517345 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:09:56.517904 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:09:56.520585 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:09:56.525748 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:09:56.528558 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:09:56.531738 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:09:56.555695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:09:56.605438 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:09:56.618135 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:09:56.634209 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:09:56.664942 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:09:56.681861 kernel: loop0: detected capacity change from 0 to 59688
Sep 4 17:09:56.681964 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:09:56.720984 systemd-journald[1580]: Time spent on flushing to /var/log/journal/ec297292763b592a2d1c76824a2b38ca is 98.649ms for 910 entries.
Sep 4 17:09:56.720984 systemd-journald[1580]: System Journal (/var/log/journal/ec297292763b592a2d1c76824a2b38ca) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:09:56.848086 systemd-journald[1580]: Received client request to flush runtime journal.
Sep 4 17:09:56.848180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:09:56.730436 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:09:56.742722 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:09:56.746699 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:09:56.819088 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:09:56.839010 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:09:56.857825 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:09:56.872053 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:09:56.882528 kernel: loop1: detected capacity change from 0 to 51896
Sep 4 17:09:56.887112 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:09:56.916930 udevadm[1642]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:09:56.952260 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Sep 4 17:09:56.952305 systemd-tmpfiles[1646]: ACLs are not supported, ignoring.
Sep 4 17:09:56.964557 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:09:57.018724 kernel: loop2: detected capacity change from 0 to 113672
Sep 4 17:09:57.138653 kernel: loop3: detected capacity change from 0 to 194096
Sep 4 17:09:57.184037 kernel: loop4: detected capacity change from 0 to 59688
Sep 4 17:09:57.200676 kernel: loop5: detected capacity change from 0 to 51896
Sep 4 17:09:57.218645 kernel: loop6: detected capacity change from 0 to 113672
Sep 4 17:09:57.239874 kernel: loop7: detected capacity change from 0 to 194096
Sep 4 17:09:57.257228 (sd-merge)[1652]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:09:57.260375 (sd-merge)[1652]: Merged extensions into '/usr'.
Sep 4 17:09:57.271070 systemd[1]: Reloading requested from client PID 1609 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:09:57.271253 systemd[1]: Reloading...
Sep 4 17:09:57.399648 zram_generator::config[1673]: No configuration found.
Sep 4 17:09:57.872287 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:09:58.002802 systemd[1]: Reloading finished in 730 ms.
Sep 4 17:09:58.061883 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:09:58.065314 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:09:58.089011 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:09:58.101845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:09:58.122806 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:09:58.134066 systemd[1]: Reloading requested from client PID 1728 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:09:58.134991 systemd[1]: Reloading...
Sep 4 17:09:58.178972 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:09:58.180391 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:09:58.185456 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:09:58.188296 systemd-tmpfiles[1729]: ACLs are not supported, ignoring.
Sep 4 17:09:58.188747 systemd-tmpfiles[1729]: ACLs are not supported, ignoring.
Sep 4 17:09:58.209020 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:09:58.209056 systemd-tmpfiles[1729]: Skipping /boot
Sep 4 17:09:58.262987 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:09:58.264703 systemd-tmpfiles[1729]: Skipping /boot
Sep 4 17:09:58.267143 systemd-udevd[1730]: Using default interface naming scheme 'v255'.
Sep 4 17:09:58.377734 ldconfig[1602]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:09:58.381936 zram_generator::config[1758]: No configuration found.
Sep 4 17:09:58.584208 (udev-worker)[1774]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:09:58.596251 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1796)
Sep 4 17:09:58.848127 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:09:58.891277 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1778)
Sep 4 17:09:59.036313 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:09:59.038025 systemd[1]: Reloading finished in 901 ms.
Sep 4 17:09:59.073277 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:09:59.078733 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:09:59.109692 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:09:59.188704 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:09:59.216016 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:09:59.234821 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:09:59.237526 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:09:59.249376 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:09:59.257057 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:09:59.266040 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:09:59.274499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:09:59.278046 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:09:59.283309 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:09:59.295013 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:09:59.305970 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:09:59.308242 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:09:59.314731 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:09:59.324087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:09:59.412270 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:09:59.421012 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:09:59.434104 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:09:59.437326 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:09:59.439691 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:09:59.480644 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:09:59.484433 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:09:59.487893 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:09:59.492225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:09:59.515755 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:09:59.519706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:09:59.520147 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:09:59.539129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:09:59.542907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:09:59.545187 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:09:59.548696 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:09:59.549885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:09:59.559988 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:09:59.581970 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:09:59.584795 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:09:59.638585 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:09:59.655665 augenrules[1964]: No rules
Sep 4 17:09:59.658721 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:09:59.665728 lvm[1962]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:09:59.688370 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:09:59.703293 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:09:59.723375 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:09:59.727222 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:09:59.758018 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:09:59.761708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:09:59.806643 lvm[1981]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:09:59.845069 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:09:59.871705 systemd-networkd[1934]: lo: Link UP
Sep 4 17:09:59.871729 systemd-networkd[1934]: lo: Gained carrier
Sep 4 17:09:59.875174 systemd-networkd[1934]: Enumeration completed
Sep 4 17:09:59.875440 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:09:59.878362 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:59.878390 systemd-networkd[1934]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:09:59.880836 systemd-networkd[1934]: eth0: Link UP
Sep 4 17:09:59.881279 systemd-networkd[1934]: eth0: Gained carrier
Sep 4 17:09:59.881320 systemd-networkd[1934]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:09:59.887067 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:09:59.899768 systemd-networkd[1934]: eth0: DHCPv4 address 172.31.21.183/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:09:59.909468 systemd-resolved[1936]: Positive Trust Anchors:
Sep 4 17:09:59.910204 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:09:59.910281 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:09:59.919352 systemd-resolved[1936]: Defaulting to hostname 'linux'.
Sep 4 17:09:59.923763 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:09:59.926447 systemd[1]: Reached target network.target - Network.
Sep 4 17:09:59.929140 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:09:59.932205 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:09:59.934475 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:09:59.937103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:09:59.940790 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:09:59.943426 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:09:59.946321 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:09:59.948838 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:09:59.948919 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:09:59.951578 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:09:59.954917 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:09:59.960686 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:09:59.972686 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:09:59.976333 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:09:59.979094 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:09:59.981957 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:09:59.983901 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:09:59.983969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:09:59.990898 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:10:00.000811 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:10:00.016112 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:10:00.023878 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 4 17:10:00.036297 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:10:00.039266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:10:00.046215 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:10:00.058567 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:10:00.065953 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:10:00.078908 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:10:00.101160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:10:00.118792 jq[1992]: false Sep 4 17:10:00.118379 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:10:00.126363 dbus-daemon[1991]: [system] SELinux support is enabled Sep 4 17:10:00.141994 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:10:00.145990 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:10:00.146979 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:10:00.149056 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:10:00.163870 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:10:00.171969 dbus-daemon[1991]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1934 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:10:00.166227 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 4 17:10:00.188793 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:10:00.190743 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:10:00.208775 extend-filesystems[1993]: Found loop4 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found loop5 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found loop6 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found loop7 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p1 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p2 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p3 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found usr Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p4 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p6 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p7 Sep 4 17:10:00.212421 extend-filesystems[1993]: Found nvme0n1p9 Sep 4 17:10:00.212421 extend-filesystems[1993]: Checking size of /dev/nvme0n1p9 Sep 4 17:10:00.260329 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:10:00.260777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:10:00.260903 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:10:00.263938 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:10:00.264010 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:10:00.288061 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 4 17:10:00.313546 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:10:00.314260 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:10:00.340512 extend-filesystems[1993]: Resized partition /dev/nvme0n1p9 Sep 4 17:10:00.355657 extend-filesystems[2032]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:10:00.363420 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:10:00.365649 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:10:00.365733 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:10:00.374905 (ntainerd)[2024]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:10:00.377651 jq[2003]: true Sep 4 17:10:00.378118 update_engine[2002]: I0904 17:10:00.376777 2002 main.cc:92] Flatcar Update Engine starting Sep 4 17:10:00.383475 ntpd[1995]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: ---------------------------------------------------- Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: corporation. 
Support and training for ntp-4 are Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: available at https://www.nwtime.org/support Sep 4 17:10:00.388190 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: ---------------------------------------------------- Sep 4 17:10:00.383542 ntpd[1995]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:10:00.383565 ntpd[1995]: ---------------------------------------------------- Sep 4 17:10:00.383584 ntpd[1995]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:10:00.385788 ntpd[1995]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:10:00.385835 ntpd[1995]: corporation. Support and training for ntp-4 are Sep 4 17:10:00.385858 ntpd[1995]: available at https://www.nwtime.org/support Sep 4 17:10:00.391920 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: proto: precision = 0.108 usec (-23) Sep 4 17:10:00.385877 ntpd[1995]: ---------------------------------------------------- Sep 4 17:10:00.391363 ntpd[1995]: proto: precision = 0.108 usec (-23) Sep 4 17:10:00.393497 ntpd[1995]: basedate set to 2024-08-23 Sep 4 17:10:00.395813 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: basedate set to 2024-08-23 Sep 4 17:10:00.395813 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: gps base set to 2024-08-25 (week 2329) Sep 4 17:10:00.393547 ntpd[1995]: gps base set to 2024-08-25 (week 2329) Sep 4 17:10:00.396960 systemd[1]: Started update-engine.service - Update Engine. 
Sep 4 17:10:00.402668 update_engine[2002]: I0904 17:10:00.399955 2002 update_check_scheduler.cc:74] Next update check in 11m35s Sep 4 17:10:00.402805 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:10:00.402805 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:10:00.402805 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:10:00.402805 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listen normally on 3 eth0 172.31.21.183:123 Sep 4 17:10:00.402805 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listen normally on 4 lo [::1]:123 Sep 4 17:10:00.401924 ntpd[1995]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:10:00.402020 ntpd[1995]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:10:00.402316 ntpd[1995]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:10:00.402396 ntpd[1995]: Listen normally on 3 eth0 172.31.21.183:123 Sep 4 17:10:00.402479 ntpd[1995]: Listen normally on 4 lo [::1]:123 Sep 4 17:10:00.402573 ntpd[1995]: bind(21) AF_INET6 fe80::4b9:49ff:fe44:60df%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:10:00.403690 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: bind(21) AF_INET6 fe80::4b9:49ff:fe44:60df%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:10:00.403840 ntpd[1995]: unable to create socket on eth0 (5) for fe80::4b9:49ff:fe44:60df%2#123 Sep 4 17:10:00.404069 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: unable to create socket on eth0 (5) for fe80::4b9:49ff:fe44:60df%2#123 Sep 4 17:10:00.404161 ntpd[1995]: failed to init interface for address fe80::4b9:49ff:fe44:60df%2 Sep 4 17:10:00.404349 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: failed to init interface for address fe80::4b9:49ff:fe44:60df%2 Sep 4 17:10:00.404511 ntpd[1995]: Listening on routing socket on fd #21 for interface updates Sep 4 17:10:00.404684 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: Listening on routing socket on fd #21 for interface updates Sep 4 17:10:00.409412 
ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:10:00.409675 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:10:00.409785 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:10:00.409907 ntpd[1995]: 4 Sep 17:10:00 ntpd[1995]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:10:00.424032 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:10:00.473491 tar[2006]: linux-arm64/helm Sep 4 17:10:00.481665 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:10:00.487635 jq[2035]: true Sep 4 17:10:00.539933 extend-filesystems[2032]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:10:00.539933 extend-filesystems[2032]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:10:00.539933 extend-filesystems[2032]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:10:00.530363 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:10:00.565220 extend-filesystems[1993]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:10:00.531939 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:10:00.542138 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:10:00.711740 systemd-logind[2001]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:10:00.711787 systemd-logind[2001]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:10:00.723797 bash[2068]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:10:00.724315 systemd-logind[2001]: New seat seat0. Sep 4 17:10:00.764274 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:10:00.767275 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetch successful Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetch successful Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:10:00.828720 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetch successful Sep 4 17:10:00.829680 coreos-metadata[1990]: Sep 04 17:10:00.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.831 INFO Fetch successful Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.831 INFO Fetch failed with 404: resource not found Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.831 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.835 INFO Fetch successful Sep 4 17:10:00.836637 coreos-metadata[1990]: Sep 04 17:10:00.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:10:00.831763 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 4 17:10:00.838300 coreos-metadata[1990]: Sep 04 17:10:00.837 INFO Fetch successful Sep 4 17:10:00.838300 coreos-metadata[1990]: Sep 04 17:10:00.837 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:10:00.840798 coreos-metadata[1990]: Sep 04 17:10:00.839 INFO Fetch successful Sep 4 17:10:00.840798 coreos-metadata[1990]: Sep 04 17:10:00.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:10:00.841429 coreos-metadata[1990]: Sep 04 17:10:00.841 INFO Fetch successful Sep 4 17:10:00.841429 coreos-metadata[1990]: Sep 04 17:10:00.841 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:10:00.844799 coreos-metadata[1990]: Sep 04 17:10:00.844 INFO Fetch successful Sep 4 17:10:00.849050 systemd[1]: Starting sshkeys.service... Sep 4 17:10:00.878954 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1766) Sep 4 17:10:00.973818 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:10:00.974165 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:10:00.984868 dbus-daemon[1991]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2023 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:10:01.010963 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:10:01.021347 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:10:01.038384 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:10:01.089688 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Sep 4 17:10:01.094366 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:10:01.145870 systemd-networkd[1934]: eth0: Gained IPv6LL Sep 4 17:10:01.164424 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:10:01.171816 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:10:01.213367 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:10:01.226009 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:01.242787 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:10:01.259466 polkitd[2096]: Started polkitd version 121 Sep 4 17:10:01.289668 locksmithd[2039]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:10:01.354120 containerd[2024]: time="2024-09-04T17:10:01.303984671Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:10:01.334250 polkitd[2096]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:10:01.334384 polkitd[2096]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:10:01.365258 polkitd[2096]: Finished loading, compiling and executing 2 rules Sep 4 17:10:01.377771 dbus-daemon[1991]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:10:01.382475 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:10:01.395004 polkitd[2096]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:10:01.473850 containerd[2024]: time="2024-09-04T17:10:01.473686272Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:10:01.477639 containerd[2024]: time="2024-09-04T17:10:01.475677312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:10:01.499443 containerd[2024]: time="2024-09-04T17:10:01.499247160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:10:01.499668 containerd[2024]: time="2024-09-04T17:10:01.499627380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:10:01.500856 containerd[2024]: time="2024-09-04T17:10:01.500760636Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:10:01.501059 containerd[2024]: time="2024-09-04T17:10:01.501018708Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:10:01.502828 containerd[2024]: time="2024-09-04T17:10:01.501748668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.505128852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.505228308Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.505506792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.506075736Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.506130192Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:10:01.507041 containerd[2024]: time="2024-09-04T17:10:01.506156184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:10:01.514955 containerd[2024]: time="2024-09-04T17:10:01.510490188Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:10:01.514955 containerd[2024]: time="2024-09-04T17:10:01.512744664Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:10:01.514955 containerd[2024]: time="2024-09-04T17:10:01.514703016Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:10:01.514955 containerd[2024]: time="2024-09-04T17:10:01.514817724Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.539959692Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540051600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540085272Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540191580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540339036Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540372372Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540402192Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540762552Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540812916Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540845352Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540879792Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540914724Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540958980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.541646 containerd[2024]: time="2024-09-04T17:10:01.540994308Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541030212Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541065600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541105068Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541137336Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541165656Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:10:01.542376 containerd[2024]: time="2024-09-04T17:10:01.541524492Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:10:01.543762 containerd[2024]: time="2024-09-04T17:10:01.543439872Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:10:01.543762 containerd[2024]: time="2024-09-04T17:10:01.543518964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.543762 containerd[2024]: time="2024-09-04T17:10:01.543553752Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:10:01.543762 containerd[2024]: time="2024-09-04T17:10:01.543622908Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Sep 4 17:10:01.544330 containerd[2024]: time="2024-09-04T17:10:01.544126452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544494876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544550220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544581708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544670304Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544708524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544740396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544770012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.544809972Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.545171688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.545219868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.545256432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.545288916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.545395 containerd[2024]: time="2024-09-04T17:10:01.545319288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.546187 containerd[2024]: time="2024-09-04T17:10:01.546141540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.548316 containerd[2024]: time="2024-09-04T17:10:01.546676548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:10:01.548316 containerd[2024]: time="2024-09-04T17:10:01.546736236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:10:01.548511 containerd[2024]: time="2024-09-04T17:10:01.547309476Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:10:01.548511 containerd[2024]: time="2024-09-04T17:10:01.547438260Z" level=info msg="Connect containerd service"
Sep 4 17:10:01.548511 containerd[2024]: time="2024-09-04T17:10:01.547503696Z" level=info msg="using legacy CRI server"
Sep 4 17:10:01.548511 containerd[2024]: time="2024-09-04T17:10:01.547524120Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:10:01.548511 containerd[2024]: time="2024-09-04T17:10:01.547729728Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550432956Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550684908Z" level=info msg="Start subscribing containerd event"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550794408Z" level=info msg="Start recovering state"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550933032Z" level=info msg="Start event monitor"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550964016Z" level=info msg="Start snapshots syncer"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.550986528Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:10:01.551626 containerd[2024]: time="2024-09-04T17:10:01.551005788Z" level=info msg="Start streaming server"
Sep 4 17:10:01.552273 containerd[2024]: time="2024-09-04T17:10:01.552155004Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:10:01.552930 containerd[2024]: time="2024-09-04T17:10:01.552469728Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:10:01.552930 containerd[2024]: time="2024-09-04T17:10:01.552517548Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:10:01.552930 containerd[2024]: time="2024-09-04T17:10:01.552549276Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:10:01.554198 containerd[2024]: time="2024-09-04T17:10:01.553980636Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:10:01.554198 containerd[2024]: time="2024-09-04T17:10:01.554113644Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:10:01.555635 containerd[2024]: time="2024-09-04T17:10:01.554421672Z" level=info msg="containerd successfully booted in 0.270876s"
Sep 4 17:10:01.565036 amazon-ssm-agent[2132]: Initializing new seelog logger
Sep 4 17:10:01.565036 amazon-ssm-agent[2132]: New Seelog Logger Creation Complete
Sep 4 17:10:01.565036 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.565036 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.578078 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 processing appconfig overrides
Sep 4 17:10:01.574174 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:10:01.583311 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.583311 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.583311 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 processing appconfig overrides
Sep 4 17:10:01.581262 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:10:01.592764 systemd-hostnamed[2023]: Hostname set to (transient)
Sep 4 17:10:01.594538 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.594538 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.594538 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 processing appconfig overrides
Sep 4 17:10:01.594538 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO Proxy environment variables:
Sep 4 17:10:01.594710 systemd-resolved[1936]: System hostname changed to 'ip-172-31-21-183'.
Sep 4 17:10:01.602739 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.602739 amazon-ssm-agent[2132]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:10:01.602739 amazon-ssm-agent[2132]: 2024/09/04 17:10:01 processing appconfig overrides
Sep 4 17:10:01.658767 coreos-metadata[2091]: Sep 04 17:10:01.658 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Sep 4 17:10:01.663801 coreos-metadata[2091]: Sep 04 17:10:01.662 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Sep 4 17:10:01.666082 coreos-metadata[2091]: Sep 04 17:10:01.666 INFO Fetch successful
Sep 4 17:10:01.667858 coreos-metadata[2091]: Sep 04 17:10:01.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Sep 4 17:10:01.669728 coreos-metadata[2091]: Sep 04 17:10:01.669 INFO Fetch successful
Sep 4 17:10:01.680016 unknown[2091]: wrote ssh authorized keys file for user: core
Sep 4 17:10:01.697053 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO https_proxy:
Sep 4 17:10:01.754440 update-ssh-keys[2206]: Updated "/home/core/.ssh/authorized_keys"
Sep 4 17:10:01.760771 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 4 17:10:01.776180 systemd[1]: Finished sshkeys.service.
Sep 4 17:10:01.799658 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO http_proxy:
Sep 4 17:10:01.898847 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO no_proxy:
Sep 4 17:10:01.997727 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:10:02.096809 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:10:02.196698 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO Agent will take identity from EC2
Sep 4 17:10:02.232267 sshd_keygen[2034]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:10:02.296142 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:10:02.315688 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:10:02.329174 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:10:02.337724 systemd[1]: Started sshd@0-172.31.21.183:22-139.178.89.65:60566.service - OpenSSH per-connection server daemon (139.178.89.65:60566).
Sep 4 17:10:02.397644 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:10:02.409069 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:10:02.409541 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:10:02.424361 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:10:02.498180 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:10:02.495925 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:10:02.512363 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:10:02.525300 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:10:02.528237 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:10:02.598297 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:10:02.645696 sshd[2224]: Accepted publickey for core from 139.178.89.65 port 60566 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:02.652439 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:02.681422 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:10:02.692271 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:10:02.702858 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 4 17:10:02.709144 systemd-logind[2001]: New session 1 of user core.
Sep 4 17:10:02.751674 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:10:02.770203 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:10:02.796098 (systemd)[2236]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:02.802560 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:10:02.820102 tar[2006]: linux-arm64/LICENSE
Sep 4 17:10:02.820725 tar[2006]: linux-arm64/README.md
Sep 4 17:10:02.877745 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:10:02.903720 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:10:03.005690 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [Registrar] Starting registrar module
Sep 4 17:10:03.104521 amazon-ssm-agent[2132]: 2024-09-04 17:10:01 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:10:03.111059 systemd[2236]: Queued start job for default target default.target.
Sep 4 17:10:03.118471 systemd[2236]: Created slice app.slice - User Application Slice.
Sep 4 17:10:03.118549 systemd[2236]: Reached target paths.target - Paths.
Sep 4 17:10:03.118583 systemd[2236]: Reached target timers.target - Timers.
Sep 4 17:10:03.131922 systemd[2236]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:10:03.173564 systemd[2236]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:10:03.175344 systemd[2236]: Reached target sockets.target - Sockets.
Sep 4 17:10:03.175398 systemd[2236]: Reached target basic.target - Basic System.
Sep 4 17:10:03.175504 systemd[2236]: Reached target default.target - Main User Target.
Sep 4 17:10:03.175578 systemd[2236]: Startup finished in 356ms.
Sep 4 17:10:03.175846 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:10:03.186920 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:10:03.261801 amazon-ssm-agent[2132]: 2024-09-04 17:10:03 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:10:03.289363 amazon-ssm-agent[2132]: 2024-09-04 17:10:03 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:10:03.289363 amazon-ssm-agent[2132]: 2024-09-04 17:10:03 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:10:03.289363 amazon-ssm-agent[2132]: 2024-09-04 17:10:03 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:10:03.354330 systemd[1]: Started sshd@1-172.31.21.183:22-139.178.89.65:52120.service - OpenSSH per-connection server daemon (139.178.89.65:52120).
Sep 4 17:10:03.363872 amazon-ssm-agent[2132]: 2024-09-04 17:10:03 INFO [CredentialRefresher] Next credential rotation will be in 31.5499687484 minutes
Sep 4 17:10:03.387134 ntpd[1995]: Listen normally on 6 eth0 [fe80::4b9:49ff:fe44:60df%2]:123
Sep 4 17:10:03.387907 ntpd[1995]: 4 Sep 17:10:03 ntpd[1995]: Listen normally on 6 eth0 [fe80::4b9:49ff:fe44:60df%2]:123
Sep 4 17:10:03.460985 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:03.464567 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:10:03.467451 systemd[1]: Startup finished in 1.252s (kernel) + 9.529s (initrd) + 9.171s (userspace) = 19.953s.
Sep 4 17:10:03.478307 (kubelet)[2258]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:03.564916 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 52120 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:03.568746 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:03.581433 systemd-logind[2001]: New session 2 of user core.
Sep 4 17:10:03.588027 systemd[1]: Started session-2.scope - Session 2 of User core.
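The `Startup finished` entry above breaks boot time into kernel, initrd, and userspace phases and prints a total. A small sketch of parsing such a line and cross-checking the total (the regex and field names are my own, not systemd's):

```python
import re

def parse_startup(line: str) -> dict:
    """Extract per-phase boot times (seconds) from a systemd 'Startup finished' line."""
    phases = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    phases["total"] = float(re.search(r"= ([\d.]+)s", line).group(1))
    return phases

line = "Startup finished in 1.252s (kernel) + 9.529s (initrd) + 9.171s (userspace) = 19.953s."
t = parse_startup(line)
# systemd rounds each phase independently, so the sum can differ from the
# printed total by a millisecond or so, as it does here.
assert abs(t["kernel"] + t["initrd"] + t["userspace"] - t["total"]) < 0.01
```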
Sep 4 17:10:03.721816 sshd[2251]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:03.727952 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:10:03.729390 systemd[1]: sshd@1-172.31.21.183:22-139.178.89.65:52120.service: Deactivated successfully.
Sep 4 17:10:03.738183 systemd-logind[2001]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:10:03.751913 systemd-logind[2001]: Removed session 2.
Sep 4 17:10:03.757847 systemd[1]: Started sshd@2-172.31.21.183:22-139.178.89.65:52134.service - OpenSSH per-connection server daemon (139.178.89.65:52134).
Sep 4 17:10:03.938053 sshd[2272]: Accepted publickey for core from 139.178.89.65 port 52134 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:03.941970 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:03.956726 systemd-logind[2001]: New session 3 of user core.
Sep 4 17:10:03.966003 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:10:04.093722 sshd[2272]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:04.100822 systemd[1]: sshd@2-172.31.21.183:22-139.178.89.65:52134.service: Deactivated successfully.
Sep 4 17:10:04.106120 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:10:04.108005 systemd-logind[2001]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:10:04.111227 systemd-logind[2001]: Removed session 3.
Sep 4 17:10:04.133241 systemd[1]: Started sshd@3-172.31.21.183:22-139.178.89.65:52142.service - OpenSSH per-connection server daemon (139.178.89.65:52142).
Sep 4 17:10:04.313395 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 52142 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:04.315110 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:04.321755 kubelet[2258]: E0904 17:10:04.321107 2258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:04.327559 systemd-logind[2001]: New session 4 of user core.
Sep 4 17:10:04.336949 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:10:04.337722 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:04.338073 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:04.340085 systemd[1]: kubelet.service: Consumed 1.379s CPU time.
Sep 4 17:10:04.341830 amazon-ssm-agent[2132]: 2024-09-04 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:10:04.443749 amazon-ssm-agent[2132]: 2024-09-04 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2284) started
Sep 4 17:10:04.477208 sshd[2281]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:04.488777 systemd-logind[2001]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:10:04.490000 systemd[1]: sshd@3-172.31.21.183:22-139.178.89.65:52142.service: Deactivated successfully.
Sep 4 17:10:04.497543 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:10:04.512302 systemd-logind[2001]: Removed session 4.
Sep 4 17:10:04.533236 systemd[1]: Started sshd@4-172.31.21.183:22-139.178.89.65:52144.service - OpenSSH per-connection server daemon (139.178.89.65:52144).
Sep 4 17:10:04.544193 amazon-ssm-agent[2132]: 2024-09-04 17:10:04 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:10:04.715005 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 52144 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:04.718434 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:04.726971 systemd-logind[2001]: New session 5 of user core.
Sep 4 17:10:04.733909 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:10:04.900443 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:10:04.901042 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:04.917302 sudo[2303]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:04.941143 sshd[2296]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:04.948977 systemd[1]: sshd@4-172.31.21.183:22-139.178.89.65:52144.service: Deactivated successfully.
Sep 4 17:10:04.953138 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:10:04.955824 systemd-logind[2001]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:10:04.958167 systemd-logind[2001]: Removed session 5.
Sep 4 17:10:04.979181 systemd[1]: Started sshd@5-172.31.21.183:22-139.178.89.65:52148.service - OpenSSH per-connection server daemon (139.178.89.65:52148).
Sep 4 17:10:05.163082 sshd[2308]: Accepted publickey for core from 139.178.89.65 port 52148 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:05.165939 sshd[2308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:05.175155 systemd-logind[2001]: New session 6 of user core.
Sep 4 17:10:05.183938 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:10:05.291436 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:10:05.293417 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:05.301506 sudo[2312]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:05.313917 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:10:05.314510 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:05.342125 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:10:05.345657 auditctl[2315]: No rules
Sep 4 17:10:05.346332 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:10:05.346785 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:10:05.359326 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:10:05.408065 augenrules[2333]: No rules
Sep 4 17:10:05.410988 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:10:05.414056 sudo[2311]: pam_unix(sudo:session): session closed for user root
Sep 4 17:10:05.437975 sshd[2308]: pam_unix(sshd:session): session closed for user core
Sep 4 17:10:05.445425 systemd[1]: sshd@5-172.31.21.183:22-139.178.89.65:52148.service: Deactivated successfully.
Sep 4 17:10:05.450361 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:10:05.452481 systemd-logind[2001]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:10:05.454456 systemd-logind[2001]: Removed session 6.
Sep 4 17:10:05.484164 systemd[1]: Started sshd@6-172.31.21.183:22-139.178.89.65:52152.service - OpenSSH per-connection server daemon (139.178.89.65:52152).
Sep 4 17:10:05.659484 sshd[2341]: Accepted publickey for core from 139.178.89.65 port 52152 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:10:05.662587 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:10:05.672567 systemd-logind[2001]: New session 7 of user core.
Sep 4 17:10:05.680985 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:10:05.790047 sudo[2344]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:10:05.790665 sudo[2344]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:10:06.048195 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:10:06.067166 (dockerd)[2353]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:10:06.538830 dockerd[2353]: time="2024-09-04T17:10:06.538575425Z" level=info msg="Starting up"
Sep 4 17:10:06.711873 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport106611548-merged.mount: Deactivated successfully.
Sep 4 17:10:07.309164 dockerd[2353]: time="2024-09-04T17:10:07.308656889Z" level=info msg="Loading containers: start."
Sep 4 17:10:07.490631 kernel: Initializing XFRM netlink socket
Sep 4 17:10:07.567872 (udev-worker)[2367]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:10:07.676824 systemd-networkd[1934]: docker0: Link UP
Sep 4 17:10:07.705337 dockerd[2353]: time="2024-09-04T17:10:07.705265735Z" level=info msg="Loading containers: done."
Sep 4 17:10:07.856706 dockerd[2353]: time="2024-09-04T17:10:07.856102928Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:10:07.856706 dockerd[2353]: time="2024-09-04T17:10:07.856421744Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:10:07.857497 dockerd[2353]: time="2024-09-04T17:10:07.857000204Z" level=info msg="Daemon has completed initialization"
Sep 4 17:10:07.922607 dockerd[2353]: time="2024-09-04T17:10:07.922501640Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:10:07.925049 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:10:09.242913 containerd[2024]: time="2024-09-04T17:10:09.242400360Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\""
Sep 4 17:10:09.958529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1748712975.mount: Deactivated successfully.
Sep 4 17:10:12.780421 containerd[2024]: time="2024-09-04T17:10:12.780353857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:12.782519 containerd[2024]: time="2024-09-04T17:10:12.782445135Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.4: active requests=0, bytes read=29943740"
Sep 4 17:10:12.784082 containerd[2024]: time="2024-09-04T17:10:12.784020838Z" level=info msg="ImageCreate event name:\"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:12.792645 containerd[2024]: time="2024-09-04T17:10:12.792526825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:12.795571 containerd[2024]: time="2024-09-04T17:10:12.794951785Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.4\" with image id \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\", size \"29940540\" in 3.552492919s"
Sep 4 17:10:12.795571 containerd[2024]: time="2024-09-04T17:10:12.795012559Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.4\" returns image reference \"sha256:4fb024d2ca524db9b4b792ebc761ca44654c17ab90984a968b5276a64dbcc1ff\""
Sep 4 17:10:12.834619 containerd[2024]: time="2024-09-04T17:10:12.834459017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\""
Sep 4 17:10:14.588450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:10:14.594989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:16.328139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:16.341340 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:16.459019 kubelet[2557]: E0904 17:10:16.458722 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:16.467976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:16.468372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:17.217942 containerd[2024]: time="2024-09-04T17:10:17.217861545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:17.225655 containerd[2024]: time="2024-09-04T17:10:17.223462981Z" level=info msg="ImageCreate event name:\"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:17.225655 containerd[2024]: time="2024-09-04T17:10:17.223663985Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.4: active requests=0, bytes read=26881132"
Sep 4 17:10:17.231876 containerd[2024]: time="2024-09-04T17:10:17.231794208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:17.234572 containerd[2024]: time="2024-09-04T17:10:17.234493229Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.4\" with image id \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\", size \"28368399\" in 4.399799255s"
Sep 4 17:10:17.234572 containerd[2024]: time="2024-09-04T17:10:17.234565469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.4\" returns image reference \"sha256:4316ad972d94918481885d608f381e51d1e8d84458354f6240668016b5e9d6f5\""
Sep 4 17:10:17.275353 containerd[2024]: time="2024-09-04T17:10:17.275306954Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\""
Sep 4 17:10:19.626638 containerd[2024]: time="2024-09-04T17:10:19.626499471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:19.628723 containerd[2024]: time="2024-09-04T17:10:19.628645988Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.4: active requests=0, bytes read=16154063"
Sep 4 17:10:19.631049 containerd[2024]: time="2024-09-04T17:10:19.630977445Z" level=info msg="ImageCreate event name:\"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:19.636700 containerd[2024]: time="2024-09-04T17:10:19.636552611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:19.639659 containerd[2024]: time="2024-09-04T17:10:19.639058744Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.4\" with image id \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\", size \"17641348\" in 2.363360562s"
Sep 4 17:10:19.639659 containerd[2024]: time="2024-09-04T17:10:19.639121487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.4\" returns image reference \"sha256:b0931aa794b8d14cc252b442a71c1d3e87f4781c2bbae23ebb37d18c9ee9acfe\""
Sep 4 17:10:19.679894 containerd[2024]: time="2024-09-04T17:10:19.679836847Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\""
Sep 4 17:10:21.769375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568156380.mount: Deactivated successfully.
Sep 4 17:10:22.332748 containerd[2024]: time="2024-09-04T17:10:22.332082878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:22.333888 containerd[2024]: time="2024-09-04T17:10:22.333656912Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.4: active requests=0, bytes read=25646047"
Sep 4 17:10:22.335755 containerd[2024]: time="2024-09-04T17:10:22.335625428Z" level=info msg="ImageCreate event name:\"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:22.340336 containerd[2024]: time="2024-09-04T17:10:22.340204060Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:22.342195 containerd[2024]: time="2024-09-04T17:10:22.341893713Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.4\" with image id \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\", size \"25645066\" in 2.661990916s"
Sep 4 17:10:22.342195 containerd[2024]: time="2024-09-04T17:10:22.341979387Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.4\" returns image reference \"sha256:7fdda55d346bc23daec633f684e5ec2c91bd1469a5e006bdf45d15fbeb8dacdc\""
Sep 4 17:10:22.385226 containerd[2024]: time="2024-09-04T17:10:22.385150443Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:10:23.025265 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293722615.mount: Deactivated successfully.
Sep 4 17:10:24.913666 containerd[2024]: time="2024-09-04T17:10:24.913489522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:24.916507 containerd[2024]: time="2024-09-04T17:10:24.916320981Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Sep 4 17:10:24.917984 containerd[2024]: time="2024-09-04T17:10:24.917857185Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:24.938785 containerd[2024]: time="2024-09-04T17:10:24.938336635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:24.940294 containerd[2024]: time="2024-09-04T17:10:24.940207338Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.554987393s"
Sep 4 17:10:24.940294 containerd[2024]: time="2024-09-04T17:10:24.940283636Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Sep 4 17:10:24.987163 containerd[2024]: time="2024-09-04T17:10:24.987040355Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:10:25.534946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611336093.mount: Deactivated successfully.
Sep 4 17:10:25.545142 containerd[2024]: time="2024-09-04T17:10:25.545050773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:25.547194 containerd[2024]: time="2024-09-04T17:10:25.546910502Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Sep 4 17:10:25.549004 containerd[2024]: time="2024-09-04T17:10:25.548884901Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:25.554281 containerd[2024]: time="2024-09-04T17:10:25.554154875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:10:25.556865 containerd[2024]: time="2024-09-04T17:10:25.556152949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 569.041362ms"
Sep 4 17:10:25.556865 containerd[2024]: time="2024-09-04T17:10:25.556229944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Sep 4 17:10:25.598662 containerd[2024]: time="2024-09-04T17:10:25.598523348Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Sep 4 17:10:26.196930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2980703388.mount: Deactivated successfully.
Sep 4 17:10:26.472508 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 4 17:10:26.483933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:10:28.144942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:10:28.155291 (kubelet)[2691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:10:28.265169 kubelet[2691]: E0904 17:10:28.264947 2691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:10:28.269499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:10:28.270281 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:10:31.604516 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 4 17:10:33.081622 containerd[2024]: time="2024-09-04T17:10:33.079706275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:10:33.118745 containerd[2024]: time="2024-09-04T17:10:33.118659586Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Sep 4 17:10:33.156901 containerd[2024]: time="2024-09-04T17:10:33.156828845Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:10:33.213513 containerd[2024]: time="2024-09-04T17:10:33.213436259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:10:33.216858 containerd[2024]: time="2024-09-04T17:10:33.216788202Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 7.617890591s" Sep 4 17:10:33.216858 containerd[2024]: time="2024-09-04T17:10:33.216853071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Sep 4 17:10:38.472457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:10:38.482751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:39.611876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:10:39.612320 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:10:39.707653 kubelet[2778]: E0904 17:10:39.704837 2778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:10:39.708974 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:10:39.709326 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:10:40.103520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:10:40.115149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:40.160752 systemd[1]: Reloading requested from client PID 2792 ('systemctl') (unit session-7.scope)... Sep 4 17:10:40.160777 systemd[1]: Reloading... Sep 4 17:10:40.336653 zram_generator::config[2831]: No configuration found. Sep 4 17:10:40.586877 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:10:40.773579 systemd[1]: Reloading finished in 612 ms. Sep 4 17:10:40.846287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:10:40.846516 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:10:40.847133 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:10:40.857306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:42.112578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:10:42.131184 (kubelet)[2890]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:10:42.204664 kubelet[2890]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:10:42.204664 kubelet[2890]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:10:42.204664 kubelet[2890]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:10:42.205245 kubelet[2890]: I0904 17:10:42.204794 2890 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:10:43.942916 kubelet[2890]: I0904 17:10:43.942850 2890 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:10:43.942916 kubelet[2890]: I0904 17:10:43.942901 2890 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:10:43.943621 kubelet[2890]: I0904 17:10:43.943248 2890 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:10:43.972393 kubelet[2890]: E0904 17:10:43.972348 2890 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:43.973096 kubelet[2890]: I0904 17:10:43.972876 2890 dynamic_cafile_content.go:157] 
"Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:10:43.991516 kubelet[2890]: I0904 17:10:43.991469 2890 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:10:43.994671 kubelet[2890]: I0904 17:10:43.994022 2890 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:10:43.994671 kubelet[2890]: I0904 17:10:43.994105 2890 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-183","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000
,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:10:43.994671 kubelet[2890]: I0904 17:10:43.994428 2890 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:10:43.994671 kubelet[2890]: I0904 17:10:43.994452 2890 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:10:43.995113 kubelet[2890]: I0904 17:10:43.994750 2890 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:43.996456 kubelet[2890]: I0904 17:10:43.996383 2890 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:10:43.996456 kubelet[2890]: I0904 17:10:43.996440 2890 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:10:43.996710 kubelet[2890]: I0904 17:10:43.996538 2890 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:10:43.996710 kubelet[2890]: I0904 17:10:43.996588 2890 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:10:44.000649 kubelet[2890]: W0904 17:10:43.998723 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-183&limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.000649 kubelet[2890]: E0904 17:10:43.998854 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-183&limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.000649 kubelet[2890]: W0904 17:10:43.999011 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.000649 kubelet[2890]: E0904 17:10:43.999074 2890 
reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.000649 kubelet[2890]: I0904 17:10:43.999257 2890 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:10:44.000649 kubelet[2890]: I0904 17:10:43.999660 2890 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:10:44.000649 kubelet[2890]: W0904 17:10:43.999755 2890 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:10:44.001807 kubelet[2890]: I0904 17:10:44.001767 2890 server.go:1264] "Started kubelet" Sep 4 17:10:44.010610 kubelet[2890]: E0904 17:10:44.010351 2890 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.183:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-183.17f219ae625f324a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-183,UID:ip-172-31-21-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-183,},FirstTimestamp:2024-09-04 17:10:44.001731146 +0000 UTC m=+1.864284106,LastTimestamp:2024-09-04 17:10:44.001731146 +0000 UTC m=+1.864284106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-183,}" Sep 4 17:10:44.011518 kubelet[2890]: I0904 17:10:44.011472 2890 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:10:44.015557 kubelet[2890]: E0904 17:10:44.015499 2890 
kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:10:44.018665 kubelet[2890]: I0904 17:10:44.018553 2890 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:10:44.021914 kubelet[2890]: I0904 17:10:44.020387 2890 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:10:44.022257 kubelet[2890]: I0904 17:10:44.022157 2890 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:10:44.022706 kubelet[2890]: I0904 17:10:44.022549 2890 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:10:44.024172 kubelet[2890]: I0904 17:10:44.022992 2890 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:10:44.024172 kubelet[2890]: I0904 17:10:44.023778 2890 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:10:44.024172 kubelet[2890]: I0904 17:10:44.023918 2890 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:10:44.026691 kubelet[2890]: W0904 17:10:44.026565 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.027342 kubelet[2890]: E0904 17:10:44.027313 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.21.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.027795 kubelet[2890]: I0904 17:10:44.027757 2890 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:10:44.028133 kubelet[2890]: I0904 17:10:44.028091 2890 factory.go:219] 
Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:10:44.028880 kubelet[2890]: E0904 17:10:44.028821 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": dial tcp 172.31.21.183:6443: connect: connection refused" interval="200ms" Sep 4 17:10:44.031140 kubelet[2890]: I0904 17:10:44.031093 2890 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:10:44.057854 kubelet[2890]: I0904 17:10:44.057797 2890 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:10:44.061855 kubelet[2890]: I0904 17:10:44.061803 2890 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:10:44.062153 kubelet[2890]: I0904 17:10:44.062133 2890 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:10:44.062319 kubelet[2890]: I0904 17:10:44.062300 2890 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:10:44.068346 kubelet[2890]: E0904 17:10:44.068299 2890 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:10:44.070354 kubelet[2890]: W0904 17:10:44.070305 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.076561 kubelet[2890]: E0904 17:10:44.076513 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.21.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.080808 kubelet[2890]: I0904 17:10:44.080519 2890 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:10:44.082839 kubelet[2890]: I0904 17:10:44.082797 2890 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:10:44.083047 kubelet[2890]: I0904 17:10:44.083027 2890 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:44.125858 kubelet[2890]: I0904 17:10:44.125821 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:44.126892 kubelet[2890]: E0904 17:10:44.126824 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.183:6443/api/v1/nodes\": dial tcp 172.31.21.183:6443: connect: connection refused" node="ip-172-31-21-183" Sep 4 17:10:44.148279 kubelet[2890]: I0904 17:10:44.148150 2890 policy_none.go:49] "None policy: Start" Sep 4 17:10:44.149469 kubelet[2890]: I0904 17:10:44.149438 2890 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:10:44.149815 kubelet[2890]: I0904 17:10:44.149671 2890 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:10:44.170956 kubelet[2890]: E0904 17:10:44.170886 2890 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:10:44.208093 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 4 17:10:44.230577 kubelet[2890]: E0904 17:10:44.230504 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": dial tcp 172.31.21.183:6443: connect: connection refused" interval="400ms" Sep 4 17:10:44.231609 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:10:44.241034 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:10:44.250542 kubelet[2890]: I0904 17:10:44.249854 2890 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:10:44.250542 kubelet[2890]: I0904 17:10:44.250184 2890 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:10:44.250542 kubelet[2890]: I0904 17:10:44.250343 2890 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:10:44.254275 kubelet[2890]: E0904 17:10:44.254039 2890 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-183\" not found" Sep 4 17:10:44.329437 kubelet[2890]: I0904 17:10:44.329383 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:44.330005 kubelet[2890]: E0904 17:10:44.329936 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.183:6443/api/v1/nodes\": dial tcp 172.31.21.183:6443: connect: connection refused" node="ip-172-31-21-183" Sep 4 17:10:44.372185 kubelet[2890]: I0904 17:10:44.372104 2890 topology_manager.go:215] "Topology Admit Handler" podUID="33cc23d4387157bb170a7532d99d69c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-183" Sep 4 17:10:44.374561 kubelet[2890]: I0904 17:10:44.374209 2890 topology_manager.go:215] 
"Topology Admit Handler" podUID="b80bd47dd1b10205ea1003be188e48fe" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.377448 kubelet[2890]: I0904 17:10:44.377006 2890 topology_manager.go:215] "Topology Admit Handler" podUID="f323ec47a7bddd9bfd275186928feded" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-183" Sep 4 17:10:44.394214 systemd[1]: Created slice kubepods-burstable-pod33cc23d4387157bb170a7532d99d69c7.slice - libcontainer container kubepods-burstable-pod33cc23d4387157bb170a7532d99d69c7.slice. Sep 4 17:10:44.427302 systemd[1]: Created slice kubepods-burstable-podb80bd47dd1b10205ea1003be188e48fe.slice - libcontainer container kubepods-burstable-podb80bd47dd1b10205ea1003be188e48fe.slice. Sep 4 17:10:44.437733 systemd[1]: Created slice kubepods-burstable-podf323ec47a7bddd9bfd275186928feded.slice - libcontainer container kubepods-burstable-podf323ec47a7bddd9bfd275186928feded.slice. Sep 4 17:10:44.527505 kubelet[2890]: I0904 17:10:44.527380 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.527505 kubelet[2890]: I0904 17:10:44.527458 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:44.527505 kubelet[2890]: I0904 17:10:44.527506 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.527889 kubelet[2890]: I0904 17:10:44.527544 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.527889 kubelet[2890]: I0904 17:10:44.527583 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.527889 kubelet[2890]: I0904 17:10:44.527657 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-ca-certs\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:44.527889 kubelet[2890]: I0904 17:10:44.527693 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:44.527889 kubelet[2890]: I0904 17:10:44.527728 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:44.528153 kubelet[2890]: I0904 17:10:44.527765 2890 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f323ec47a7bddd9bfd275186928feded-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-183\" (UID: \"f323ec47a7bddd9bfd275186928feded\") " pod="kube-system/kube-scheduler-ip-172-31-21-183" Sep 4 17:10:44.631530 kubelet[2890]: E0904 17:10:44.631467 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": dial tcp 172.31.21.183:6443: connect: connection refused" interval="800ms" Sep 4 17:10:44.720683 containerd[2024]: time="2024-09-04T17:10:44.720617666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-183,Uid:33cc23d4387157bb170a7532d99d69c7,Namespace:kube-system,Attempt:0,}" Sep 4 17:10:44.732481 kubelet[2890]: I0904 17:10:44.732428 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:44.733010 kubelet[2890]: E0904 17:10:44.732949 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.183:6443/api/v1/nodes\": dial tcp 172.31.21.183:6443: connect: connection refused" node="ip-172-31-21-183" Sep 4 17:10:44.735239 containerd[2024]: time="2024-09-04T17:10:44.734789417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-183,Uid:b80bd47dd1b10205ea1003be188e48fe,Namespace:kube-system,Attempt:0,}" Sep 4 17:10:44.743616 containerd[2024]: time="2024-09-04T17:10:44.743534395Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-183,Uid:f323ec47a7bddd9bfd275186928feded,Namespace:kube-system,Attempt:0,}" Sep 4 17:10:44.831065 kubelet[2890]: W0904 17:10:44.830854 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-183&limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.831298 kubelet[2890]: E0904 17:10:44.831253 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.21.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-183&limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.974006 kubelet[2890]: W0904 17:10:44.973905 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:44.974006 kubelet[2890]: E0904 17:10:44.973971 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.21.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:45.147869 kubelet[2890]: W0904 17:10:45.147674 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:45.147869 kubelet[2890]: E0904 17:10:45.147773 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://172.31.21.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:45.188792 kubelet[2890]: W0904 17:10:45.188698 2890 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:45.188792 kubelet[2890]: E0904 17:10:45.188799 2890 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.21.183:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:45.288907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292295284.mount: Deactivated successfully. Sep 4 17:10:45.301261 containerd[2024]: time="2024-09-04T17:10:45.301182664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:10:45.305615 containerd[2024]: time="2024-09-04T17:10:45.305526279Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:10:45.307667 containerd[2024]: time="2024-09-04T17:10:45.306799431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:10:45.308940 containerd[2024]: time="2024-09-04T17:10:45.308750766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:10:45.310175 containerd[2024]: time="2024-09-04T17:10:45.310080887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Sep 4 17:10:45.311666 containerd[2024]: time="2024-09-04T17:10:45.311576979Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:10:45.313348 containerd[2024]: time="2024-09-04T17:10:45.313252284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:10:45.322127 containerd[2024]: time="2024-09-04T17:10:45.321970045Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:10:45.324736 containerd[2024]: time="2024-09-04T17:10:45.324186292Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.415021ms" Sep 4 17:10:45.328394 containerd[2024]: time="2024-09-04T17:10:45.328299896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 593.371894ms" Sep 4 17:10:45.345257 containerd[2024]: time="2024-09-04T17:10:45.345169287Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 601.442555ms" Sep 4 17:10:45.432456 kubelet[2890]: E0904 17:10:45.432270 2890 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": dial tcp 172.31.21.183:6443: connect: connection refused" interval="1.6s" Sep 4 17:10:45.514751 update_engine[2002]: I0904 17:10:45.514658 2002 update_attempter.cc:509] Updating boot flags... Sep 4 17:10:45.542509 kubelet[2890]: I0904 17:10:45.542073 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:45.542878 kubelet[2890]: E0904 17:10:45.542588 2890 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.21.183:6443/api/v1/nodes\": dial tcp 172.31.21.183:6443: connect: connection refused" node="ip-172-31-21-183" Sep 4 17:10:45.627792 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2951) Sep 4 17:10:45.691002 containerd[2024]: time="2024-09-04T17:10:45.689962327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:10:45.691002 containerd[2024]: time="2024-09-04T17:10:45.690108656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.691002 containerd[2024]: time="2024-09-04T17:10:45.690155228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:10:45.691002 containerd[2024]: time="2024-09-04T17:10:45.690189565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.738834 containerd[2024]: time="2024-09-04T17:10:45.735585896Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:10:45.747830 containerd[2024]: time="2024-09-04T17:10:45.743134428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.747830 containerd[2024]: time="2024-09-04T17:10:45.743208421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:10:45.747830 containerd[2024]: time="2024-09-04T17:10:45.743235303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.760256 containerd[2024]: time="2024-09-04T17:10:45.760034074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:10:45.760256 containerd[2024]: time="2024-09-04T17:10:45.760150088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.760256 containerd[2024]: time="2024-09-04T17:10:45.760193550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:10:45.760663 containerd[2024]: time="2024-09-04T17:10:45.760227899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:10:45.862218 systemd[1]: Started cri-containerd-cbf27b3a5e8b4bb08b80e25d3bc5cec9e3e29308aab340415e0319412dcc112d.scope - libcontainer container cbf27b3a5e8b4bb08b80e25d3bc5cec9e3e29308aab340415e0319412dcc112d. 
Sep 4 17:10:45.884250 systemd[1]: Started cri-containerd-a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0.scope - libcontainer container a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0. Sep 4 17:10:46.037956 systemd[1]: Started cri-containerd-6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71.scope - libcontainer container 6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71. Sep 4 17:10:46.081663 kubelet[2890]: E0904 17:10:46.077003 2890 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.21.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.21.183:6443: connect: connection refused Sep 4 17:10:46.129933 containerd[2024]: time="2024-09-04T17:10:46.129866793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-183,Uid:33cc23d4387157bb170a7532d99d69c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"cbf27b3a5e8b4bb08b80e25d3bc5cec9e3e29308aab340415e0319412dcc112d\"" Sep 4 17:10:46.146775 containerd[2024]: time="2024-09-04T17:10:46.146387697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-183,Uid:b80bd47dd1b10205ea1003be188e48fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0\"" Sep 4 17:10:46.147246 containerd[2024]: time="2024-09-04T17:10:46.147016079Z" level=info msg="CreateContainer within sandbox \"cbf27b3a5e8b4bb08b80e25d3bc5cec9e3e29308aab340415e0319412dcc112d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:10:46.158886 containerd[2024]: time="2024-09-04T17:10:46.158524082Z" level=info msg="CreateContainer within sandbox \"a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:10:46.191918 containerd[2024]: time="2024-09-04T17:10:46.191785817Z" level=info msg="CreateContainer within sandbox \"cbf27b3a5e8b4bb08b80e25d3bc5cec9e3e29308aab340415e0319412dcc112d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e9b7e11d68b4cea2ef3f435f03ce39faddae8085d376cab75c693a7c9799f250\"" Sep 4 17:10:46.195497 containerd[2024]: time="2024-09-04T17:10:46.195384448Z" level=info msg="StartContainer for \"e9b7e11d68b4cea2ef3f435f03ce39faddae8085d376cab75c693a7c9799f250\"" Sep 4 17:10:46.200399 containerd[2024]: time="2024-09-04T17:10:46.200331220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-183,Uid:f323ec47a7bddd9bfd275186928feded,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71\"" Sep 4 17:10:46.206529 containerd[2024]: time="2024-09-04T17:10:46.206467054Z" level=info msg="CreateContainer within sandbox \"6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:10:46.214006 containerd[2024]: time="2024-09-04T17:10:46.213932121Z" level=info msg="CreateContainer within sandbox \"a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f\"" Sep 4 17:10:46.214746 containerd[2024]: time="2024-09-04T17:10:46.214695234Z" level=info msg="StartContainer for \"a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f\"" Sep 4 17:10:46.237784 containerd[2024]: time="2024-09-04T17:10:46.237519637Z" level=info msg="CreateContainer within sandbox \"6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472\"" Sep 4 17:10:46.241831 containerd[2024]: time="2024-09-04T17:10:46.239553033Z" level=info msg="StartContainer for \"ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472\"" Sep 4 17:10:46.319719 systemd[1]: Started cri-containerd-e9b7e11d68b4cea2ef3f435f03ce39faddae8085d376cab75c693a7c9799f250.scope - libcontainer container e9b7e11d68b4cea2ef3f435f03ce39faddae8085d376cab75c693a7c9799f250. Sep 4 17:10:46.337810 systemd[1]: run-containerd-runc-k8s.io-ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472-runc.GsLnIT.mount: Deactivated successfully. Sep 4 17:10:46.348955 systemd[1]: Started cri-containerd-a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f.scope - libcontainer container a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f. Sep 4 17:10:46.363418 systemd[1]: Started cri-containerd-ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472.scope - libcontainer container ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472. 
Sep 4 17:10:46.490938 containerd[2024]: time="2024-09-04T17:10:46.490860418Z" level=info msg="StartContainer for \"e9b7e11d68b4cea2ef3f435f03ce39faddae8085d376cab75c693a7c9799f250\" returns successfully" Sep 4 17:10:46.510698 containerd[2024]: time="2024-09-04T17:10:46.509798706Z" level=info msg="StartContainer for \"a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f\" returns successfully" Sep 4 17:10:46.520731 containerd[2024]: time="2024-09-04T17:10:46.520654292Z" level=info msg="StartContainer for \"ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472\" returns successfully" Sep 4 17:10:47.149642 kubelet[2890]: I0904 17:10:47.146504 2890 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:51.001517 kubelet[2890]: I0904 17:10:51.001114 2890 apiserver.go:52] "Watching apiserver" Sep 4 17:10:51.105893 kubelet[2890]: E0904 17:10:51.105780 2890 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-21-183\" not found" node="ip-172-31-21-183" Sep 4 17:10:51.126128 kubelet[2890]: I0904 17:10:51.125858 2890 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:10:51.175642 kubelet[2890]: E0904 17:10:51.174375 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-183.17f219ae625f324a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-183,UID:ip-172-31-21-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-183,},FirstTimestamp:2024-09-04 17:10:44.001731146 +0000 UTC m=+1.864284106,LastTimestamp:2024-09-04 17:10:44.001731146 +0000 UTC m=+1.864284106,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-183,}" Sep 4 17:10:51.231414 kubelet[2890]: E0904 17:10:51.230466 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-183.17f219ae6330ecbf default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-183,UID:ip-172-31-21-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-21-183,},FirstTimestamp:2024-09-04 17:10:44.015475903 +0000 UTC m=+1.878028899,LastTimestamp:2024-09-04 17:10:44.015475903 +0000 UTC m=+1.878028899,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-183,}" Sep 4 17:10:51.288878 kubelet[2890]: I0904 17:10:51.288479 2890 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-183" Sep 4 17:10:51.330942 kubelet[2890]: E0904 17:10:51.330459 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-183.17f219ae66f5ce2d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-183,UID:ip-172-31-21-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-21-183 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-21-183,},FirstTimestamp:2024-09-04 17:10:44.078710317 +0000 UTC m=+1.941263277,LastTimestamp:2024-09-04 17:10:44.078710317 +0000 UTC m=+1.941263277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-183,}" Sep 4 17:10:51.434337 
kubelet[2890]: E0904 17:10:51.433916 2890 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-183.17f219ae66f63019 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-183,UID:ip-172-31-21-183,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ip-172-31-21-183 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ip-172-31-21-183,},FirstTimestamp:2024-09-04 17:10:44.078735385 +0000 UTC m=+1.941288345,LastTimestamp:2024-09-04 17:10:44.078735385 +0000 UTC m=+1.941288345,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-183,}" Sep 4 17:10:53.596846 systemd[1]: Reloading requested from client PID 3264 ('systemctl') (unit session-7.scope)... Sep 4 17:10:53.596877 systemd[1]: Reloading... Sep 4 17:10:53.851853 zram_generator::config[3302]: No configuration found. Sep 4 17:10:54.131569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:10:54.343850 systemd[1]: Reloading finished in 746 ms. Sep 4 17:10:54.431382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:54.452314 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:10:54.452825 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:10:54.452923 systemd[1]: kubelet.service: Consumed 2.637s CPU time, 114.5M memory peak, 0B memory swap peak. Sep 4 17:10:54.459254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:10:54.965689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 4 17:10:54.982365 (kubelet)[3368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:10:55.090292 kubelet[3368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:10:55.090292 kubelet[3368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:10:55.090292 kubelet[3368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:10:55.092646 kubelet[3368]: I0904 17:10:55.090992 3368 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:10:55.100213 kubelet[3368]: I0904 17:10:55.100170 3368 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Sep 4 17:10:55.100386 kubelet[3368]: I0904 17:10:55.100367 3368 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:10:55.101109 kubelet[3368]: I0904 17:10:55.101073 3368 server.go:927] "Client rotation is on, will bootstrap in background" Sep 4 17:10:55.103782 kubelet[3368]: I0904 17:10:55.103745 3368 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:10:55.106412 kubelet[3368]: I0904 17:10:55.106348 3368 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:10:55.128422 kubelet[3368]: I0904 17:10:55.128359 3368 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:10:55.129237 kubelet[3368]: I0904 17:10:55.129188 3368 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:10:55.129980 kubelet[3368]: I0904 17:10:55.129378 3368 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-183","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:10:55.129980 kubelet[3368]: I0904 17:10:55.129732 3368 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 
17:10:55.129980 kubelet[3368]: I0904 17:10:55.129752 3368 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:10:55.129980 kubelet[3368]: I0904 17:10:55.129813 3368 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:55.131686 kubelet[3368]: I0904 17:10:55.130370 3368 kubelet.go:400] "Attempting to sync node with API server" Sep 4 17:10:55.131686 kubelet[3368]: I0904 17:10:55.131619 3368 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:10:55.132667 kubelet[3368]: I0904 17:10:55.131937 3368 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:10:55.132839 kubelet[3368]: I0904 17:10:55.132813 3368 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:10:55.137825 kubelet[3368]: I0904 17:10:55.137780 3368 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:10:55.138298 kubelet[3368]: I0904 17:10:55.138273 3368 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:10:55.141163 kubelet[3368]: I0904 17:10:55.141124 3368 server.go:1264] "Started kubelet" Sep 4 17:10:55.149137 kubelet[3368]: I0904 17:10:55.148918 3368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:10:55.164707 kubelet[3368]: I0904 17:10:55.164172 3368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:10:55.169093 kubelet[3368]: I0904 17:10:55.169050 3368 server.go:455] "Adding debug handlers to kubelet server" Sep 4 17:10:55.177929 kubelet[3368]: I0904 17:10:55.177372 3368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:10:55.177929 kubelet[3368]: I0904 17:10:55.177776 3368 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:10:55.184649 kubelet[3368]: I0904 17:10:55.184007 3368 
volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:10:55.191849 kubelet[3368]: I0904 17:10:55.191774 3368 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Sep 4 17:10:55.193857 kubelet[3368]: I0904 17:10:55.192057 3368 reconciler.go:26] "Reconciler: start to sync state" Sep 4 17:10:55.203277 kubelet[3368]: I0904 17:10:55.196970 3368 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:10:55.203277 kubelet[3368]: I0904 17:10:55.197120 3368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:10:55.203277 kubelet[3368]: I0904 17:10:55.197879 3368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:10:55.204808 kubelet[3368]: I0904 17:10:55.204220 3368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:10:55.204808 kubelet[3368]: I0904 17:10:55.204277 3368 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:10:55.204808 kubelet[3368]: I0904 17:10:55.204306 3368 kubelet.go:2337] "Starting kubelet main sync loop" Sep 4 17:10:55.204808 kubelet[3368]: E0904 17:10:55.204380 3368 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:10:55.217077 kubelet[3368]: I0904 17:10:55.217004 3368 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:10:55.252730 kubelet[3368]: E0904 17:10:55.251326 3368 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:10:55.296980 kubelet[3368]: I0904 17:10:55.296919 3368 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-21-183" Sep 4 17:10:55.304945 kubelet[3368]: E0904 17:10:55.304873 3368 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:10:55.318360 kubelet[3368]: I0904 17:10:55.318309 3368 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-21-183" Sep 4 17:10:55.319494 kubelet[3368]: I0904 17:10:55.319439 3368 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-21-183" Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387087 3368 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387119 3368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387156 3368 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387404 3368 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387424 3368 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:10:55.387578 kubelet[3368]: I0904 17:10:55.387459 3368 policy_none.go:49] "None policy: Start" Sep 4 17:10:55.389825 kubelet[3368]: I0904 17:10:55.389147 3368 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:10:55.389825 kubelet[3368]: I0904 17:10:55.389197 3368 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:10:55.389825 kubelet[3368]: I0904 17:10:55.389476 3368 state_mem.go:75] "Updated machine memory state" Sep 4 17:10:55.401379 kubelet[3368]: I0904 17:10:55.401317 3368 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:10:55.401805 
kubelet[3368]: I0904 17:10:55.401698 3368 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 4 17:10:55.406105 kubelet[3368]: I0904 17:10:55.405262 3368 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:10:55.505385 kubelet[3368]: I0904 17:10:55.505213 3368 topology_manager.go:215] "Topology Admit Handler" podUID="33cc23d4387157bb170a7532d99d69c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-21-183" Sep 4 17:10:55.505525 kubelet[3368]: I0904 17:10:55.505417 3368 topology_manager.go:215] "Topology Admit Handler" podUID="b80bd47dd1b10205ea1003be188e48fe" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.505525 kubelet[3368]: I0904 17:10:55.505501 3368 topology_manager.go:215] "Topology Admit Handler" podUID="f323ec47a7bddd9bfd275186928feded" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-21-183" Sep 4 17:10:55.595087 kubelet[3368]: I0904 17:10:55.595006 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.595681 kubelet[3368]: I0904 17:10:55.595332 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.595681 kubelet[3368]: I0904 17:10:55.595414 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.595681 kubelet[3368]: I0904 17:10:55.595462 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.595681 kubelet[3368]: I0904 17:10:55.595506 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-ca-certs\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:55.595681 kubelet[3368]: I0904 17:10:55.595559 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:55.596308 kubelet[3368]: I0904 17:10:55.596060 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33cc23d4387157bb170a7532d99d69c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-183\" (UID: \"33cc23d4387157bb170a7532d99d69c7\") " pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:55.596308 kubelet[3368]: I0904 17:10:55.596181 3368 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b80bd47dd1b10205ea1003be188e48fe-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-183\" (UID: \"b80bd47dd1b10205ea1003be188e48fe\") " pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:55.596308 kubelet[3368]: I0904 17:10:55.596264 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f323ec47a7bddd9bfd275186928feded-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-183\" (UID: \"f323ec47a7bddd9bfd275186928feded\") " pod="kube-system/kube-scheduler-ip-172-31-21-183" Sep 4 17:10:56.135195 kubelet[3368]: I0904 17:10:56.134882 3368 apiserver.go:52] "Watching apiserver" Sep 4 17:10:56.192180 kubelet[3368]: I0904 17:10:56.192080 3368 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Sep 4 17:10:56.397083 kubelet[3368]: E0904 17:10:56.396926 3368 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-21-183\" already exists" pod="kube-system/kube-apiserver-ip-172-31-21-183" Sep 4 17:10:56.397528 kubelet[3368]: E0904 17:10:56.397476 3368 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-21-183\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-21-183" Sep 4 17:10:56.488065 kubelet[3368]: I0904 17:10:56.487964 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-183" podStartSLOduration=1.487941122 podStartE2EDuration="1.487941122s" podCreationTimestamp="2024-09-04 17:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:56.444685064 +0000 UTC m=+1.452510959" watchObservedRunningTime="2024-09-04 17:10:56.487941122 +0000 
UTC m=+1.495766993" Sep 4 17:10:56.523668 kubelet[3368]: I0904 17:10:56.523534 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-183" podStartSLOduration=1.523482496 podStartE2EDuration="1.523482496s" podCreationTimestamp="2024-09-04 17:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:56.489239835 +0000 UTC m=+1.497065706" watchObservedRunningTime="2024-09-04 17:10:56.523482496 +0000 UTC m=+1.531308367" Sep 4 17:11:01.585854 sudo[2344]: pam_unix(sudo:session): session closed for user root Sep 4 17:11:01.611056 sshd[2341]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:01.626507 systemd[1]: sshd@6-172.31.21.183:22-139.178.89.65:52152.service: Deactivated successfully. Sep 4 17:11:01.631343 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:11:01.632198 systemd[1]: session-7.scope: Consumed 10.196s CPU time, 134.7M memory peak, 0B memory swap peak. Sep 4 17:11:01.634167 systemd-logind[2001]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:11:01.637306 systemd-logind[2001]: Removed session 7. 
Sep 4 17:11:03.322227 kubelet[3368]: I0904 17:11:03.321850 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-183" podStartSLOduration=8.321827457 podStartE2EDuration="8.321827457s" podCreationTimestamp="2024-09-04 17:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:10:56.525272458 +0000 UTC m=+1.533098317" watchObservedRunningTime="2024-09-04 17:11:03.321827457 +0000 UTC m=+8.329653340" Sep 4 17:11:08.172376 kubelet[3368]: I0904 17:11:08.172289 3368 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:11:08.173789 kubelet[3368]: I0904 17:11:08.173649 3368 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:11:08.173903 containerd[2024]: time="2024-09-04T17:11:08.173223640Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:11:09.058702 kubelet[3368]: I0904 17:11:09.058630 3368 topology_manager.go:215] "Topology Admit Handler" podUID="1aadb2af-518d-4c85-a423-5ce6119d9dd7" podNamespace="kube-system" podName="kube-proxy-8v772"
Sep 4 17:11:09.081262 kubelet[3368]: I0904 17:11:09.081204 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1aadb2af-518d-4c85-a423-5ce6119d9dd7-xtables-lock\") pod \"kube-proxy-8v772\" (UID: \"1aadb2af-518d-4c85-a423-5ce6119d9dd7\") " pod="kube-system/kube-proxy-8v772"
Sep 4 17:11:09.081262 kubelet[3368]: I0904 17:11:09.081271 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1aadb2af-518d-4c85-a423-5ce6119d9dd7-lib-modules\") pod \"kube-proxy-8v772\" (UID: \"1aadb2af-518d-4c85-a423-5ce6119d9dd7\") " pod="kube-system/kube-proxy-8v772"
Sep 4 17:11:09.081698 kubelet[3368]: I0904 17:11:09.081315 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1aadb2af-518d-4c85-a423-5ce6119d9dd7-kube-proxy\") pod \"kube-proxy-8v772\" (UID: \"1aadb2af-518d-4c85-a423-5ce6119d9dd7\") " pod="kube-system/kube-proxy-8v772"
Sep 4 17:11:09.081698 kubelet[3368]: I0904 17:11:09.081376 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvq2l\" (UniqueName: \"kubernetes.io/projected/1aadb2af-518d-4c85-a423-5ce6119d9dd7-kube-api-access-dvq2l\") pod \"kube-proxy-8v772\" (UID: \"1aadb2af-518d-4c85-a423-5ce6119d9dd7\") " pod="kube-system/kube-proxy-8v772"
Sep 4 17:11:09.083742 systemd[1]: Created slice kubepods-besteffort-pod1aadb2af_518d_4c85_a423_5ce6119d9dd7.slice - libcontainer container kubepods-besteffort-pod1aadb2af_518d_4c85_a423_5ce6119d9dd7.slice.
Sep 4 17:11:09.360612 kubelet[3368]: I0904 17:11:09.360382 3368 topology_manager.go:215] "Topology Admit Handler" podUID="142524d4-486d-4c67-ad55-a6989cb253d0" podNamespace="tigera-operator" podName="tigera-operator-77f994b5bb-w5sdl"
Sep 4 17:11:09.368815 kubelet[3368]: W0904 17:11:09.368722 3368 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-21-183" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-21-183' and this object
Sep 4 17:11:09.368815 kubelet[3368]: E0904 17:11:09.368813 3368 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-21-183" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-21-183' and this object
Sep 4 17:11:09.385958 kubelet[3368]: I0904 17:11:09.383074 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/142524d4-486d-4c67-ad55-a6989cb253d0-var-lib-calico\") pod \"tigera-operator-77f994b5bb-w5sdl\" (UID: \"142524d4-486d-4c67-ad55-a6989cb253d0\") " pod="tigera-operator/tigera-operator-77f994b5bb-w5sdl"
Sep 4 17:11:09.385958 kubelet[3368]: I0904 17:11:09.383160 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p69t\" (UniqueName: \"kubernetes.io/projected/142524d4-486d-4c67-ad55-a6989cb253d0-kube-api-access-2p69t\") pod \"tigera-operator-77f994b5bb-w5sdl\" (UID: \"142524d4-486d-4c67-ad55-a6989cb253d0\") " pod="tigera-operator/tigera-operator-77f994b5bb-w5sdl"
Sep 4 17:11:09.384468 systemd[1]: Created slice kubepods-besteffort-pod142524d4_486d_4c67_ad55_a6989cb253d0.slice - libcontainer container kubepods-besteffort-pod142524d4_486d_4c67_ad55_a6989cb253d0.slice.
Sep 4 17:11:09.401169 containerd[2024]: time="2024-09-04T17:11:09.400548260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8v772,Uid:1aadb2af-518d-4c85-a423-5ce6119d9dd7,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:09.488344 containerd[2024]: time="2024-09-04T17:11:09.485451391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:09.488344 containerd[2024]: time="2024-09-04T17:11:09.485565196Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:09.488344 containerd[2024]: time="2024-09-04T17:11:09.486414741Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:09.488344 containerd[2024]: time="2024-09-04T17:11:09.487931579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:09.564972 systemd[1]: Started cri-containerd-af79ba13b6b9e54c93a8aa338fd578ccc7b050c8e293bb6b8adbad36b3cfb01f.scope - libcontainer container af79ba13b6b9e54c93a8aa338fd578ccc7b050c8e293bb6b8adbad36b3cfb01f.scope.
Sep 4 17:11:09.610463 containerd[2024]: time="2024-09-04T17:11:09.610389706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8v772,Uid:1aadb2af-518d-4c85-a423-5ce6119d9dd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"af79ba13b6b9e54c93a8aa338fd578ccc7b050c8e293bb6b8adbad36b3cfb01f\""
Sep 4 17:11:09.617898 containerd[2024]: time="2024-09-04T17:11:09.617150993Z" level=info msg="CreateContainer within sandbox \"af79ba13b6b9e54c93a8aa338fd578ccc7b050c8e293bb6b8adbad36b3cfb01f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:11:09.643935 containerd[2024]: time="2024-09-04T17:11:09.643840740Z" level=info msg="CreateContainer within sandbox \"af79ba13b6b9e54c93a8aa338fd578ccc7b050c8e293bb6b8adbad36b3cfb01f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5c49d6dbcbab656d19d821075b4b4219aa375a3f64a88c21642af5c1ff49f6d7\""
Sep 4 17:11:09.645278 containerd[2024]: time="2024-09-04T17:11:09.645189061Z" level=info msg="StartContainer for \"5c49d6dbcbab656d19d821075b4b4219aa375a3f64a88c21642af5c1ff49f6d7\""
Sep 4 17:11:09.694542 containerd[2024]: time="2024-09-04T17:11:09.694484509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-w5sdl,Uid:142524d4-486d-4c67-ad55-a6989cb253d0,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:11:09.698815 systemd[1]: Started cri-containerd-5c49d6dbcbab656d19d821075b4b4219aa375a3f64a88c21642af5c1ff49f6d7.scope - libcontainer container 5c49d6dbcbab656d19d821075b4b4219aa375a3f64a88c21642af5c1ff49f6d7.scope.
Sep 4 17:11:09.762484 containerd[2024]: time="2024-09-04T17:11:09.761157033Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:09.762484 containerd[2024]: time="2024-09-04T17:11:09.761334494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:09.762484 containerd[2024]: time="2024-09-04T17:11:09.762126182Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:09.762484 containerd[2024]: time="2024-09-04T17:11:09.762224115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:09.782643 containerd[2024]: time="2024-09-04T17:11:09.782095185Z" level=info msg="StartContainer for \"5c49d6dbcbab656d19d821075b4b4219aa375a3f64a88c21642af5c1ff49f6d7\" returns successfully"
Sep 4 17:11:09.815960 systemd[1]: Started cri-containerd-30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530.scope - libcontainer container 30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530.scope.
Sep 4 17:11:09.922913 containerd[2024]: time="2024-09-04T17:11:09.922075902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-77f994b5bb-w5sdl,Uid:142524d4-486d-4c67-ad55-a6989cb253d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530\""
Sep 4 17:11:09.928590 containerd[2024]: time="2024-09-04T17:11:09.926790394Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:11:12.503295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1163552354.mount: Deactivated successfully.
Sep 4 17:11:13.408542 containerd[2024]: time="2024-09-04T17:11:13.408044415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:13.410624 containerd[2024]: time="2024-09-04T17:11:13.410557163Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485875"
Sep 4 17:11:13.412747 containerd[2024]: time="2024-09-04T17:11:13.412488736Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:13.419577 containerd[2024]: time="2024-09-04T17:11:13.419487442Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:13.421277 containerd[2024]: time="2024-09-04T17:11:13.421068320Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 3.494147205s"
Sep 4 17:11:13.421277 containerd[2024]: time="2024-09-04T17:11:13.421129226Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Sep 4 17:11:13.427575 containerd[2024]: time="2024-09-04T17:11:13.427468634Z" level=info msg="CreateContainer within sandbox \"30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:11:13.451344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount190701177.mount: Deactivated successfully.
Sep 4 17:11:13.454678 containerd[2024]: time="2024-09-04T17:11:13.454556368Z" level=info msg="CreateContainer within sandbox \"30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9\""
Sep 4 17:11:13.457618 containerd[2024]: time="2024-09-04T17:11:13.456499371Z" level=info msg="StartContainer for \"bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9\""
Sep 4 17:11:13.519918 systemd[1]: Started cri-containerd-bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9.scope - libcontainer container bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9.scope.
Sep 4 17:11:13.606209 containerd[2024]: time="2024-09-04T17:11:13.606053600Z" level=info msg="StartContainer for \"bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9\" returns successfully"
Sep 4 17:11:14.387868 kubelet[3368]: I0904 17:11:14.387555 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8v772" podStartSLOduration=5.387511179 podStartE2EDuration="5.387511179s" podCreationTimestamp="2024-09-04 17:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:10.379555542 +0000 UTC m=+15.387381413" watchObservedRunningTime="2024-09-04 17:11:14.387511179 +0000 UTC m=+19.395337038"
Sep 4 17:11:18.289377 kubelet[3368]: I0904 17:11:18.287723 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-77f994b5bb-w5sdl" podStartSLOduration=5.790710132 podStartE2EDuration="9.28770151s" podCreationTimestamp="2024-09-04 17:11:09 +0000 UTC" firstStartedPulling="2024-09-04 17:11:09.925952795 +0000 UTC m=+14.933778642" lastFinishedPulling="2024-09-04 17:11:13.422944173 +0000 UTC m=+18.430770020" observedRunningTime="2024-09-04 17:11:14.390088098 +0000 UTC m=+19.397913981" watchObservedRunningTime="2024-09-04 17:11:18.28770151 +0000 UTC m=+23.295527405"
Sep 4 17:11:18.289377 kubelet[3368]: I0904 17:11:18.288097 3368 topology_manager.go:215] "Topology Admit Handler" podUID="1891c0aa-56f9-4d6b-83d0-e25b1f19680e" podNamespace="calico-system" podName="calico-typha-584468bdfc-msp6l"
Sep 4 17:11:18.309645 systemd[1]: Created slice kubepods-besteffort-pod1891c0aa_56f9_4d6b_83d0_e25b1f19680e.slice - libcontainer container kubepods-besteffort-pod1891c0aa_56f9_4d6b_83d0_e25b1f19680e.slice.
Sep 4 17:11:18.347090 kubelet[3368]: I0904 17:11:18.346971 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1891c0aa-56f9-4d6b-83d0-e25b1f19680e-typha-certs\") pod \"calico-typha-584468bdfc-msp6l\" (UID: \"1891c0aa-56f9-4d6b-83d0-e25b1f19680e\") " pod="calico-system/calico-typha-584468bdfc-msp6l"
Sep 4 17:11:18.347459 kubelet[3368]: I0904 17:11:18.347316 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfs7\" (UniqueName: \"kubernetes.io/projected/1891c0aa-56f9-4d6b-83d0-e25b1f19680e-kube-api-access-4qfs7\") pod \"calico-typha-584468bdfc-msp6l\" (UID: \"1891c0aa-56f9-4d6b-83d0-e25b1f19680e\") " pod="calico-system/calico-typha-584468bdfc-msp6l"
Sep 4 17:11:18.347710 kubelet[3368]: I0904 17:11:18.347632 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1891c0aa-56f9-4d6b-83d0-e25b1f19680e-tigera-ca-bundle\") pod \"calico-typha-584468bdfc-msp6l\" (UID: \"1891c0aa-56f9-4d6b-83d0-e25b1f19680e\") " pod="calico-system/calico-typha-584468bdfc-msp6l"
Sep 4 17:11:18.538331 kubelet[3368]: I0904 17:11:18.538267 3368 topology_manager.go:215] "Topology Admit Handler" podUID="394e24f8-5cc4-4dda-bb52-467093e0eb69" podNamespace="calico-system" podName="calico-node-sqn2w"
Sep 4 17:11:18.549988 kubelet[3368]: I0904 17:11:18.549133 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-var-run-calico\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.549988 kubelet[3368]: I0904 17:11:18.549198 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-var-lib-calico\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.549988 kubelet[3368]: I0904 17:11:18.549239 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-flexvol-driver-host\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.549988 kubelet[3368]: I0904 17:11:18.549281 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glp5v\" (UniqueName: \"kubernetes.io/projected/394e24f8-5cc4-4dda-bb52-467093e0eb69-kube-api-access-glp5v\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.549988 kubelet[3368]: I0904 17:11:18.549319 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/394e24f8-5cc4-4dda-bb52-467093e0eb69-node-certs\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.550347 kubelet[3368]: I0904 17:11:18.549356 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-lib-modules\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.550347 kubelet[3368]: I0904 17:11:18.549392 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-xtables-lock\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.550347 kubelet[3368]: I0904 17:11:18.549432 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-policysync\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.550347 kubelet[3368]: I0904 17:11:18.549469 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-cni-log-dir\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.550347 kubelet[3368]: I0904 17:11:18.549527 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394e24f8-5cc4-4dda-bb52-467093e0eb69-tigera-ca-bundle\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.553455 kubelet[3368]: I0904 17:11:18.549566 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-cni-bin-dir\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.553455 kubelet[3368]: I0904 17:11:18.549626 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/394e24f8-5cc4-4dda-bb52-467093e0eb69-cni-net-dir\") pod \"calico-node-sqn2w\" (UID: \"394e24f8-5cc4-4dda-bb52-467093e0eb69\") " pod="calico-system/calico-node-sqn2w"
Sep 4 17:11:18.557486 systemd[1]: Created slice kubepods-besteffort-pod394e24f8_5cc4_4dda_bb52_467093e0eb69.slice - libcontainer container kubepods-besteffort-pod394e24f8_5cc4_4dda_bb52_467093e0eb69.slice.
Sep 4 17:11:18.620245 containerd[2024]: time="2024-09-04T17:11:18.620175999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-584468bdfc-msp6l,Uid:1891c0aa-56f9-4d6b-83d0-e25b1f19680e,Namespace:calico-system,Attempt:0,}"
Sep 4 17:11:18.674331 kubelet[3368]: E0904 17:11:18.672123 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.676641 kubelet[3368]: W0904 17:11:18.675293 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.676641 kubelet[3368]: E0904 17:11:18.676343 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 4 17:11:18.680956 kubelet[3368]: E0904 17:11:18.680458 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.680956 kubelet[3368]: W0904 17:11:18.680493 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.680956 kubelet[3368]: E0904 17:11:18.680528 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.727953 kubelet[3368]: E0904 17:11:18.727895 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.727953 kubelet[3368]: W0904 17:11:18.727948 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.730749 kubelet[3368]: E0904 17:11:18.727986 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.731452 containerd[2024]: time="2024-09-04T17:11:18.729416048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:11:18.731452 containerd[2024]: time="2024-09-04T17:11:18.730929945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:18.731452 containerd[2024]: time="2024-09-04T17:11:18.731009112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:11:18.731452 containerd[2024]: time="2024-09-04T17:11:18.731043846Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:11:18.783094 kubelet[3368]: I0904 17:11:18.782733 3368 topology_manager.go:215] "Topology Admit Handler" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621" podNamespace="calico-system" podName="csi-node-driver-8pk7h"
Sep 4 17:11:18.787071 kubelet[3368]: E0904 17:11:18.784967 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:18.797932 systemd[1]: Started cri-containerd-df712f2e1885bf1e9a897b433a787330367cf3755b37ae3cb9a52a0364f05c36.scope - libcontainer container df712f2e1885bf1e9a897b433a787330367cf3755b37ae3cb9a52a0364f05c36.scope.
Sep 4 17:11:18.850011 kubelet[3368]: E0904 17:11:18.848240 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.850011 kubelet[3368]: W0904 17:11:18.848280 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.850011 kubelet[3368]: E0904 17:11:18.848314 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.851766 kubelet[3368]: E0904 17:11:18.850098 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.851766 kubelet[3368]: W0904 17:11:18.850471 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.851766 kubelet[3368]: E0904 17:11:18.850531 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.851766 kubelet[3368]: E0904 17:11:18.851669 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.851766 kubelet[3368]: W0904 17:11:18.851702 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.851766 kubelet[3368]: E0904 17:11:18.851769 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.852707 kubelet[3368]: E0904 17:11:18.852453 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.852707 kubelet[3368]: W0904 17:11:18.852489 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.852707 kubelet[3368]: E0904 17:11:18.852549 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.854733 kubelet[3368]: E0904 17:11:18.854215 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.854733 kubelet[3368]: W0904 17:11:18.854257 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.854733 kubelet[3368]: E0904 17:11:18.854315 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.854979 kubelet[3368]: E0904 17:11:18.854852 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.854979 kubelet[3368]: W0904 17:11:18.854876 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.854979 kubelet[3368]: E0904 17:11:18.854904 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.856486 kubelet[3368]: E0904 17:11:18.855410 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.856486 kubelet[3368]: W0904 17:11:18.855441 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.856486 kubelet[3368]: E0904 17:11:18.855469 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Sep 4 17:11:18.856486 kubelet[3368]: E0904 17:11:18.856407 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.856486 kubelet[3368]: W0904 17:11:18.856436 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.856486 kubelet[3368]: E0904 17:11:18.856472 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.857961 kubelet[3368]: E0904 17:11:18.857885 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.857961 kubelet[3368]: W0904 17:11:18.857922 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.857961 kubelet[3368]: E0904 17:11:18.857956 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.858411 kubelet[3368]: E0904 17:11:18.858368 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.858411 kubelet[3368]: W0904 17:11:18.858399 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.858547 kubelet[3368]: E0904 17:11:18.858425 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.860160 kubelet[3368]: E0904 17:11:18.860107 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.860160 kubelet[3368]: W0904 17:11:18.860145 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.860333 kubelet[3368]: E0904 17:11:18.860181 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.861780 kubelet[3368]: E0904 17:11:18.861719 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.861780 kubelet[3368]: W0904 17:11:18.861763 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.861987 kubelet[3368]: E0904 17:11:18.861798 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.863106 kubelet[3368]: E0904 17:11:18.862341 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.863106 kubelet[3368]: W0904 17:11:18.862534 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.863106 kubelet[3368]: E0904 17:11:18.862569 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.864148 kubelet[3368]: E0904 17:11:18.864075 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.864148 kubelet[3368]: W0904 17:11:18.864128 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.864429 kubelet[3368]: E0904 17:11:18.864162 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.864519 kubelet[3368]: E0904 17:11:18.864498 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.864572 kubelet[3368]: W0904 17:11:18.864517 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.864572 kubelet[3368]: E0904 17:11:18.864544 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.865118 kubelet[3368]: E0904 17:11:18.864909 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.865118 kubelet[3368]: W0904 17:11:18.864926 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.865118 kubelet[3368]: E0904 17:11:18.864945 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:11:18.866105 kubelet[3368]: E0904 17:11:18.865776 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:11:18.866105 kubelet[3368]: W0904 17:11:18.865815 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:11:18.866105 kubelet[3368]: E0904 17:11:18.865848 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:11:18.867341 kubelet[3368]: E0904 17:11:18.867296 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.867341 kubelet[3368]: W0904 17:11:18.867336 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.868549 kubelet[3368]: E0904 17:11:18.867372 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.868549 kubelet[3368]: E0904 17:11:18.868171 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.868549 kubelet[3368]: W0904 17:11:18.868198 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.868549 kubelet[3368]: E0904 17:11:18.868228 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.869536 kubelet[3368]: E0904 17:11:18.869478 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.869536 kubelet[3368]: W0904 17:11:18.869518 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.869796 kubelet[3368]: E0904 17:11:18.869555 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.870539 containerd[2024]: time="2024-09-04T17:11:18.870477473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sqn2w,Uid:394e24f8-5cc4-4dda-bb52-467093e0eb69,Namespace:calico-system,Attempt:0,}" Sep 4 17:11:18.873519 kubelet[3368]: E0904 17:11:18.873460 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.873519 kubelet[3368]: W0904 17:11:18.873505 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.873519 kubelet[3368]: E0904 17:11:18.873541 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.873797 kubelet[3368]: I0904 17:11:18.873585 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b761753e-49f2-48ff-98ce-f1b20c5a7621-varrun\") pod \"csi-node-driver-8pk7h\" (UID: \"b761753e-49f2-48ff-98ce-f1b20c5a7621\") " pod="calico-system/csi-node-driver-8pk7h" Sep 4 17:11:18.874473 kubelet[3368]: E0904 17:11:18.874272 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.874473 kubelet[3368]: W0904 17:11:18.874299 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.874473 kubelet[3368]: E0904 17:11:18.874348 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.875321 kubelet[3368]: E0904 17:11:18.874800 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.875321 kubelet[3368]: W0904 17:11:18.874823 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.875321 kubelet[3368]: E0904 17:11:18.874853 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.877117 kubelet[3368]: I0904 17:11:18.875578 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b761753e-49f2-48ff-98ce-f1b20c5a7621-kubelet-dir\") pod \"csi-node-driver-8pk7h\" (UID: \"b761753e-49f2-48ff-98ce-f1b20c5a7621\") " pod="calico-system/csi-node-driver-8pk7h" Sep 4 17:11:18.877117 kubelet[3368]: E0904 17:11:18.876014 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.877117 kubelet[3368]: W0904 17:11:18.876038 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.877414 kubelet[3368]: E0904 17:11:18.877240 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.878520 kubelet[3368]: E0904 17:11:18.878457 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.878520 kubelet[3368]: W0904 17:11:18.878496 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.878900 kubelet[3368]: E0904 17:11:18.878855 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.880341 kubelet[3368]: E0904 17:11:18.880280 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.880341 kubelet[3368]: W0904 17:11:18.880325 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.880556 kubelet[3368]: E0904 17:11:18.880371 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.880556 kubelet[3368]: I0904 17:11:18.880417 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsll\" (UniqueName: \"kubernetes.io/projected/b761753e-49f2-48ff-98ce-f1b20c5a7621-kube-api-access-xfsll\") pod \"csi-node-driver-8pk7h\" (UID: \"b761753e-49f2-48ff-98ce-f1b20c5a7621\") " pod="calico-system/csi-node-driver-8pk7h" Sep 4 17:11:18.881334 kubelet[3368]: E0904 17:11:18.881268 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.881334 kubelet[3368]: W0904 17:11:18.881309 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.881334 kubelet[3368]: E0904 17:11:18.881342 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.882589 kubelet[3368]: E0904 17:11:18.882536 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.882589 kubelet[3368]: W0904 17:11:18.882575 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.882806 kubelet[3368]: E0904 17:11:18.882636 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.884333 kubelet[3368]: E0904 17:11:18.884281 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.884333 kubelet[3368]: W0904 17:11:18.884320 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.884333 kubelet[3368]: E0904 17:11:18.884510 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.886481 kubelet[3368]: E0904 17:11:18.885226 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.886481 kubelet[3368]: W0904 17:11:18.885256 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.886481 kubelet[3368]: E0904 17:11:18.885288 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.886481 kubelet[3368]: I0904 17:11:18.885340 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b761753e-49f2-48ff-98ce-f1b20c5a7621-socket-dir\") pod \"csi-node-driver-8pk7h\" (UID: \"b761753e-49f2-48ff-98ce-f1b20c5a7621\") " pod="calico-system/csi-node-driver-8pk7h" Sep 4 17:11:18.886481 kubelet[3368]: E0904 17:11:18.886227 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.886481 kubelet[3368]: W0904 17:11:18.886256 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.886481 kubelet[3368]: E0904 17:11:18.886295 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.886481 kubelet[3368]: I0904 17:11:18.886336 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b761753e-49f2-48ff-98ce-f1b20c5a7621-registration-dir\") pod \"csi-node-driver-8pk7h\" (UID: \"b761753e-49f2-48ff-98ce-f1b20c5a7621\") " pod="calico-system/csi-node-driver-8pk7h" Sep 4 17:11:18.886921 kubelet[3368]: E0904 17:11:18.886892 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.886921 kubelet[3368]: W0904 17:11:18.886914 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.887417 kubelet[3368]: E0904 17:11:18.887234 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.888631 kubelet[3368]: E0904 17:11:18.888023 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.888631 kubelet[3368]: W0904 17:11:18.888061 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.888631 kubelet[3368]: E0904 17:11:18.888101 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:18.888927 kubelet[3368]: E0904 17:11:18.888665 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.888927 kubelet[3368]: W0904 17:11:18.888693 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.888927 kubelet[3368]: E0904 17:11:18.888857 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.890673 kubelet[3368]: E0904 17:11:18.890573 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.890673 kubelet[3368]: W0904 17:11:18.890659 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.890864 kubelet[3368]: E0904 17:11:18.890694 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.947035 containerd[2024]: time="2024-09-04T17:11:18.946842236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:18.950810 containerd[2024]: time="2024-09-04T17:11:18.948697392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:18.950810 containerd[2024]: time="2024-09-04T17:11:18.948759223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:18.950810 containerd[2024]: time="2024-09-04T17:11:18.948784291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:18.988254 kubelet[3368]: E0904 17:11:18.988187 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.988254 kubelet[3368]: W0904 17:11:18.988249 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.988443 kubelet[3368]: E0904 17:11:18.988287 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.991506 kubelet[3368]: E0904 17:11:18.990891 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:18.991506 kubelet[3368]: W0904 17:11:18.990948 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:18.991506 kubelet[3368]: E0904 17:11:18.991120 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:18.990957 systemd[1]: Started cri-containerd-15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d.scope - libcontainer container 15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d. 
Sep 4 17:11:19.000328 kubelet[3368]: E0904 17:11:18.998671 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.001326 kubelet[3368]: W0904 17:11:19.001261 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.001476 kubelet[3368]: E0904 17:11:19.001435 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.004117 kubelet[3368]: E0904 17:11:19.004060 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.004842 kubelet[3368]: W0904 17:11:19.004100 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.006853 kubelet[3368]: E0904 17:11:19.006799 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.009103 kubelet[3368]: E0904 17:11:19.008936 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.009103 kubelet[3368]: W0904 17:11:19.009089 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.010568 kubelet[3368]: E0904 17:11:19.010489 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.010568 kubelet[3368]: W0904 17:11:19.011941 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.010568 kubelet[3368]: E0904 17:11:19.012677 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.010568 kubelet[3368]: W0904 17:11:19.012893 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.010568 kubelet[3368]: E0904 17:11:19.012934 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.015266 kubelet[3368]: E0904 17:11:19.015201 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.015266 kubelet[3368]: W0904 17:11:19.015246 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.015543 kubelet[3368]: E0904 17:11:19.015303 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.017278 kubelet[3368]: E0904 17:11:19.013475 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.018091 kubelet[3368]: E0904 17:11:19.018033 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.018231 kubelet[3368]: W0904 17:11:19.018073 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.018231 kubelet[3368]: E0904 17:11:19.018142 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.018776 kubelet[3368]: E0904 17:11:19.018720 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.018875 kubelet[3368]: W0904 17:11:19.018754 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.018875 kubelet[3368]: E0904 17:11:19.018813 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.019531 kubelet[3368]: E0904 17:11:19.019469 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.019531 kubelet[3368]: W0904 17:11:19.019516 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.019766 kubelet[3368]: E0904 17:11:19.019557 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.019766 kubelet[3368]: E0904 17:11:19.010666 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.020376 kubelet[3368]: E0904 17:11:19.020327 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.020376 kubelet[3368]: W0904 17:11:19.020362 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.020531 kubelet[3368]: E0904 17:11:19.020396 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.020888 kubelet[3368]: E0904 17:11:19.020845 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.020888 kubelet[3368]: W0904 17:11:19.020877 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.021084 kubelet[3368]: E0904 17:11:19.020907 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.022644 kubelet[3368]: E0904 17:11:19.021448 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.022644 kubelet[3368]: W0904 17:11:19.021499 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.022644 kubelet[3368]: E0904 17:11:19.021531 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.022644 kubelet[3368]: E0904 17:11:19.022160 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.022644 kubelet[3368]: W0904 17:11:19.022188 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.022644 kubelet[3368]: E0904 17:11:19.022220 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.023045 kubelet[3368]: E0904 17:11:19.022669 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.023045 kubelet[3368]: W0904 17:11:19.022692 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.023045 kubelet[3368]: E0904 17:11:19.022718 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.026453 kubelet[3368]: E0904 17:11:19.023432 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.026453 kubelet[3368]: W0904 17:11:19.023469 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.026453 kubelet[3368]: E0904 17:11:19.023503 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.027224 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.028883 kubelet[3368]: W0904 17:11:19.027257 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.027770 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.028883 kubelet[3368]: W0904 17:11:19.027794 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.027822 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.028178 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.028883 kubelet[3368]: W0904 17:11:19.028202 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.028230 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.028883 kubelet[3368]: E0904 17:11:19.028571 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.028883 kubelet[3368]: W0904 17:11:19.028615 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.029654 kubelet[3368]: E0904 17:11:19.028645 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.029719 kubelet[3368]: E0904 17:11:19.029672 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.029719 kubelet[3368]: W0904 17:11:19.029700 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.029813 kubelet[3368]: E0904 17:11:19.029732 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.030730 kubelet[3368]: E0904 17:11:19.029883 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.031501 kubelet[3368]: E0904 17:11:19.031445 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.031685 kubelet[3368]: W0904 17:11:19.031520 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.031685 kubelet[3368]: E0904 17:11:19.031559 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.033896 kubelet[3368]: E0904 17:11:19.033829 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.033896 kubelet[3368]: W0904 17:11:19.033871 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.034072 kubelet[3368]: E0904 17:11:19.033907 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.034486 kubelet[3368]: E0904 17:11:19.034451 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.034486 kubelet[3368]: W0904 17:11:19.034480 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.035827 kubelet[3368]: E0904 17:11:19.034507 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:19.053589 kubelet[3368]: E0904 17:11:19.053436 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:19.053589 kubelet[3368]: W0904 17:11:19.053467 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:19.053589 kubelet[3368]: E0904 17:11:19.053498 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:19.096191 containerd[2024]: time="2024-09-04T17:11:19.096125634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-584468bdfc-msp6l,Uid:1891c0aa-56f9-4d6b-83d0-e25b1f19680e,Namespace:calico-system,Attempt:0,} returns sandbox id \"df712f2e1885bf1e9a897b433a787330367cf3755b37ae3cb9a52a0364f05c36\"" Sep 4 17:11:19.104064 containerd[2024]: time="2024-09-04T17:11:19.103632878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:11:19.155166 containerd[2024]: time="2024-09-04T17:11:19.154921129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sqn2w,Uid:394e24f8-5cc4-4dda-bb52-467093e0eb69,Namespace:calico-system,Attempt:0,} returns sandbox id \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\"" Sep 4 17:11:20.205546 kubelet[3368]: E0904 17:11:20.205138 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621" Sep 4 17:11:21.716643 containerd[2024]: time="2024-09-04T17:11:21.716514536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:21.719983 containerd[2024]: time="2024-09-04T17:11:21.719538895Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:11:21.722151 containerd[2024]: time="2024-09-04T17:11:21.721811607Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:21.731465 containerd[2024]: time="2024-09-04T17:11:21.731349966Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:21.733213 containerd[2024]: time="2024-09-04T17:11:21.733135846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.6294378s" Sep 4 17:11:21.733553 containerd[2024]: time="2024-09-04T17:11:21.733379892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:11:21.739212 containerd[2024]: time="2024-09-04T17:11:21.737940336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:11:21.771981 containerd[2024]: time="2024-09-04T17:11:21.771895969Z" level=info msg="CreateContainer within sandbox \"df712f2e1885bf1e9a897b433a787330367cf3755b37ae3cb9a52a0364f05c36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:11:21.852895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2481926990.mount: Deactivated successfully. 
Sep 4 17:11:21.856099 containerd[2024]: time="2024-09-04T17:11:21.856033633Z" level=info msg="CreateContainer within sandbox \"df712f2e1885bf1e9a897b433a787330367cf3755b37ae3cb9a52a0364f05c36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e899327ec1720858936084ae1abac0cb29ae51378cc3ec65fcccbe3b6ec605a3\"" Sep 4 17:11:21.861302 containerd[2024]: time="2024-09-04T17:11:21.858582867Z" level=info msg="StartContainer for \"e899327ec1720858936084ae1abac0cb29ae51378cc3ec65fcccbe3b6ec605a3\"" Sep 4 17:11:21.947961 systemd[1]: Started cri-containerd-e899327ec1720858936084ae1abac0cb29ae51378cc3ec65fcccbe3b6ec605a3.scope - libcontainer container e899327ec1720858936084ae1abac0cb29ae51378cc3ec65fcccbe3b6ec605a3. Sep 4 17:11:22.080351 containerd[2024]: time="2024-09-04T17:11:22.080270898Z" level=info msg="StartContainer for \"e899327ec1720858936084ae1abac0cb29ae51378cc3ec65fcccbe3b6ec605a3\" returns successfully" Sep 4 17:11:22.205294 kubelet[3368]: E0904 17:11:22.205081 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621" Sep 4 17:11:22.503860 kubelet[3368]: E0904 17:11:22.503794 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.503860 kubelet[3368]: W0904 17:11:22.503842 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.504060 kubelet[3368]: E0904 17:11:22.503878 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.504386 kubelet[3368]: E0904 17:11:22.504335 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.504386 kubelet[3368]: W0904 17:11:22.504374 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.504631 kubelet[3368]: E0904 17:11:22.504405 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.504905 kubelet[3368]: E0904 17:11:22.504836 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.504905 kubelet[3368]: W0904 17:11:22.504874 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.504905 kubelet[3368]: E0904 17:11:22.504907 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.505654 kubelet[3368]: E0904 17:11:22.505513 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.505654 kubelet[3368]: W0904 17:11:22.505555 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.505654 kubelet[3368]: E0904 17:11:22.505616 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.506185 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.507909 kubelet[3368]: W0904 17:11:22.506215 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.506248 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.506777 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.507909 kubelet[3368]: W0904 17:11:22.506807 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.506838 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.507284 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.507909 kubelet[3368]: W0904 17:11:22.507311 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.507338 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.507909 kubelet[3368]: E0904 17:11:22.507866 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.508780 kubelet[3368]: W0904 17:11:22.507903 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.508780 kubelet[3368]: E0904 17:11:22.507936 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.508780 kubelet[3368]: E0904 17:11:22.508403 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.508780 kubelet[3368]: W0904 17:11:22.508432 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.508780 kubelet[3368]: E0904 17:11:22.508466 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.509058 kubelet[3368]: E0904 17:11:22.509026 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.509160 kubelet[3368]: W0904 17:11:22.509055 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.509160 kubelet[3368]: E0904 17:11:22.509106 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.509561 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.511663 kubelet[3368]: W0904 17:11:22.509622 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.509699 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.510200 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.511663 kubelet[3368]: W0904 17:11:22.510235 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.510267 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.510824 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.511663 kubelet[3368]: W0904 17:11:22.510856 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.510887 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.511663 kubelet[3368]: E0904 17:11:22.511361 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.512406 kubelet[3368]: W0904 17:11:22.511390 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.512406 kubelet[3368]: E0904 17:11:22.511423 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.512406 kubelet[3368]: E0904 17:11:22.511879 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.512406 kubelet[3368]: W0904 17:11:22.511903 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.512406 kubelet[3368]: E0904 17:11:22.511929 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.549537 kubelet[3368]: E0904 17:11:22.549442 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.549537 kubelet[3368]: W0904 17:11:22.549525 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.549826 kubelet[3368]: E0904 17:11:22.549568 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.551277 kubelet[3368]: E0904 17:11:22.551210 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.551277 kubelet[3368]: W0904 17:11:22.551256 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.551529 kubelet[3368]: E0904 17:11:22.551305 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.552583 kubelet[3368]: E0904 17:11:22.552527 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.552583 kubelet[3368]: W0904 17:11:22.552570 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.553971 kubelet[3368]: E0904 17:11:22.552742 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.553971 kubelet[3368]: E0904 17:11:22.553075 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.553971 kubelet[3368]: W0904 17:11:22.553098 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.553971 kubelet[3368]: E0904 17:11:22.553524 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.554703 kubelet[3368]: E0904 17:11:22.554616 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.554703 kubelet[3368]: W0904 17:11:22.554670 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.554703 kubelet[3368]: E0904 17:11:22.554757 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.556113 kubelet[3368]: E0904 17:11:22.556052 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.556113 kubelet[3368]: W0904 17:11:22.556096 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.557052 kubelet[3368]: E0904 17:11:22.556328 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.557198 kubelet[3368]: E0904 17:11:22.557048 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.557198 kubelet[3368]: W0904 17:11:22.557076 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.558960 kubelet[3368]: E0904 17:11:22.558684 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.558960 kubelet[3368]: E0904 17:11:22.558869 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.558960 kubelet[3368]: W0904 17:11:22.558893 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.559853 kubelet[3368]: E0904 17:11:22.559277 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.561122 kubelet[3368]: E0904 17:11:22.561053 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.561122 kubelet[3368]: W0904 17:11:22.561099 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.561845 kubelet[3368]: E0904 17:11:22.561534 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.561968 kubelet[3368]: E0904 17:11:22.561925 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.561968 kubelet[3368]: W0904 17:11:22.561947 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.563677 kubelet[3368]: E0904 17:11:22.562108 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.563677 kubelet[3368]: E0904 17:11:22.562754 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.563677 kubelet[3368]: W0904 17:11:22.562781 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.565705 kubelet[3368]: E0904 17:11:22.565653 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.565705 kubelet[3368]: W0904 17:11:22.565692 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.568292 kubelet[3368]: E0904 17:11:22.568227 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.568292 kubelet[3368]: W0904 17:11:22.568272 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.568515 kubelet[3368]: E0904 17:11:22.568307 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.569736 kubelet[3368]: E0904 17:11:22.569682 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.569871 kubelet[3368]: E0904 17:11:22.569777 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.570166 kubelet[3368]: E0904 17:11:22.570074 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.570166 kubelet[3368]: W0904 17:11:22.570115 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.570343 kubelet[3368]: E0904 17:11:22.570179 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.572055 kubelet[3368]: E0904 17:11:22.572004 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.572055 kubelet[3368]: W0904 17:11:22.572035 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.572195 kubelet[3368]: E0904 17:11:22.572081 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.574239 kubelet[3368]: E0904 17:11:22.574178 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.574239 kubelet[3368]: W0904 17:11:22.574222 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.574459 kubelet[3368]: E0904 17:11:22.574266 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:22.575630 kubelet[3368]: E0904 17:11:22.575530 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.575630 kubelet[3368]: W0904 17:11:22.575577 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.576735 kubelet[3368]: E0904 17:11:22.576664 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:11:22.577323 kubelet[3368]: E0904 17:11:22.577273 3368 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:11:22.577323 kubelet[3368]: W0904 17:11:22.577320 3368 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:11:22.577497 kubelet[3368]: E0904 17:11:22.577357 3368 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:11:23.106873 containerd[2024]: time="2024-09-04T17:11:23.106776593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:23.108866 containerd[2024]: time="2024-09-04T17:11:23.108761041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:11:23.110876 containerd[2024]: time="2024-09-04T17:11:23.110726699Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:23.119400 containerd[2024]: time="2024-09-04T17:11:23.119304050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:23.121749 containerd[2024]: time="2024-09-04T17:11:23.121490655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.383422347s" Sep 4 17:11:23.121749 containerd[2024]: time="2024-09-04T17:11:23.121560530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:11:23.130747 containerd[2024]: time="2024-09-04T17:11:23.129843733Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:11:23.162391 containerd[2024]: time="2024-09-04T17:11:23.162296372Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b\"" Sep 4 17:11:23.168166 containerd[2024]: time="2024-09-04T17:11:23.168013917Z" level=info msg="StartContainer for \"d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b\"" Sep 4 17:11:23.169341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3597750666.mount: Deactivated successfully. Sep 4 17:11:23.250832 systemd[1]: Started cri-containerd-d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b.scope - libcontainer container d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b. 
Sep 4 17:11:23.371295 containerd[2024]: time="2024-09-04T17:11:23.371089922Z" level=info msg="StartContainer for \"d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b\" returns successfully"
Sep 4 17:11:23.397781 kubelet[3368]: I0904 17:11:23.396966 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-584468bdfc-msp6l" podStartSLOduration=2.763417654 podStartE2EDuration="5.396941517s" podCreationTimestamp="2024-09-04 17:11:18 +0000 UTC" firstStartedPulling="2024-09-04 17:11:19.102637532 +0000 UTC m=+24.110463391" lastFinishedPulling="2024-09-04 17:11:21.736161406 +0000 UTC m=+26.743987254" observedRunningTime="2024-09-04 17:11:22.446385538 +0000 UTC m=+27.454211409" watchObservedRunningTime="2024-09-04 17:11:23.396941517 +0000 UTC m=+28.404767376"
Sep 4 17:11:23.503376 systemd[1]: cri-containerd-d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b.scope: Deactivated successfully.
Sep 4 17:11:23.752370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b-rootfs.mount: Deactivated successfully.
Sep 4 17:11:24.206241 kubelet[3368]: E0904 17:11:24.205252 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:24.607552 containerd[2024]: time="2024-09-04T17:11:24.607331230Z" level=info msg="shim disconnected" id=d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b namespace=k8s.io
Sep 4 17:11:24.608989 containerd[2024]: time="2024-09-04T17:11:24.608358451Z" level=warning msg="cleaning up after shim disconnected" id=d7f5631269335d06ba2837ef3bcfeaec259ba5b3e4f164ae369ffaa9f3e6d06b namespace=k8s.io
Sep 4 17:11:24.608989 containerd[2024]: time="2024-09-04T17:11:24.608424760Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:11:25.437329 containerd[2024]: time="2024-09-04T17:11:25.437252073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Sep 4 17:11:26.206182 kubelet[3368]: E0904 17:11:26.205694 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:28.205879 kubelet[3368]: E0904 17:11:28.205770 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:29.608051 containerd[2024]: time="2024-09-04T17:11:29.607977650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:29.609502 containerd[2024]: time="2024-09-04T17:11:29.609427891Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Sep 4 17:11:29.611097 containerd[2024]: time="2024-09-04T17:11:29.611017101Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:29.615336 containerd[2024]: time="2024-09-04T17:11:29.615253875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:29.616773 containerd[2024]: time="2024-09-04T17:11:29.616726255Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.179389839s"
Sep 4 17:11:29.617350 containerd[2024]: time="2024-09-04T17:11:29.616933418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Sep 4 17:11:29.622007 containerd[2024]: time="2024-09-04T17:11:29.621850475Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 4 17:11:29.687959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296513183.mount: Deactivated successfully.
Sep 4 17:11:29.689031 containerd[2024]: time="2024-09-04T17:11:29.688837461Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6\""
Sep 4 17:11:29.690946 containerd[2024]: time="2024-09-04T17:11:29.690281638Z" level=info msg="StartContainer for \"b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6\""
Sep 4 17:11:29.750366 systemd[1]: run-containerd-runc-k8s.io-b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6-runc.gQgE4r.mount: Deactivated successfully.
Sep 4 17:11:29.762932 systemd[1]: Started cri-containerd-b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6.scope - libcontainer container b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6.
Sep 4 17:11:29.814958 containerd[2024]: time="2024-09-04T17:11:29.814879907Z" level=info msg="StartContainer for \"b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6\" returns successfully"
Sep 4 17:11:30.204795 kubelet[3368]: E0904 17:11:30.204719 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:32.204714 kubelet[3368]: E0904 17:11:32.204635 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:32.273937 containerd[2024]: time="2024-09-04T17:11:32.273809659Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:11:32.278103 systemd[1]: cri-containerd-b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6.scope: Deactivated successfully.
Sep 4 17:11:32.316195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6-rootfs.mount: Deactivated successfully.
Sep 4 17:11:32.369009 kubelet[3368]: I0904 17:11:32.368712 3368 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Sep 4 17:11:32.405474 kubelet[3368]: I0904 17:11:32.403759 3368 topology_manager.go:215] "Topology Admit Handler" podUID="6a2166df-664f-4f02-bf19-b368d9927d59" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cr2qq"
Sep 4 17:11:32.414622 kubelet[3368]: I0904 17:11:32.414563 3368 topology_manager.go:215] "Topology Admit Handler" podUID="6789a9bc-3db8-46e0-a107-717bb75f1944" podNamespace="kube-system" podName="coredns-7db6d8ff4d-zbmjh"
Sep 4 17:11:32.416762 kubelet[3368]: I0904 17:11:32.416713 3368 topology_manager.go:215] "Topology Admit Handler" podUID="c21af7c7-5c73-4d45-82b9-064bb8ecb13d" podNamespace="calico-system" podName="calico-kube-controllers-596bd58bc-l9kdw"
Sep 4 17:11:32.432342 systemd[1]: Created slice kubepods-burstable-pod6a2166df_664f_4f02_bf19_b368d9927d59.slice - libcontainer container kubepods-burstable-pod6a2166df_664f_4f02_bf19_b368d9927d59.slice.
Sep 4 17:11:32.450242 systemd[1]: Created slice kubepods-burstable-pod6789a9bc_3db8_46e0_a107_717bb75f1944.slice - libcontainer container kubepods-burstable-pod6789a9bc_3db8_46e0_a107_717bb75f1944.slice.
Sep 4 17:11:32.470477 systemd[1]: Created slice kubepods-besteffort-podc21af7c7_5c73_4d45_82b9_064bb8ecb13d.slice - libcontainer container kubepods-besteffort-podc21af7c7_5c73_4d45_82b9_064bb8ecb13d.slice.
Sep 4 17:11:32.538027 kubelet[3368]: I0904 17:11:32.537968 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6789a9bc-3db8-46e0-a107-717bb75f1944-config-volume\") pod \"coredns-7db6d8ff4d-zbmjh\" (UID: \"6789a9bc-3db8-46e0-a107-717bb75f1944\") " pod="kube-system/coredns-7db6d8ff4d-zbmjh"
Sep 4 17:11:32.538027 kubelet[3368]: I0904 17:11:32.538061 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c21af7c7-5c73-4d45-82b9-064bb8ecb13d-tigera-ca-bundle\") pod \"calico-kube-controllers-596bd58bc-l9kdw\" (UID: \"c21af7c7-5c73-4d45-82b9-064bb8ecb13d\") " pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw"
Sep 4 17:11:32.538546 kubelet[3368]: I0904 17:11:32.538112 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7kdk\" (UniqueName: \"kubernetes.io/projected/6789a9bc-3db8-46e0-a107-717bb75f1944-kube-api-access-h7kdk\") pod \"coredns-7db6d8ff4d-zbmjh\" (UID: \"6789a9bc-3db8-46e0-a107-717bb75f1944\") " pod="kube-system/coredns-7db6d8ff4d-zbmjh"
Sep 4 17:11:32.538546 kubelet[3368]: I0904 17:11:32.538164 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn9xh\" (UniqueName: \"kubernetes.io/projected/c21af7c7-5c73-4d45-82b9-064bb8ecb13d-kube-api-access-bn9xh\") pod \"calico-kube-controllers-596bd58bc-l9kdw\" (UID: \"c21af7c7-5c73-4d45-82b9-064bb8ecb13d\") " pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw"
Sep 4 17:11:32.538546 kubelet[3368]: I0904 17:11:32.538218 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a2166df-664f-4f02-bf19-b368d9927d59-config-volume\") pod \"coredns-7db6d8ff4d-cr2qq\" (UID: \"6a2166df-664f-4f02-bf19-b368d9927d59\") " pod="kube-system/coredns-7db6d8ff4d-cr2qq"
Sep 4 17:11:32.538546 kubelet[3368]: I0904 17:11:32.538258 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vzld\" (UniqueName: \"kubernetes.io/projected/6a2166df-664f-4f02-bf19-b368d9927d59-kube-api-access-9vzld\") pod \"coredns-7db6d8ff4d-cr2qq\" (UID: \"6a2166df-664f-4f02-bf19-b368d9927d59\") " pod="kube-system/coredns-7db6d8ff4d-cr2qq"
Sep 4 17:11:32.745866 containerd[2024]: time="2024-09-04T17:11:32.745468794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2qq,Uid:6a2166df-664f-4f02-bf19-b368d9927d59,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:32.761410 containerd[2024]: time="2024-09-04T17:11:32.760755025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbmjh,Uid:6789a9bc-3db8-46e0-a107-717bb75f1944,Namespace:kube-system,Attempt:0,}"
Sep 4 17:11:32.777845 containerd[2024]: time="2024-09-04T17:11:32.777781814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bd58bc-l9kdw,Uid:c21af7c7-5c73-4d45-82b9-064bb8ecb13d,Namespace:calico-system,Attempt:0,}"
Sep 4 17:11:33.950022 containerd[2024]: time="2024-09-04T17:11:33.949937832Z" level=error msg="Failed to destroy network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:33.950869 containerd[2024]: time="2024-09-04T17:11:33.950642308Z" level=error msg="encountered an error cleaning up failed sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:33.950869 containerd[2024]: time="2024-09-04T17:11:33.950726050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbmjh,Uid:6789a9bc-3db8-46e0-a107-717bb75f1944,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:33.955160 kubelet[3368]: E0904 17:11:33.955065 3368 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:33.955921 kubelet[3368]: E0904 17:11:33.955192 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zbmjh"
Sep 4 17:11:33.955921 kubelet[3368]: E0904 17:11:33.955241 3368 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-zbmjh"
Sep 4 17:11:33.955921 kubelet[3368]: E0904 17:11:33.955343 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-zbmjh_kube-system(6789a9bc-3db8-46e0-a107-717bb75f1944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-zbmjh_kube-system(6789a9bc-3db8-46e0-a107-717bb75f1944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zbmjh" podUID="6789a9bc-3db8-46e0-a107-717bb75f1944"
Sep 4 17:11:34.049149 containerd[2024]: time="2024-09-04T17:11:34.048805177Z" level=info msg="shim disconnected" id=b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6 namespace=k8s.io
Sep 4 17:11:34.049149 containerd[2024]: time="2024-09-04T17:11:34.048887899Z" level=warning msg="cleaning up after shim disconnected" id=b209b2ca0819709bd3b044ac5be2c08513175b153de389390c69b9e27728a7a6 namespace=k8s.io
Sep 4 17:11:34.049149 containerd[2024]: time="2024-09-04T17:11:34.048909317Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:11:34.049772 containerd[2024]: time="2024-09-04T17:11:34.049631478Z" level=error msg="Failed to destroy network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.055653 containerd[2024]: time="2024-09-04T17:11:34.054883467Z" level=error msg="encountered an error cleaning up failed sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.055653 containerd[2024]: time="2024-09-04T17:11:34.055031849Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2qq,Uid:6a2166df-664f-4f02-bf19-b368d9927d59,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.058069 kubelet[3368]: E0904 17:11:34.057798 3368 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.058069 kubelet[3368]: E0904 17:11:34.057884 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cr2qq"
Sep 4 17:11:34.058069 kubelet[3368]: E0904 17:11:34.057917 3368 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cr2qq"
Sep 4 17:11:34.058508 kubelet[3368]: E0904 17:11:34.057979 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cr2qq_kube-system(6a2166df-664f-4f02-bf19-b368d9927d59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cr2qq_kube-system(6a2166df-664f-4f02-bf19-b368d9927d59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cr2qq" podUID="6a2166df-664f-4f02-bf19-b368d9927d59"
Sep 4 17:11:34.199286 containerd[2024]: time="2024-09-04T17:11:34.198802331Z" level=error msg="Failed to destroy network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.199538 containerd[2024]: time="2024-09-04T17:11:34.199475376Z" level=error msg="encountered an error cleaning up failed sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.199668 containerd[2024]: time="2024-09-04T17:11:34.199578387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bd58bc-l9kdw,Uid:c21af7c7-5c73-4d45-82b9-064bb8ecb13d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.201976 kubelet[3368]: E0904 17:11:34.199925 3368 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.201976 kubelet[3368]: E0904 17:11:34.200002 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw"
Sep 4 17:11:34.201976 kubelet[3368]: E0904 17:11:34.200035 3368 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw"
Sep 4 17:11:34.202258 kubelet[3368]: E0904 17:11:34.200123 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596bd58bc-l9kdw_calico-system(c21af7c7-5c73-4d45-82b9-064bb8ecb13d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596bd58bc-l9kdw_calico-system(c21af7c7-5c73-4d45-82b9-064bb8ecb13d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw" podUID="c21af7c7-5c73-4d45-82b9-064bb8ecb13d"
Sep 4 17:11:34.220819 systemd[1]: Started sshd@7-172.31.21.183:22-139.178.89.65:60210.service - OpenSSH per-connection server daemon (139.178.89.65:60210).
Sep 4 17:11:34.252088 systemd[1]: Created slice kubepods-besteffort-podb761753e_49f2_48ff_98ce_f1b20c5a7621.slice - libcontainer container kubepods-besteffort-podb761753e_49f2_48ff_98ce_f1b20c5a7621.slice.
Sep 4 17:11:34.259925 containerd[2024]: time="2024-09-04T17:11:34.259862495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pk7h,Uid:b761753e-49f2-48ff-98ce-f1b20c5a7621,Namespace:calico-system,Attempt:0,}"
Sep 4 17:11:34.393762 containerd[2024]: time="2024-09-04T17:11:34.392970184Z" level=error msg="Failed to destroy network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.394674 containerd[2024]: time="2024-09-04T17:11:34.394414782Z" level=error msg="encountered an error cleaning up failed sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.394674 containerd[2024]: time="2024-09-04T17:11:34.394554832Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pk7h,Uid:b761753e-49f2-48ff-98ce-f1b20c5a7621,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.395095 kubelet[3368]: E0904 17:11:34.394994 3368 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.395188 kubelet[3368]: E0904 17:11:34.395086 3368 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pk7h"
Sep 4 17:11:34.395188 kubelet[3368]: E0904 17:11:34.395123 3368 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8pk7h"
Sep 4 17:11:34.395314 kubelet[3368]: E0904 17:11:34.395185 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8pk7h_calico-system(b761753e-49f2-48ff-98ce-f1b20c5a7621)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8pk7h_calico-system(b761753e-49f2-48ff-98ce-f1b20c5a7621)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621"
Sep 4 17:11:34.431727 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db-shm.mount: Deactivated successfully.
Sep 4 17:11:34.432123 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32-shm.mount: Deactivated successfully.
Sep 4 17:11:34.432265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875-shm.mount: Deactivated successfully.
Sep 4 17:11:34.444784 sshd[4230]: Accepted publickey for core from 139.178.89.65 port 60210 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:34.447300 sshd[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:34.454536 systemd-logind[2001]: New session 8 of user core.
Sep 4 17:11:34.462871 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:11:34.474481 kubelet[3368]: I0904 17:11:34.474420 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8"
Sep 4 17:11:34.479531 containerd[2024]: time="2024-09-04T17:11:34.479453833Z" level=info msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\""
Sep 4 17:11:34.480768 containerd[2024]: time="2024-09-04T17:11:34.480675059Z" level=info msg="Ensure that sandbox 32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8 in task-service has been cleanup successfully"
Sep 4 17:11:34.486191 containerd[2024]: time="2024-09-04T17:11:34.486133791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:11:34.491362 kubelet[3368]: I0904 17:11:34.490746 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db"
Sep 4 17:11:34.492006 containerd[2024]: time="2024-09-04T17:11:34.491956665Z" level=info msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\""
Sep 4 17:11:34.492666 containerd[2024]: time="2024-09-04T17:11:34.492589598Z" level=info msg="Ensure that sandbox ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db in task-service has been cleanup successfully"
Sep 4 17:11:34.503994 kubelet[3368]: I0904 17:11:34.502838 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875"
Sep 4 17:11:34.504239 containerd[2024]: time="2024-09-04T17:11:34.504147030Z" level=info msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\""
Sep 4 17:11:34.504646 containerd[2024]: time="2024-09-04T17:11:34.504579594Z" level=info msg="Ensure that sandbox 2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875 in task-service has been cleanup successfully"
Sep 4 17:11:34.514243 kubelet[3368]: I0904 17:11:34.514192 3368 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32"
Sep 4 17:11:34.518346 containerd[2024]: time="2024-09-04T17:11:34.518273722Z" level=info msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\""
Sep 4 17:11:34.519343 containerd[2024]: time="2024-09-04T17:11:34.518647684Z" level=info msg="Ensure that sandbox f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32 in task-service has been cleanup successfully"
Sep 4 17:11:34.676700 containerd[2024]: time="2024-09-04T17:11:34.675516353Z" level=error msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" failed" error="failed to destroy network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.676929 kubelet[3368]: E0904 17:11:34.675899 3368 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db"
Sep 4 17:11:34.676929 kubelet[3368]: E0904 17:11:34.675997 3368 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db"}
Sep 4 17:11:34.676929 kubelet[3368]: E0904 17:11:34.676708 3368 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c21af7c7-5c73-4d45-82b9-064bb8ecb13d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:11:34.676929 kubelet[3368]: E0904 17:11:34.676785 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c21af7c7-5c73-4d45-82b9-064bb8ecb13d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw" podUID="c21af7c7-5c73-4d45-82b9-064bb8ecb13d"
Sep 4 17:11:34.709200 containerd[2024]: time="2024-09-04T17:11:34.708996717Z" level=error msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" failed" error="failed to destroy network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.709353 containerd[2024]: time="2024-09-04T17:11:34.709218888Z" level=error msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" failed" error="failed to destroy network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:11:34.710458 kubelet[3368]: E0904 17:11:34.710043 3368 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8"
Sep 4 17:11:34.710458 kubelet[3368]: E0904 17:11:34.710119 3368 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8"}
Sep 4 17:11:34.710458 kubelet[3368]: E0904 17:11:34.710177 3368 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b761753e-49f2-48ff-98ce-f1b20c5a7621\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox
\\\"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:11:34.710458 kubelet[3368]: E0904 17:11:34.710217 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b761753e-49f2-48ff-98ce-f1b20c5a7621\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8pk7h" podUID="b761753e-49f2-48ff-98ce-f1b20c5a7621" Sep 4 17:11:34.710888 kubelet[3368]: E0904 17:11:34.710043 3368 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:34.710888 kubelet[3368]: E0904 17:11:34.710300 3368 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32"} Sep 4 17:11:34.710888 kubelet[3368]: E0904 17:11:34.710354 3368 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6a2166df-664f-4f02-bf19-b368d9927d59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:11:34.710888 kubelet[3368]: E0904 17:11:34.710389 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6a2166df-664f-4f02-bf19-b368d9927d59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cr2qq" podUID="6a2166df-664f-4f02-bf19-b368d9927d59" Sep 4 17:11:34.716497 containerd[2024]: time="2024-09-04T17:11:34.716426918Z" level=error msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" failed" error="failed to destroy network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:11:34.717079 kubelet[3368]: E0904 17:11:34.716749 3368 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:34.717079 kubelet[3368]: E0904 17:11:34.716820 3368 
kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875"} Sep 4 17:11:34.717079 kubelet[3368]: E0904 17:11:34.716882 3368 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6789a9bc-3db8-46e0-a107-717bb75f1944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:11:34.717079 kubelet[3368]: E0904 17:11:34.716921 3368 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6789a9bc-3db8-46e0-a107-717bb75f1944\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-zbmjh" podUID="6789a9bc-3db8-46e0-a107-717bb75f1944" Sep 4 17:11:34.781065 sshd[4230]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:34.787898 systemd[1]: sshd@7-172.31.21.183:22-139.178.89.65:60210.service: Deactivated successfully. Sep 4 17:11:34.791526 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:11:34.793364 systemd-logind[2001]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:11:34.796286 systemd-logind[2001]: Removed session 8. Sep 4 17:11:39.828104 systemd[1]: Started sshd@8-172.31.21.183:22-139.178.89.65:54692.service - OpenSSH per-connection server daemon (139.178.89.65:54692). 
Sep 4 17:11:40.023631 sshd[4352]: Accepted publickey for core from 139.178.89.65 port 54692 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:40.029576 sshd[4352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:40.043551 systemd-logind[2001]: New session 9 of user core. Sep 4 17:11:40.051002 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:11:40.297227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2117686220.mount: Deactivated successfully. Sep 4 17:11:40.338360 sshd[4352]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:40.346485 systemd[1]: sshd@8-172.31.21.183:22-139.178.89.65:54692.service: Deactivated successfully. Sep 4 17:11:40.351940 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:11:40.353747 systemd-logind[2001]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:11:40.355713 systemd-logind[2001]: Removed session 9. Sep 4 17:11:40.665253 containerd[2024]: time="2024-09-04T17:11:40.665040272Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:40.667759 containerd[2024]: time="2024-09-04T17:11:40.667690860Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:11:40.669329 containerd[2024]: time="2024-09-04T17:11:40.669251244Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:40.674899 containerd[2024]: time="2024-09-04T17:11:40.674777294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:40.677003 containerd[2024]: time="2024-09-04T17:11:40.676152329Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.18995206s" Sep 4 17:11:40.677003 containerd[2024]: time="2024-09-04T17:11:40.676215613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:11:40.705332 containerd[2024]: time="2024-09-04T17:11:40.705263493Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:11:40.740820 containerd[2024]: time="2024-09-04T17:11:40.740734236Z" level=info msg="CreateContainer within sandbox \"15b129e62eae1fb482f0646dc2ec6db2d0615d5b41410a574a714aec87febf2d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997\"" Sep 4 17:11:40.742410 containerd[2024]: time="2024-09-04T17:11:40.742310180Z" level=info msg="StartContainer for \"47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997\"" Sep 4 17:11:40.801926 systemd[1]: Started cri-containerd-47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997.scope - libcontainer container 47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997. Sep 4 17:11:40.901006 containerd[2024]: time="2024-09-04T17:11:40.900897999Z" level=info msg="StartContainer for \"47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997\" returns successfully" Sep 4 17:11:41.074142 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:11:41.074350 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Sep 4 17:11:41.591161 kubelet[3368]: I0904 17:11:41.590763 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sqn2w" podStartSLOduration=2.071726285 podStartE2EDuration="23.590471066s" podCreationTimestamp="2024-09-04 17:11:18 +0000 UTC" firstStartedPulling="2024-09-04 17:11:19.159212062 +0000 UTC m=+24.167037921" lastFinishedPulling="2024-09-04 17:11:40.677956855 +0000 UTC m=+45.685782702" observedRunningTime="2024-09-04 17:11:41.586794193 +0000 UTC m=+46.594620076" watchObservedRunningTime="2024-09-04 17:11:41.590471066 +0000 UTC m=+46.598296961" Sep 4 17:11:43.338665 kernel: bpftool[4601]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:11:43.667285 (udev-worker)[4408]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:11:43.675191 systemd-networkd[1934]: vxlan.calico: Link UP Sep 4 17:11:43.675207 systemd-networkd[1934]: vxlan.calico: Gained carrier Sep 4 17:11:43.720430 (udev-worker)[4407]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:11:44.953964 systemd-networkd[1934]: vxlan.calico: Gained IPv6LL Sep 4 17:11:45.381206 systemd[1]: Started sshd@9-172.31.21.183:22-139.178.89.65:54706.service - OpenSSH per-connection server daemon (139.178.89.65:54706). Sep 4 17:11:45.576967 sshd[4671]: Accepted publickey for core from 139.178.89.65 port 54706 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:45.582742 sshd[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:45.596490 systemd-logind[2001]: New session 10 of user core. Sep 4 17:11:45.601905 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:11:45.872739 sshd[4671]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:45.879930 systemd[1]: sshd@9-172.31.21.183:22-139.178.89.65:54706.service: Deactivated successfully. 
Sep 4 17:11:45.885085 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:11:45.887105 systemd-logind[2001]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:11:45.889157 systemd-logind[2001]: Removed session 10. Sep 4 17:11:45.910161 systemd[1]: Started sshd@10-172.31.21.183:22-139.178.89.65:54712.service - OpenSSH per-connection server daemon (139.178.89.65:54712). Sep 4 17:11:46.089761 sshd[4687]: Accepted publickey for core from 139.178.89.65 port 54712 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:46.092428 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:46.099912 systemd-logind[2001]: New session 11 of user core. Sep 4 17:11:46.111923 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:11:46.208729 containerd[2024]: time="2024-09-04T17:11:46.206442459Z" level=info msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\"" Sep 4 17:11:46.208729 containerd[2024]: time="2024-09-04T17:11:46.206442423Z" level=info msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\"" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.465 [INFO][4721] k8s.go 608: Cleaning up netns ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.468 [INFO][4721] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" iface="eth0" netns="/var/run/netns/cni-3ea96a50-6101-f7c2-8b7a-9eea48a3d360" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.469 [INFO][4721] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" iface="eth0" netns="/var/run/netns/cni-3ea96a50-6101-f7c2-8b7a-9eea48a3d360" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.470 [INFO][4721] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" iface="eth0" netns="/var/run/netns/cni-3ea96a50-6101-f7c2-8b7a-9eea48a3d360" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.470 [INFO][4721] k8s.go 615: Releasing IP address(es) ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.470 [INFO][4721] utils.go 188: Calico CNI releasing IP address ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.574 [INFO][4737] ipam_plugin.go 417: Releasing address using handleID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.574 [INFO][4737] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.574 [INFO][4737] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.593 [WARNING][4737] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.593 [INFO][4737] ipam_plugin.go 445: Releasing address using workloadID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.601 [INFO][4737] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:46.621837 containerd[2024]: 2024-09-04 17:11:46.616 [INFO][4721] k8s.go 621: Teardown processing complete. ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:46.625711 containerd[2024]: time="2024-09-04T17:11:46.624659466Z" level=info msg="TearDown network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" successfully" Sep 4 17:11:46.625711 containerd[2024]: time="2024-09-04T17:11:46.624747241Z" level=info msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" returns successfully" Sep 4 17:11:46.631151 containerd[2024]: time="2024-09-04T17:11:46.631071738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pk7h,Uid:b761753e-49f2-48ff-98ce-f1b20c5a7621,Namespace:calico-system,Attempt:1,}" Sep 4 17:11:46.639141 systemd[1]: run-netns-cni\x2d3ea96a50\x2d6101\x2df7c2\x2d8b7a\x2d9eea48a3d360.mount: Deactivated successfully. 
Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.431 [INFO][4720] k8s.go 608: Cleaning up netns ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.431 [INFO][4720] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" iface="eth0" netns="/var/run/netns/cni-b43fd952-eb66-1560-cabf-521f702b8cb6" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.432 [INFO][4720] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" iface="eth0" netns="/var/run/netns/cni-b43fd952-eb66-1560-cabf-521f702b8cb6" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.433 [INFO][4720] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" iface="eth0" netns="/var/run/netns/cni-b43fd952-eb66-1560-cabf-521f702b8cb6" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.433 [INFO][4720] k8s.go 615: Releasing IP address(es) ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.433 [INFO][4720] utils.go 188: Calico CNI releasing IP address ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.582 [INFO][4733] ipam_plugin.go 417: Releasing address using handleID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.585 [INFO][4733] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.602 [INFO][4733] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.648 [WARNING][4733] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.651 [INFO][4733] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.657 [INFO][4733] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:46.666998 containerd[2024]: 2024-09-04 17:11:46.662 [INFO][4720] k8s.go 621: Teardown processing complete. 
ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:46.673327 containerd[2024]: time="2024-09-04T17:11:46.673127904Z" level=info msg="TearDown network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" successfully" Sep 4 17:11:46.673327 containerd[2024]: time="2024-09-04T17:11:46.673182147Z" level=info msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" returns successfully" Sep 4 17:11:46.675806 containerd[2024]: time="2024-09-04T17:11:46.675501802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2qq,Uid:6a2166df-664f-4f02-bf19-b368d9927d59,Namespace:kube-system,Attempt:1,}" Sep 4 17:11:46.682098 sshd[4687]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:46.685801 systemd[1]: run-netns-cni\x2db43fd952\x2deb66\x2d1560\x2dcabf\x2d521f702b8cb6.mount: Deactivated successfully. Sep 4 17:11:46.702367 systemd[1]: sshd@10-172.31.21.183:22-139.178.89.65:54712.service: Deactivated successfully. Sep 4 17:11:46.734124 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:11:46.744904 systemd-logind[2001]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:11:46.821886 systemd[1]: Started sshd@11-172.31.21.183:22-139.178.89.65:54718.service - OpenSSH per-connection server daemon (139.178.89.65:54718). Sep 4 17:11:46.826286 systemd-logind[2001]: Removed session 11. Sep 4 17:11:47.061949 sshd[4767]: Accepted publickey for core from 139.178.89.65 port 54718 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:47.063927 sshd[4767]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:47.077083 systemd-logind[2001]: New session 12 of user core. Sep 4 17:11:47.088395 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 17:11:47.191713 systemd-networkd[1934]: cali45ccfb3af13: Link UP Sep 4 17:11:47.197057 systemd-networkd[1934]: cali45ccfb3af13: Gained carrier Sep 4 17:11:47.207510 (udev-worker)[4791]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:46.880 [INFO][4747] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0 csi-node-driver- calico-system b761753e-49f2-48ff-98ce-f1b20c5a7621 795 0 2024-09-04 17:11:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65cb9bb8f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-21-183 csi-node-driver-8pk7h eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali45ccfb3af13 [] []}} ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:46.880 [INFO][4747] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.034 [INFO][4774] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" HandleID="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.103 [INFO][4774] 
ipam_plugin.go 270: Auto assigning IP ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" HandleID="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dae0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-183", "pod":"csi-node-driver-8pk7h", "timestamp":"2024-09-04 17:11:47.03408299 +0000 UTC"}, Hostname:"ip-172-31-21-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.105 [INFO][4774] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.105 [INFO][4774] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.106 [INFO][4774] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-183' Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.110 [INFO][4774] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.121 [INFO][4774] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.131 [INFO][4774] ipam.go 489: Trying affinity for 192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.134 [INFO][4774] ipam.go 155: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.141 [INFO][4774] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.141 [INFO][4774] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.144 [INFO][4774] ipam.go 1685: Creating new handle: k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4 Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.154 [INFO][4774] ipam.go 1203: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4774] ipam.go 1216: Successfully claimed IPs: [192.168.82.1/26] block=192.168.82.0/26 
handle="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4774] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.1/26] handle="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" host="ip-172-31-21-183" Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4774] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:47.295290 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4774] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.82.1/26] IPv6=[] ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" HandleID="k8s-pod-network.763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.172 [INFO][4747] k8s.go 386: Populated endpoint ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b761753e-49f2-48ff-98ce-f1b20c5a7621", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"", Pod:"csi-node-driver-8pk7h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali45ccfb3af13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.173 [INFO][4747] k8s.go 387: Calico CNI using IPs: [192.168.82.1/32] ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.174 [INFO][4747] dataplane_linux.go 68: Setting the host side veth name to cali45ccfb3af13 ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.196 [INFO][4747] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.200 [INFO][4747] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" 
WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b761753e-49f2-48ff-98ce-f1b20c5a7621", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4", Pod:"csi-node-driver-8pk7h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali45ccfb3af13", MAC:"b2:e3:51:f2:fa:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:47.304510 containerd[2024]: 2024-09-04 17:11:47.290 [INFO][4747] k8s.go 500: Wrote updated endpoint to datastore ContainerID="763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4" Namespace="calico-system" Pod="csi-node-driver-8pk7h" WorkloadEndpoint="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:47.420889 containerd[2024]: 
time="2024-09-04T17:11:47.420219532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:47.420889 containerd[2024]: time="2024-09-04T17:11:47.420348345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:47.420889 containerd[2024]: time="2024-09-04T17:11:47.420395048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:47.420889 containerd[2024]: time="2024-09-04T17:11:47.420429289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:47.428163 systemd-networkd[1934]: califc3105a5331: Link UP Sep 4 17:11:47.433934 systemd-networkd[1934]: califc3105a5331: Gained carrier Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:46.933 [INFO][4763] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0 coredns-7db6d8ff4d- kube-system 6a2166df-664f-4f02-bf19-b368d9927d59 794 0 2024-09-04 17:11:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-183 coredns-7db6d8ff4d-cr2qq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califc3105a5331 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:46.934 [INFO][4763] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.066 [INFO][4778] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" HandleID="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.114 [INFO][4778] ipam_plugin.go 270: Auto assigning IP ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" HandleID="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002604b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-183", "pod":"coredns-7db6d8ff4d-cr2qq", "timestamp":"2024-09-04 17:11:47.066297021 +0000 UTC"}, Hostname:"ip-172-31-21-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.115 [INFO][4778] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4778] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.168 [INFO][4778] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-183' Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.176 [INFO][4778] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.231 [INFO][4778] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.298 [INFO][4778] ipam.go 489: Trying affinity for 192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.308 [INFO][4778] ipam.go 155: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.317 [INFO][4778] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.317 [INFO][4778] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.324 [INFO][4778] ipam.go 1685: Creating new handle: k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.353 [INFO][4778] ipam.go 1203: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.371 [INFO][4778] ipam.go 1216: Successfully claimed IPs: [192.168.82.2/26] block=192.168.82.0/26 
handle="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.371 [INFO][4778] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.2/26] handle="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" host="ip-172-31-21-183" Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.371 [INFO][4778] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:47.480785 containerd[2024]: 2024-09-04 17:11:47.375 [INFO][4778] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.82.2/26] IPv6=[] ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" HandleID="k8s-pod-network.64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.394 [INFO][4763] k8s.go 386: Populated endpoint ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6a2166df-664f-4f02-bf19-b368d9927d59", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"", Pod:"coredns-7db6d8ff4d-cr2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc3105a5331", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.394 [INFO][4763] k8s.go 387: Calico CNI using IPs: [192.168.82.2/32] ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.394 [INFO][4763] dataplane_linux.go 68: Setting the host side veth name to califc3105a5331 ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.427 [INFO][4763] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" 
WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.437 [INFO][4763] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6a2166df-664f-4f02-bf19-b368d9927d59", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b", Pod:"coredns-7db6d8ff4d-cr2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc3105a5331", MAC:"92:33:5a:40:25:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:47.482228 containerd[2024]: 2024-09-04 17:11:47.471 [INFO][4763] k8s.go 500: Wrote updated endpoint to datastore ContainerID="64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cr2qq" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:47.492935 sshd[4767]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:47.505936 systemd[1]: Started cri-containerd-763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4.scope - libcontainer container 763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4. Sep 4 17:11:47.506887 systemd[1]: sshd@11-172.31.21.183:22-139.178.89.65:54718.service: Deactivated successfully. Sep 4 17:11:47.516394 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:11:47.522696 systemd-logind[2001]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:11:47.531235 systemd-logind[2001]: Removed session 12. Sep 4 17:11:47.562934 containerd[2024]: time="2024-09-04T17:11:47.562349923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:47.562934 containerd[2024]: time="2024-09-04T17:11:47.562503072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:47.562934 containerd[2024]: time="2024-09-04T17:11:47.562582467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:47.562934 containerd[2024]: time="2024-09-04T17:11:47.562669199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:47.611680 systemd[1]: Started cri-containerd-64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b.scope - libcontainer container 64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b. Sep 4 17:11:47.644730 containerd[2024]: time="2024-09-04T17:11:47.644659830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8pk7h,Uid:b761753e-49f2-48ff-98ce-f1b20c5a7621,Namespace:calico-system,Attempt:1,} returns sandbox id \"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4\"" Sep 4 17:11:47.650189 containerd[2024]: time="2024-09-04T17:11:47.650114864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:11:47.706505 containerd[2024]: time="2024-09-04T17:11:47.706324579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cr2qq,Uid:6a2166df-664f-4f02-bf19-b368d9927d59,Namespace:kube-system,Attempt:1,} returns sandbox id \"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b\"" Sep 4 17:11:47.714092 containerd[2024]: time="2024-09-04T17:11:47.713972245Z" level=info msg="CreateContainer within sandbox \"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:11:47.755484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2005942095.mount: Deactivated successfully. 
Sep 4 17:11:47.795849 containerd[2024]: time="2024-09-04T17:11:47.795640864Z" level=info msg="CreateContainer within sandbox \"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d\"" Sep 4 17:11:47.797670 containerd[2024]: time="2024-09-04T17:11:47.796672371Z" level=info msg="StartContainer for \"0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d\"" Sep 4 17:11:47.860910 systemd[1]: Started cri-containerd-0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d.scope - libcontainer container 0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d. Sep 4 17:11:47.913035 containerd[2024]: time="2024-09-04T17:11:47.912845406Z" level=info msg="StartContainer for \"0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d\" returns successfully" Sep 4 17:11:48.205973 containerd[2024]: time="2024-09-04T17:11:48.205903608Z" level=info msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\"" Sep 4 17:11:48.346577 systemd-networkd[1934]: cali45ccfb3af13: Gained IPv6LL Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.311 [INFO][4959] k8s.go 608: Cleaning up netns ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.314 [INFO][4959] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" iface="eth0" netns="/var/run/netns/cni-c6bcf484-2a3e-6db8-d183-b624cbf62cb9" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.314 [INFO][4959] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" iface="eth0" netns="/var/run/netns/cni-c6bcf484-2a3e-6db8-d183-b624cbf62cb9" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.315 [INFO][4959] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" iface="eth0" netns="/var/run/netns/cni-c6bcf484-2a3e-6db8-d183-b624cbf62cb9" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.315 [INFO][4959] k8s.go 615: Releasing IP address(es) ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.315 [INFO][4959] utils.go 188: Calico CNI releasing IP address ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.359 [INFO][4965] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.359 [INFO][4965] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.359 [INFO][4965] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.372 [WARNING][4965] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.372 [INFO][4965] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.376 [INFO][4965] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:48.382851 containerd[2024]: 2024-09-04 17:11:48.379 [INFO][4959] k8s.go 621: Teardown processing complete. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:48.385791 containerd[2024]: time="2024-09-04T17:11:48.383192237Z" level=info msg="TearDown network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" successfully" Sep 4 17:11:48.385791 containerd[2024]: time="2024-09-04T17:11:48.383248365Z" level=info msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" returns successfully" Sep 4 17:11:48.385791 containerd[2024]: time="2024-09-04T17:11:48.385085199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbmjh,Uid:6789a9bc-3db8-46e0-a107-717bb75f1944,Namespace:kube-system,Attempt:1,}" Sep 4 17:11:48.614669 kubelet[3368]: I0904 17:11:48.613833 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cr2qq" podStartSLOduration=39.613807064 podStartE2EDuration="39.613807064s" podCreationTimestamp="2024-09-04 17:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-09-04 17:11:48.608652948 +0000 UTC m=+53.616478807" watchObservedRunningTime="2024-09-04 17:11:48.613807064 +0000 UTC m=+53.621632911" Sep 4 17:11:48.633995 systemd-networkd[1934]: cali7e525b28930: Link UP Sep 4 17:11:48.634484 systemd-networkd[1934]: cali7e525b28930: Gained carrier Sep 4 17:11:48.635420 systemd[1]: run-containerd-runc-k8s.io-0fc826158d8044b89012a68faf8f17efe3539cf7572ebf058441bb919faaa34d-runc.Gf9R1x.mount: Deactivated successfully. Sep 4 17:11:48.635613 systemd[1]: run-netns-cni\x2dc6bcf484\x2d2a3e\x2d6db8\x2dd183\x2db624cbf62cb9.mount: Deactivated successfully. Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.472 [INFO][4971] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0 coredns-7db6d8ff4d- kube-system 6789a9bc-3db8-46e0-a107-717bb75f1944 830 0 2024-09-04 17:11:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-183 coredns-7db6d8ff4d-zbmjh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7e525b28930 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.472 [INFO][4971] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.521 [INFO][4984] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" HandleID="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.538 [INFO][4984] ipam_plugin.go 270: Auto assigning IP ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" HandleID="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263d00), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-183", "pod":"coredns-7db6d8ff4d-zbmjh", "timestamp":"2024-09-04 17:11:48.521728679 +0000 UTC"}, Hostname:"ip-172-31-21-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.539 [INFO][4984] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.539 [INFO][4984] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.539 [INFO][4984] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-183' Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.542 [INFO][4984] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.550 [INFO][4984] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.560 [INFO][4984] ipam.go 489: Trying affinity for 192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.564 [INFO][4984] ipam.go 155: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.570 [INFO][4984] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.570 [INFO][4984] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.574 [INFO][4984] ipam.go 1685: Creating new handle: k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.582 [INFO][4984] ipam.go 1203: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.603 [INFO][4984] ipam.go 1216: Successfully claimed IPs: [192.168.82.3/26] block=192.168.82.0/26 
handle="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.603 [INFO][4984] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.3/26] handle="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" host="ip-172-31-21-183" Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.603 [INFO][4984] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:48.684939 containerd[2024]: 2024-09-04 17:11:48.603 [INFO][4984] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.82.3/26] IPv6=[] ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" HandleID="k8s-pod-network.3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.612 [INFO][4971] k8s.go 386: Populated endpoint ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6789a9bc-3db8-46e0-a107-717bb75f1944", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"", Pod:"coredns-7db6d8ff4d-zbmjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e525b28930", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.614 [INFO][4971] k8s.go 387: Calico CNI using IPs: [192.168.82.3/32] ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.618 [INFO][4971] dataplane_linux.go 68: Setting the host side veth name to cali7e525b28930 ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.632 [INFO][4971] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" 
WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.647 [INFO][4971] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6789a9bc-3db8-46e0-a107-717bb75f1944", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df", Pod:"coredns-7db6d8ff4d-zbmjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e525b28930", MAC:"52:9d:c3:55:72:10", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:48.689269 containerd[2024]: 2024-09-04 17:11:48.677 [INFO][4971] k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df" Namespace="kube-system" Pod="coredns-7db6d8ff4d-zbmjh" WorkloadEndpoint="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:48.771831 containerd[2024]: time="2024-09-04T17:11:48.765385709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:48.771831 containerd[2024]: time="2024-09-04T17:11:48.765501387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:48.771831 containerd[2024]: time="2024-09-04T17:11:48.765549362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:48.771831 containerd[2024]: time="2024-09-04T17:11:48.765582787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:48.843957 systemd[1]: Started cri-containerd-3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df.scope - libcontainer container 3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df. 
Sep 4 17:11:48.947864 containerd[2024]: time="2024-09-04T17:11:48.947118587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-zbmjh,Uid:6789a9bc-3db8-46e0-a107-717bb75f1944,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df\"" Sep 4 17:11:48.958142 containerd[2024]: time="2024-09-04T17:11:48.957547503Z" level=info msg="CreateContainer within sandbox \"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:11:48.997261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999473236.mount: Deactivated successfully. Sep 4 17:11:49.007426 containerd[2024]: time="2024-09-04T17:11:49.006656906Z" level=info msg="CreateContainer within sandbox \"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80da8d7529928e128cc3e923b7c4a8d40aa548b0160dbf86d1c6056513e1564b\"" Sep 4 17:11:49.010292 containerd[2024]: time="2024-09-04T17:11:49.009925407Z" level=info msg="StartContainer for \"80da8d7529928e128cc3e923b7c4a8d40aa548b0160dbf86d1c6056513e1564b\"" Sep 4 17:11:49.050934 systemd-networkd[1934]: califc3105a5331: Gained IPv6LL Sep 4 17:11:49.132080 systemd[1]: Started cri-containerd-80da8d7529928e128cc3e923b7c4a8d40aa548b0160dbf86d1c6056513e1564b.scope - libcontainer container 80da8d7529928e128cc3e923b7c4a8d40aa548b0160dbf86d1c6056513e1564b. 
Sep 4 17:11:49.217194 containerd[2024]: time="2024-09-04T17:11:49.216230846Z" level=info msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\"" Sep 4 17:11:49.352867 containerd[2024]: time="2024-09-04T17:11:49.352677068Z" level=info msg="StartContainer for \"80da8d7529928e128cc3e923b7c4a8d40aa548b0160dbf86d1c6056513e1564b\" returns successfully" Sep 4 17:11:49.373869 containerd[2024]: time="2024-09-04T17:11:49.373708218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:49.382161 containerd[2024]: time="2024-09-04T17:11:49.381983606Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:11:49.392686 containerd[2024]: time="2024-09-04T17:11:49.392625556Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:49.412135 containerd[2024]: time="2024-09-04T17:11:49.412037349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:49.416476 containerd[2024]: time="2024-09-04T17:11:49.416004780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.76562552s" Sep 4 17:11:49.416476 containerd[2024]: time="2024-09-04T17:11:49.416066059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference 
\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:11:49.423358 containerd[2024]: time="2024-09-04T17:11:49.423141291Z" level=info msg="CreateContainer within sandbox \"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:11:49.468109 containerd[2024]: time="2024-09-04T17:11:49.466092480Z" level=info msg="CreateContainer within sandbox \"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a494a861ae6720b68a6410c3560aaeba809b06543d6107ac491598b44e4c74ee\"" Sep 4 17:11:49.474132 containerd[2024]: time="2024-09-04T17:11:49.471390680Z" level=info msg="StartContainer for \"a494a861ae6720b68a6410c3560aaeba809b06543d6107ac491598b44e4c74ee\"" Sep 4 17:11:49.590908 systemd[1]: Started cri-containerd-a494a861ae6720b68a6410c3560aaeba809b06543d6107ac491598b44e4c74ee.scope - libcontainer container a494a861ae6720b68a6410c3560aaeba809b06543d6107ac491598b44e4c74ee. 
Sep 4 17:11:49.731628 kubelet[3368]: I0904 17:11:49.729510 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-zbmjh" podStartSLOduration=40.729487878 podStartE2EDuration="40.729487878s" podCreationTimestamp="2024-09-04 17:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:11:49.678433527 +0000 UTC m=+54.686259386" watchObservedRunningTime="2024-09-04 17:11:49.729487878 +0000 UTC m=+54.737313737" Sep 4 17:11:49.736168 containerd[2024]: time="2024-09-04T17:11:49.735822796Z" level=info msg="StartContainer for \"a494a861ae6720b68a6410c3560aaeba809b06543d6107ac491598b44e4c74ee\" returns successfully" Sep 4 17:11:49.740815 containerd[2024]: time="2024-09-04T17:11:49.739851673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.484 [INFO][5099] k8s.go 608: Cleaning up netns ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.485 [INFO][5099] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" iface="eth0" netns="/var/run/netns/cni-3d055704-14c2-306e-2d1b-3baa12b5f850" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.485 [INFO][5099] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" iface="eth0" netns="/var/run/netns/cni-3d055704-14c2-306e-2d1b-3baa12b5f850" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.487 [INFO][5099] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" iface="eth0" netns="/var/run/netns/cni-3d055704-14c2-306e-2d1b-3baa12b5f850" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.488 [INFO][5099] k8s.go 615: Releasing IP address(es) ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.488 [INFO][5099] utils.go 188: Calico CNI releasing IP address ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.656 [INFO][5120] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.656 [INFO][5120] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.659 [INFO][5120] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.705 [WARNING][5120] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.706 [INFO][5120] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.726 [INFO][5120] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:49.744818 containerd[2024]: 2024-09-04 17:11:49.736 [INFO][5099] k8s.go 621: Teardown processing complete. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:49.752089 systemd[1]: run-netns-cni\x2d3d055704\x2d14c2\x2d306e\x2d2d1b\x2d3baa12b5f850.mount: Deactivated successfully. 
Sep 4 17:11:49.755937 containerd[2024]: time="2024-09-04T17:11:49.755806422Z" level=info msg="TearDown network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" successfully" Sep 4 17:11:49.756191 containerd[2024]: time="2024-09-04T17:11:49.755914092Z" level=info msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" returns successfully" Sep 4 17:11:49.757982 containerd[2024]: time="2024-09-04T17:11:49.757735907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bd58bc-l9kdw,Uid:c21af7c7-5c73-4d45-82b9-064bb8ecb13d,Namespace:calico-system,Attempt:1,}" Sep 4 17:11:50.119097 systemd-networkd[1934]: cali5eeb5cbc941: Link UP Sep 4 17:11:50.119564 systemd-networkd[1934]: cali5eeb5cbc941: Gained carrier Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:49.947 [INFO][5166] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0 calico-kube-controllers-596bd58bc- calico-system c21af7c7-5c73-4d45-82b9-064bb8ecb13d 849 0 2024-09-04 17:11:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596bd58bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-183 calico-kube-controllers-596bd58bc-l9kdw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5eeb5cbc941 [] []}} ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:49.947 [INFO][5166] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.008 [INFO][5174] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" HandleID="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.033 [INFO][5174] ipam_plugin.go 270: Auto assigning IP ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" HandleID="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-183", "pod":"calico-kube-controllers-596bd58bc-l9kdw", "timestamp":"2024-09-04 17:11:50.008529807 +0000 UTC"}, Hostname:"ip-172-31-21-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.033 [INFO][5174] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.033 [INFO][5174] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.033 [INFO][5174] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-183' Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.037 [INFO][5174] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.048 [INFO][5174] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.058 [INFO][5174] ipam.go 489: Trying affinity for 192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.061 [INFO][5174] ipam.go 155: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.066 [INFO][5174] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.067 [INFO][5174] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.073 [INFO][5174] ipam.go 1685: Creating new handle: k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243 Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.084 [INFO][5174] ipam.go 1203: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.099 [INFO][5174] ipam.go 1216: Successfully claimed IPs: [192.168.82.4/26] block=192.168.82.0/26 
handle="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.099 [INFO][5174] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.4/26] handle="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" host="ip-172-31-21-183" Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.099 [INFO][5174] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:50.173637 containerd[2024]: 2024-09-04 17:11:50.099 [INFO][5174] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.82.4/26] IPv6=[] ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" HandleID="k8s-pod-network.b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.106 [INFO][5166] k8s.go 386: Populated endpoint ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0", GenerateName:"calico-kube-controllers-596bd58bc-", Namespace:"calico-system", SelfLink:"", UID:"c21af7c7-5c73-4d45-82b9-064bb8ecb13d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bd58bc", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"", Pod:"calico-kube-controllers-596bd58bc-l9kdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5eeb5cbc941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.106 [INFO][5166] k8s.go 387: Calico CNI using IPs: [192.168.82.4/32] ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.106 [INFO][5166] dataplane_linux.go 68: Setting the host side veth name to cali5eeb5cbc941 ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.118 [INFO][5166] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.130 [INFO][5166] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0", GenerateName:"calico-kube-controllers-596bd58bc-", Namespace:"calico-system", SelfLink:"", UID:"c21af7c7-5c73-4d45-82b9-064bb8ecb13d", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bd58bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243", Pod:"calico-kube-controllers-596bd58bc-l9kdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5eeb5cbc941", MAC:"5a:f4:c6:ee:69:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:50.175451 containerd[2024]: 2024-09-04 17:11:50.166 [INFO][5166] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243" Namespace="calico-system" Pod="calico-kube-controllers-596bd58bc-l9kdw" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:50.266923 systemd-networkd[1934]: cali7e525b28930: Gained IPv6LL Sep 4 17:11:50.268778 containerd[2024]: time="2024-09-04T17:11:50.249531481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:50.268778 containerd[2024]: time="2024-09-04T17:11:50.252028645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:50.268778 containerd[2024]: time="2024-09-04T17:11:50.252118942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:50.268778 containerd[2024]: time="2024-09-04T17:11:50.252145704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:50.309899 systemd[1]: Started cri-containerd-b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243.scope - libcontainer container b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243. 
Sep 4 17:11:50.429520 containerd[2024]: time="2024-09-04T17:11:50.429202833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596bd58bc-l9kdw,Uid:c21af7c7-5c73-4d45-82b9-064bb8ecb13d,Namespace:calico-system,Attempt:1,} returns sandbox id \"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243\"" Sep 4 17:11:51.292267 systemd-networkd[1934]: cali5eeb5cbc941: Gained IPv6LL Sep 4 17:11:51.555878 containerd[2024]: time="2024-09-04T17:11:51.555569023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:51.559659 containerd[2024]: time="2024-09-04T17:11:51.559537162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:11:51.561866 containerd[2024]: time="2024-09-04T17:11:51.561648669Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:51.570213 containerd[2024]: time="2024-09-04T17:11:51.570031798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:51.573502 containerd[2024]: time="2024-09-04T17:11:51.573413408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.833495943s" Sep 4 17:11:51.573502 containerd[2024]: time="2024-09-04T17:11:51.573483920Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:11:51.604400 containerd[2024]: time="2024-09-04T17:11:51.604099124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:11:51.610414 containerd[2024]: time="2024-09-04T17:11:51.610341487Z" level=info msg="CreateContainer within sandbox \"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:11:51.692113 containerd[2024]: time="2024-09-04T17:11:51.692031681Z" level=info msg="CreateContainer within sandbox \"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cce87df08270c704ead99b03a42a9b7ead10ace99494ff5ca2e7b9743c2c74db\"" Sep 4 17:11:51.693119 containerd[2024]: time="2024-09-04T17:11:51.693036139Z" level=info msg="StartContainer for \"cce87df08270c704ead99b03a42a9b7ead10ace99494ff5ca2e7b9743c2c74db\"" Sep 4 17:11:51.777931 systemd[1]: Started cri-containerd-cce87df08270c704ead99b03a42a9b7ead10ace99494ff5ca2e7b9743c2c74db.scope - libcontainer container cce87df08270c704ead99b03a42a9b7ead10ace99494ff5ca2e7b9743c2c74db. 
Sep 4 17:11:51.861172 containerd[2024]: time="2024-09-04T17:11:51.860638183Z" level=info msg="StartContainer for \"cce87df08270c704ead99b03a42a9b7ead10ace99494ff5ca2e7b9743c2c74db\" returns successfully" Sep 4 17:11:52.460219 kubelet[3368]: I0904 17:11:52.460156 3368 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:11:52.460219 kubelet[3368]: I0904 17:11:52.460216 3368 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:11:52.534202 systemd[1]: Started sshd@12-172.31.21.183:22-139.178.89.65:50362.service - OpenSSH per-connection server daemon (139.178.89.65:50362). Sep 4 17:11:52.735889 sshd[5283]: Accepted publickey for core from 139.178.89.65 port 50362 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:52.741304 sshd[5283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:52.758735 systemd-logind[2001]: New session 13 of user core. Sep 4 17:11:52.770948 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:11:53.111982 sshd[5283]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:53.121675 systemd[1]: sshd@12-172.31.21.183:22-139.178.89.65:50362.service: Deactivated successfully. Sep 4 17:11:53.128588 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:11:53.133767 systemd-logind[2001]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:11:53.137353 systemd-logind[2001]: Removed session 13. 
Sep 4 17:11:55.261424 containerd[2024]: time="2024-09-04T17:11:55.261251179Z" level=info msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\"" Sep 4 17:11:55.386540 ntpd[1995]: Listen normally on 7 vxlan.calico 192.168.82.0:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 7 vxlan.calico 192.168.82.0:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 8 vxlan.calico [fe80::6409:78ff:fe0f:9cb9%4]:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 9 cali45ccfb3af13 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 10 califc3105a5331 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 11 cali7e525b28930 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:11:55.389264 ntpd[1995]: 4 Sep 17:11:55 ntpd[1995]: Listen normally on 12 cali5eeb5cbc941 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:11:55.387424 ntpd[1995]: Listen normally on 8 vxlan.calico [fe80::6409:78ff:fe0f:9cb9%4]:123 Sep 4 17:11:55.387722 ntpd[1995]: Listen normally on 9 cali45ccfb3af13 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:11:55.387921 ntpd[1995]: Listen normally on 10 califc3105a5331 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:11:55.388324 ntpd[1995]: Listen normally on 11 cali7e525b28930 [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:11:55.388622 ntpd[1995]: Listen normally on 12 cali5eeb5cbc941 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.492 [WARNING][5326] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6789a9bc-3db8-46e0-a107-717bb75f1944", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df", Pod:"coredns-7db6d8ff4d-zbmjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7e525b28930", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.493 [INFO][5326] k8s.go 608: Cleaning up 
netns ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.493 [INFO][5326] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" iface="eth0" netns="" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.493 [INFO][5326] k8s.go 615: Releasing IP address(es) ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.493 [INFO][5326] utils.go 188: Calico CNI releasing IP address ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.628 [INFO][5335] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.629 [INFO][5335] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.629 [INFO][5335] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.658 [WARNING][5335] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.662 [INFO][5335] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.670 [INFO][5335] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:55.695760 containerd[2024]: 2024-09-04 17:11:55.681 [INFO][5326] k8s.go 621: Teardown processing complete. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:55.695760 containerd[2024]: time="2024-09-04T17:11:55.694131222Z" level=info msg="TearDown network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" successfully" Sep 4 17:11:55.695760 containerd[2024]: time="2024-09-04T17:11:55.694168393Z" level=info msg="StopPodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" returns successfully" Sep 4 17:11:55.698815 containerd[2024]: time="2024-09-04T17:11:55.697537108Z" level=info msg="RemovePodSandbox for \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\"" Sep 4 17:11:55.698815 containerd[2024]: time="2024-09-04T17:11:55.697683918Z" level=info msg="Forcibly stopping sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\"" Sep 4 17:11:55.724151 containerd[2024]: time="2024-09-04T17:11:55.724031625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:55.730461 containerd[2024]: 
time="2024-09-04T17:11:55.728947770Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:11:55.732356 containerd[2024]: time="2024-09-04T17:11:55.731948850Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:55.750181 containerd[2024]: time="2024-09-04T17:11:55.748484978Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:55.750181 containerd[2024]: time="2024-09-04T17:11:55.750088211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.145891383s" Sep 4 17:11:55.752869 containerd[2024]: time="2024-09-04T17:11:55.752795492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:11:55.790442 containerd[2024]: time="2024-09-04T17:11:55.790387035Z" level=info msg="CreateContainer within sandbox \"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:11:55.887433 containerd[2024]: time="2024-09-04T17:11:55.887330023Z" level=info msg="CreateContainer within sandbox \"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"97d3ffd14a82b63c0350abfcc7fcc838accba9160a56e70ffe05d9d3253b1493\"" Sep 4 17:11:55.894334 containerd[2024]: time="2024-09-04T17:11:55.894282241Z" level=info msg="StartContainer for \"97d3ffd14a82b63c0350abfcc7fcc838accba9160a56e70ffe05d9d3253b1493\"" Sep 4 17:11:56.054827 systemd[1]: Started cri-containerd-97d3ffd14a82b63c0350abfcc7fcc838accba9160a56e70ffe05d9d3253b1493.scope - libcontainer container 97d3ffd14a82b63c0350abfcc7fcc838accba9160a56e70ffe05d9d3253b1493. Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:55.880 [WARNING][5356] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6789a9bc-3db8-46e0-a107-717bb75f1944", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"3e0dbcefbf3ab539c2611a1cffbdbdb085ad8e90d5196f0f86e37cbaf05fb8df", Pod:"coredns-7db6d8ff4d-zbmjh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali7e525b28930", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:55.881 [INFO][5356] k8s.go 608: Cleaning up netns ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:55.881 [INFO][5356] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" iface="eth0" netns="" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:55.881 [INFO][5356] k8s.go 615: Releasing IP address(es) ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:55.881 [INFO][5356] utils.go 188: Calico CNI releasing IP address ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.051 [INFO][5367] ipam_plugin.go 417: Releasing address using handleID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.052 [INFO][5367] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.052 [INFO][5367] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.081 [WARNING][5367] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.082 [INFO][5367] ipam_plugin.go 445: Releasing address using workloadID ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" HandleID="k8s-pod-network.2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--zbmjh-eth0" Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.087 [INFO][5367] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:56.098013 containerd[2024]: 2024-09-04 17:11:56.093 [INFO][5356] k8s.go 621: Teardown processing complete. ContainerID="2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875" Sep 4 17:11:56.098927 containerd[2024]: time="2024-09-04T17:11:56.098636681Z" level=info msg="TearDown network for sandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" successfully" Sep 4 17:11:56.106444 containerd[2024]: time="2024-09-04T17:11:56.106357223Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:56.106740 containerd[2024]: time="2024-09-04T17:11:56.106686908Z" level=info msg="RemovePodSandbox \"2c528f7e061eaf65bc166d19ed1b4ed984213be03a4f5ed71bdaa19c6ffb3875\" returns successfully" Sep 4 17:11:56.107833 containerd[2024]: time="2024-09-04T17:11:56.107763054Z" level=info msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\"" Sep 4 17:11:56.300974 containerd[2024]: time="2024-09-04T17:11:56.299813708Z" level=info msg="StartContainer for \"97d3ffd14a82b63c0350abfcc7fcc838accba9160a56e70ffe05d9d3253b1493\" returns successfully" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.223 [WARNING][5410] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6a2166df-664f-4f02-bf19-b368d9927d59", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b", Pod:"coredns-7db6d8ff4d-cr2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.2/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc3105a5331", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.225 [INFO][5410] k8s.go 608: Cleaning up netns ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.225 [INFO][5410] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" iface="eth0" netns="" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.225 [INFO][5410] k8s.go 615: Releasing IP address(es) ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.225 [INFO][5410] utils.go 188: Calico CNI releasing IP address ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.321 [INFO][5416] ipam_plugin.go 417: Releasing address using handleID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.322 [INFO][5416] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.322 [INFO][5416] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.338 [WARNING][5416] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.338 [INFO][5416] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.341 [INFO][5416] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:56.348233 containerd[2024]: 2024-09-04 17:11:56.344 [INFO][5410] k8s.go 621: Teardown processing complete. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.349626 containerd[2024]: time="2024-09-04T17:11:56.349361546Z" level=info msg="TearDown network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" successfully" Sep 4 17:11:56.349626 containerd[2024]: time="2024-09-04T17:11:56.349431457Z" level=info msg="StopPodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" returns successfully" Sep 4 17:11:56.350683 containerd[2024]: time="2024-09-04T17:11:56.350535697Z" level=info msg="RemovePodSandbox for \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\"" Sep 4 17:11:56.350838 containerd[2024]: time="2024-09-04T17:11:56.350707191Z" level=info msg="Forcibly stopping sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\"" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.442 [WARNING][5445] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6a2166df-664f-4f02-bf19-b368d9927d59", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"64dd7b4be9ddf74d9e3d9081606c37bbddeb706d703a5a6801b9a4e30952700b", Pod:"coredns-7db6d8ff4d-cr2qq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califc3105a5331", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.442 [INFO][5445] k8s.go 608: Cleaning up 
netns ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.442 [INFO][5445] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" iface="eth0" netns="" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.442 [INFO][5445] k8s.go 615: Releasing IP address(es) ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.442 [INFO][5445] utils.go 188: Calico CNI releasing IP address ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.519 [INFO][5451] ipam_plugin.go 417: Releasing address using handleID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.521 [INFO][5451] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.521 [INFO][5451] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.537 [WARNING][5451] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.537 [INFO][5451] ipam_plugin.go 445: Releasing address using workloadID ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" HandleID="k8s-pod-network.f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Workload="ip--172--31--21--183-k8s-coredns--7db6d8ff4d--cr2qq-eth0" Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.541 [INFO][5451] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:56.552637 containerd[2024]: 2024-09-04 17:11:56.547 [INFO][5445] k8s.go 621: Teardown processing complete. ContainerID="f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32" Sep 4 17:11:56.556337 containerd[2024]: time="2024-09-04T17:11:56.553842445Z" level=info msg="TearDown network for sandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" successfully" Sep 4 17:11:56.564863 containerd[2024]: time="2024-09-04T17:11:56.563885632Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:56.564863 containerd[2024]: time="2024-09-04T17:11:56.563987131Z" level=info msg="RemovePodSandbox \"f6e3f2532c1e5fab9361574713e5e33e0b3363683a6ae8a73fa8cca9c614cb32\" returns successfully" Sep 4 17:11:56.566753 containerd[2024]: time="2024-09-04T17:11:56.566208133Z" level=info msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\"" Sep 4 17:11:56.774704 kubelet[3368]: I0904 17:11:56.773146 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-596bd58bc-l9kdw" podStartSLOduration=33.454546323 podStartE2EDuration="38.773127414s" podCreationTimestamp="2024-09-04 17:11:18 +0000 UTC" firstStartedPulling="2024-09-04 17:11:50.437284744 +0000 UTC m=+55.445110603" lastFinishedPulling="2024-09-04 17:11:55.755865847 +0000 UTC m=+60.763691694" observedRunningTime="2024-09-04 17:11:56.771454438 +0000 UTC m=+61.779280297" watchObservedRunningTime="2024-09-04 17:11:56.773127414 +0000 UTC m=+61.780953285" Sep 4 17:11:56.776870 kubelet[3368]: I0904 17:11:56.775939 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8pk7h" podStartSLOduration=34.833062191 podStartE2EDuration="38.775916552s" podCreationTimestamp="2024-09-04 17:11:18 +0000 UTC" firstStartedPulling="2024-09-04 17:11:47.648155149 +0000 UTC m=+52.655981008" lastFinishedPulling="2024-09-04 17:11:51.591009462 +0000 UTC m=+56.598835369" observedRunningTime="2024-09-04 17:11:52.764580327 +0000 UTC m=+57.772406174" watchObservedRunningTime="2024-09-04 17:11:56.775916552 +0000 UTC m=+61.783742411" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.675 [WARNING][5471] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b761753e-49f2-48ff-98ce-f1b20c5a7621", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4", Pod:"csi-node-driver-8pk7h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali45ccfb3af13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.676 [INFO][5471] k8s.go 608: Cleaning up netns ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.676 [INFO][5471] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" iface="eth0" netns="" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.676 [INFO][5471] k8s.go 615: Releasing IP address(es) ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.676 [INFO][5471] utils.go 188: Calico CNI releasing IP address ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.766 [INFO][5477] ipam_plugin.go 417: Releasing address using handleID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.766 [INFO][5477] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.766 [INFO][5477] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.795 [WARNING][5477] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.795 [INFO][5477] ipam_plugin.go 445: Releasing address using workloadID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.806 [INFO][5477] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:56.826503 containerd[2024]: 2024-09-04 17:11:56.822 [INFO][5471] k8s.go 621: Teardown processing complete. ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:56.828268 containerd[2024]: time="2024-09-04T17:11:56.826556444Z" level=info msg="TearDown network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" successfully" Sep 4 17:11:56.828268 containerd[2024]: time="2024-09-04T17:11:56.826613989Z" level=info msg="StopPodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" returns successfully" Sep 4 17:11:56.828268 containerd[2024]: time="2024-09-04T17:11:56.827365312Z" level=info msg="RemovePodSandbox for \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\"" Sep 4 17:11:56.828268 containerd[2024]: time="2024-09-04T17:11:56.827424778Z" level=info msg="Forcibly stopping sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\"" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:56.949 [WARNING][5514] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b761753e-49f2-48ff-98ce-f1b20c5a7621", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65cb9bb8f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"763171397f18ec48ac6d66a2686633b92b3c41ff584a5b60426f8ed7698f6dd4", Pod:"csi-node-driver-8pk7h", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali45ccfb3af13", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:56.951 [INFO][5514] k8s.go 608: Cleaning up netns ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:56.951 [INFO][5514] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" iface="eth0" netns="" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:56.951 [INFO][5514] k8s.go 615: Releasing IP address(es) ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:56.951 [INFO][5514] utils.go 188: Calico CNI releasing IP address ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.016 [INFO][5523] ipam_plugin.go 417: Releasing address using handleID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.017 [INFO][5523] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.017 [INFO][5523] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.034 [WARNING][5523] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.034 [INFO][5523] ipam_plugin.go 445: Releasing address using workloadID ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" HandleID="k8s-pod-network.32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Workload="ip--172--31--21--183-k8s-csi--node--driver--8pk7h-eth0" Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.038 [INFO][5523] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:57.048303 containerd[2024]: 2024-09-04 17:11:57.041 [INFO][5514] k8s.go 621: Teardown processing complete. ContainerID="32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8" Sep 4 17:11:57.048303 containerd[2024]: time="2024-09-04T17:11:57.047458453Z" level=info msg="TearDown network for sandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" successfully" Sep 4 17:11:57.057119 containerd[2024]: time="2024-09-04T17:11:57.056679578Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:11:57.057119 containerd[2024]: time="2024-09-04T17:11:57.056810863Z" level=info msg="RemovePodSandbox \"32bc28f80da8fdf67b9ec36d5579c0dcf0aa1cde2ec71b227637c936e01d13c8\" returns successfully" Sep 4 17:11:57.058406 containerd[2024]: time="2024-09-04T17:11:57.057764536Z" level=info msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\"" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.158 [WARNING][5542] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0", GenerateName:"calico-kube-controllers-596bd58bc-", Namespace:"calico-system", SelfLink:"", UID:"c21af7c7-5c73-4d45-82b9-064bb8ecb13d", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bd58bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243", Pod:"calico-kube-controllers-596bd58bc-l9kdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5eeb5cbc941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.159 [INFO][5542] k8s.go 608: Cleaning up netns ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.159 [INFO][5542] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" iface="eth0" netns="" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.159 [INFO][5542] k8s.go 615: Releasing IP address(es) ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.160 [INFO][5542] utils.go 188: Calico CNI releasing IP address ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.218 [INFO][5549] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.219 [INFO][5549] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.219 [INFO][5549] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.233 [WARNING][5549] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.233 [INFO][5549] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.236 [INFO][5549] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:57.244849 containerd[2024]: 2024-09-04 17:11:57.240 [INFO][5542] k8s.go 621: Teardown processing complete. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.246016 containerd[2024]: time="2024-09-04T17:11:57.245829042Z" level=info msg="TearDown network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" successfully" Sep 4 17:11:57.246016 containerd[2024]: time="2024-09-04T17:11:57.245882637Z" level=info msg="StopPodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" returns successfully" Sep 4 17:11:57.248196 containerd[2024]: time="2024-09-04T17:11:57.247279535Z" level=info msg="RemovePodSandbox for \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\"" Sep 4 17:11:57.248196 containerd[2024]: time="2024-09-04T17:11:57.247342927Z" level=info msg="Forcibly stopping sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\"" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.346 [WARNING][5567] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0", GenerateName:"calico-kube-controllers-596bd58bc-", Namespace:"calico-system", SelfLink:"", UID:"c21af7c7-5c73-4d45-82b9-064bb8ecb13d", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 11, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596bd58bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"b715361e744c502966fa1919facb10810d9f864a9ddcf1b6c8c9de0863fe8243", Pod:"calico-kube-controllers-596bd58bc-l9kdw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5eeb5cbc941", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.347 [INFO][5567] k8s.go 608: Cleaning up netns ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.347 [INFO][5567] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" iface="eth0" netns="" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.348 [INFO][5567] k8s.go 615: Releasing IP address(es) ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.348 [INFO][5567] utils.go 188: Calico CNI releasing IP address ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.412 [INFO][5585] ipam_plugin.go 417: Releasing address using handleID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.413 [INFO][5585] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.414 [INFO][5585] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.431 [WARNING][5585] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.431 [INFO][5585] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" HandleID="k8s-pod-network.ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Workload="ip--172--31--21--183-k8s-calico--kube--controllers--596bd58bc--l9kdw-eth0" Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.434 [INFO][5585] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:11:57.441189 containerd[2024]: 2024-09-04 17:11:57.437 [INFO][5567] k8s.go 621: Teardown processing complete. ContainerID="ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db" Sep 4 17:11:57.445543 containerd[2024]: time="2024-09-04T17:11:57.443648616Z" level=info msg="TearDown network for sandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" successfully" Sep 4 17:11:57.455072 containerd[2024]: time="2024-09-04T17:11:57.454916787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:11:57.456282 containerd[2024]: time="2024-09-04T17:11:57.455832305Z" level=info msg="RemovePodSandbox \"ee9f34fc8c7dced6f522aec3d890f7a84277899979104c0b6bf31c28c89391db\" returns successfully" Sep 4 17:11:58.159142 systemd[1]: Started sshd@13-172.31.21.183:22-139.178.89.65:36572.service - OpenSSH per-connection server daemon (139.178.89.65:36572). 
Sep 4 17:11:58.334798 sshd[5603]: Accepted publickey for core from 139.178.89.65 port 36572 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:11:58.338042 sshd[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:11:58.348801 systemd-logind[2001]: New session 14 of user core. Sep 4 17:11:58.356892 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:11:58.613080 sshd[5603]: pam_unix(sshd:session): session closed for user core Sep 4 17:11:58.618194 systemd[1]: sshd@13-172.31.21.183:22-139.178.89.65:36572.service: Deactivated successfully. Sep 4 17:11:58.621486 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:11:58.626294 systemd-logind[2001]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:11:58.628492 systemd-logind[2001]: Removed session 14. Sep 4 17:12:03.658173 systemd[1]: Started sshd@14-172.31.21.183:22-139.178.89.65:36578.service - OpenSSH per-connection server daemon (139.178.89.65:36578). Sep 4 17:12:03.834157 sshd[5637]: Accepted publickey for core from 139.178.89.65 port 36578 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:03.837215 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:03.845248 systemd-logind[2001]: New session 15 of user core. Sep 4 17:12:03.856882 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:12:04.117720 sshd[5637]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:04.124199 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:12:04.126108 systemd[1]: sshd@14-172.31.21.183:22-139.178.89.65:36578.service: Deactivated successfully. Sep 4 17:12:04.132527 systemd-logind[2001]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:12:04.134843 systemd-logind[2001]: Removed session 15. 
Sep 4 17:12:09.162189 systemd[1]: Started sshd@15-172.31.21.183:22-139.178.89.65:45732.service - OpenSSH per-connection server daemon (139.178.89.65:45732). Sep 4 17:12:09.348394 sshd[5661]: Accepted publickey for core from 139.178.89.65 port 45732 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:09.350926 sshd[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:09.359811 systemd-logind[2001]: New session 16 of user core. Sep 4 17:12:09.366875 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:12:09.629259 sshd[5661]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:09.636331 systemd[1]: sshd@15-172.31.21.183:22-139.178.89.65:45732.service: Deactivated successfully. Sep 4 17:12:09.640156 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:12:09.641429 systemd-logind[2001]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:12:09.643459 systemd-logind[2001]: Removed session 16. Sep 4 17:12:09.665144 systemd[1]: Started sshd@16-172.31.21.183:22-139.178.89.65:45746.service - OpenSSH per-connection server daemon (139.178.89.65:45746). Sep 4 17:12:09.845611 sshd[5674]: Accepted publickey for core from 139.178.89.65 port 45746 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:09.848321 sshd[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:09.856656 systemd-logind[2001]: New session 17 of user core. Sep 4 17:12:09.864928 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:12:10.301644 sshd[5674]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:10.310187 systemd[1]: sshd@16-172.31.21.183:22-139.178.89.65:45746.service: Deactivated successfully. Sep 4 17:12:10.314493 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:12:10.317195 systemd-logind[2001]: Session 17 logged out. Waiting for processes to exit. 
Sep 4 17:12:10.319319 systemd-logind[2001]: Removed session 17. Sep 4 17:12:10.348101 systemd[1]: Started sshd@17-172.31.21.183:22-139.178.89.65:45752.service - OpenSSH per-connection server daemon (139.178.89.65:45752). Sep 4 17:12:10.527390 sshd[5689]: Accepted publickey for core from 139.178.89.65 port 45752 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:10.530780 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:10.540031 systemd-logind[2001]: New session 18 of user core. Sep 4 17:12:10.546967 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:12:13.508678 sshd[5689]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:13.519947 systemd[1]: sshd@17-172.31.21.183:22-139.178.89.65:45752.service: Deactivated successfully. Sep 4 17:12:13.526147 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:12:13.527879 systemd[1]: session-18.scope: Consumed 1.041s CPU time. Sep 4 17:12:13.530134 systemd-logind[2001]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:12:13.555153 systemd[1]: Started sshd@18-172.31.21.183:22-139.178.89.65:45758.service - OpenSSH per-connection server daemon (139.178.89.65:45758). Sep 4 17:12:13.556472 systemd-logind[2001]: Removed session 18. Sep 4 17:12:13.745016 sshd[5704]: Accepted publickey for core from 139.178.89.65 port 45758 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:13.747953 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:13.757010 systemd-logind[2001]: New session 19 of user core. Sep 4 17:12:13.769912 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:12:14.272569 sshd[5704]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:14.281866 systemd[1]: sshd@18-172.31.21.183:22-139.178.89.65:45758.service: Deactivated successfully. 
Sep 4 17:12:14.287684 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:12:14.289270 systemd-logind[2001]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:12:14.291101 systemd-logind[2001]: Removed session 19. Sep 4 17:12:14.305050 systemd[1]: Started sshd@19-172.31.21.183:22-139.178.89.65:45764.service - OpenSSH per-connection server daemon (139.178.89.65:45764). Sep 4 17:12:14.492575 sshd[5718]: Accepted publickey for core from 139.178.89.65 port 45764 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:14.495277 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:14.503008 systemd-logind[2001]: New session 20 of user core. Sep 4 17:12:14.514919 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:12:14.763468 sshd[5718]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:14.771086 systemd[1]: sshd@19-172.31.21.183:22-139.178.89.65:45764.service: Deactivated successfully. Sep 4 17:12:14.775643 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:12:14.778437 systemd-logind[2001]: Session 20 logged out. Waiting for processes to exit. Sep 4 17:12:14.781834 systemd-logind[2001]: Removed session 20. Sep 4 17:12:19.810136 systemd[1]: Started sshd@20-172.31.21.183:22-139.178.89.65:38912.service - OpenSSH per-connection server daemon (139.178.89.65:38912). Sep 4 17:12:19.998747 sshd[5736]: Accepted publickey for core from 139.178.89.65 port 38912 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:20.002176 sshd[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:20.014480 systemd-logind[2001]: New session 21 of user core. Sep 4 17:12:20.020927 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:12:20.390542 sshd[5736]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:20.395914 systemd-logind[2001]: Session 21 logged out. 
Waiting for processes to exit. Sep 4 17:12:20.397377 systemd[1]: sshd@20-172.31.21.183:22-139.178.89.65:38912.service: Deactivated successfully. Sep 4 17:12:20.404730 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:12:20.414722 systemd-logind[2001]: Removed session 21. Sep 4 17:12:25.444152 systemd[1]: Started sshd@21-172.31.21.183:22-139.178.89.65:38928.service - OpenSSH per-connection server daemon (139.178.89.65:38928). Sep 4 17:12:25.639224 sshd[5781]: Accepted publickey for core from 139.178.89.65 port 38928 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:25.642539 sshd[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:25.651235 systemd-logind[2001]: New session 22 of user core. Sep 4 17:12:25.657899 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:12:25.908962 sshd[5781]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:25.915122 systemd[1]: sshd@21-172.31.21.183:22-139.178.89.65:38928.service: Deactivated successfully. Sep 4 17:12:25.921168 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:12:25.925229 systemd-logind[2001]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:12:25.928984 systemd-logind[2001]: Removed session 22. Sep 4 17:12:30.952536 systemd[1]: Started sshd@22-172.31.21.183:22-139.178.89.65:37582.service - OpenSSH per-connection server daemon (139.178.89.65:37582). Sep 4 17:12:31.140331 sshd[5818]: Accepted publickey for core from 139.178.89.65 port 37582 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:31.140203 sshd[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:31.156085 systemd-logind[2001]: New session 23 of user core. Sep 4 17:12:31.161944 systemd[1]: Started session-23.scope - Session 23 of User core. 
Sep 4 17:12:31.470469 sshd[5818]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:31.482129 systemd[1]: sshd@22-172.31.21.183:22-139.178.89.65:37582.service: Deactivated successfully. Sep 4 17:12:31.493100 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:12:31.497088 systemd-logind[2001]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:12:31.501003 systemd-logind[2001]: Removed session 23. Sep 4 17:12:32.786238 kubelet[3368]: I0904 17:12:32.786147 3368 topology_manager.go:215] "Topology Admit Handler" podUID="5d323fae-7402-4b74-a675-7f6c3322282a" podNamespace="calico-apiserver" podName="calico-apiserver-685b4947f8-m47vm" Sep 4 17:12:32.796530 kubelet[3368]: I0904 17:12:32.796456 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d323fae-7402-4b74-a675-7f6c3322282a-calico-apiserver-certs\") pod \"calico-apiserver-685b4947f8-m47vm\" (UID: \"5d323fae-7402-4b74-a675-7f6c3322282a\") " pod="calico-apiserver/calico-apiserver-685b4947f8-m47vm" Sep 4 17:12:32.796530 kubelet[3368]: I0904 17:12:32.796529 3368 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcqmh\" (UniqueName: \"kubernetes.io/projected/5d323fae-7402-4b74-a675-7f6c3322282a-kube-api-access-vcqmh\") pod \"calico-apiserver-685b4947f8-m47vm\" (UID: \"5d323fae-7402-4b74-a675-7f6c3322282a\") " pod="calico-apiserver/calico-apiserver-685b4947f8-m47vm" Sep 4 17:12:32.819630 systemd[1]: Created slice kubepods-besteffort-pod5d323fae_7402_4b74_a675_7f6c3322282a.slice - libcontainer container kubepods-besteffort-pod5d323fae_7402_4b74_a675_7f6c3322282a.slice. 
Sep 4 17:12:32.897631 kubelet[3368]: E0904 17:12:32.896947 3368 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:12:32.897631 kubelet[3368]: E0904 17:12:32.897088 3368 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d323fae-7402-4b74-a675-7f6c3322282a-calico-apiserver-certs podName:5d323fae-7402-4b74-a675-7f6c3322282a nodeName:}" failed. No retries permitted until 2024-09-04 17:12:33.39704198 +0000 UTC m=+98.404867839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/5d323fae-7402-4b74-a675-7f6c3322282a-calico-apiserver-certs") pod "calico-apiserver-685b4947f8-m47vm" (UID: "5d323fae-7402-4b74-a675-7f6c3322282a") : secret "calico-apiserver-certs" not found Sep 4 17:12:33.430389 containerd[2024]: time="2024-09-04T17:12:33.430309268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b4947f8-m47vm,Uid:5d323fae-7402-4b74-a675-7f6c3322282a,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:12:33.667318 systemd-networkd[1934]: cali97236a4765a: Link UP Sep 4 17:12:33.668839 systemd-networkd[1934]: cali97236a4765a: Gained carrier Sep 4 17:12:33.677803 (udev-worker)[5874]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.535 [INFO][5856] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0 calico-apiserver-685b4947f8- calico-apiserver 5d323fae-7402-4b74-a675-7f6c3322282a 1108 0 2024-09-04 17:12:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:685b4947f8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-183 calico-apiserver-685b4947f8-m47vm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali97236a4765a [] []}} ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.535 [INFO][5856] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.589 [INFO][5867] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" HandleID="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Workload="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.612 [INFO][5867] ipam_plugin.go 270: Auto assigning IP ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" 
HandleID="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Workload="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028e340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-183", "pod":"calico-apiserver-685b4947f8-m47vm", "timestamp":"2024-09-04 17:12:33.589658988 +0000 UTC"}, Hostname:"ip-172-31-21-183", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.612 [INFO][5867] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.613 [INFO][5867] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.613 [INFO][5867] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-183' Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.617 [INFO][5867] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.623 [INFO][5867] ipam.go 372: Looking up existing affinities for host host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.631 [INFO][5867] ipam.go 489: Trying affinity for 192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.634 [INFO][5867] ipam.go 155: Attempting to load block cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.638 [INFO][5867] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ip-172-31-21-183" Sep 4 17:12:33.695782 
containerd[2024]: 2024-09-04 17:12:33.638 [INFO][5867] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.641 [INFO][5867] ipam.go 1685: Creating new handle: k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.647 [INFO][5867] ipam.go 1203: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.657 [INFO][5867] ipam.go 1216: Successfully claimed IPs: [192.168.82.5/26] block=192.168.82.0/26 handle="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.658 [INFO][5867] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.82.5/26] handle="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" host="ip-172-31-21-183" Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.658 [INFO][5867] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:12:33.695782 containerd[2024]: 2024-09-04 17:12:33.658 [INFO][5867] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.82.5/26] IPv6=[] ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" HandleID="k8s-pod-network.36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Workload="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.661 [INFO][5856] k8s.go 386: Populated endpoint ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0", GenerateName:"calico-apiserver-685b4947f8-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d323fae-7402-4b74-a675-7f6c3322282a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b4947f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"", Pod:"calico-apiserver-685b4947f8-m47vm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97236a4765a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.661 [INFO][5856] k8s.go 387: Calico CNI using IPs: [192.168.82.5/32] ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.661 [INFO][5856] dataplane_linux.go 68: Setting the host side veth name to cali97236a4765a ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.669 [INFO][5856] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.671 [INFO][5856] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0", GenerateName:"calico-apiserver-685b4947f8-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"5d323fae-7402-4b74-a675-7f6c3322282a", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"685b4947f8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-183", ContainerID:"36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce", Pod:"calico-apiserver-685b4947f8-m47vm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali97236a4765a", MAC:"36:62:6a:79:9b:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:33.698070 containerd[2024]: 2024-09-04 17:12:33.689 [INFO][5856] k8s.go 500: Wrote updated endpoint to datastore ContainerID="36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce" Namespace="calico-apiserver" Pod="calico-apiserver-685b4947f8-m47vm" WorkloadEndpoint="ip--172--31--21--183-k8s-calico--apiserver--685b4947f8--m47vm-eth0" Sep 4 17:12:33.762548 containerd[2024]: time="2024-09-04T17:12:33.762359957Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:33.763522 containerd[2024]: time="2024-09-04T17:12:33.763121978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:33.763711 containerd[2024]: time="2024-09-04T17:12:33.763617369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:33.763812 containerd[2024]: time="2024-09-04T17:12:33.763724415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:33.809966 systemd[1]: Started cri-containerd-36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce.scope - libcontainer container 36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce. Sep 4 17:12:33.912049 containerd[2024]: time="2024-09-04T17:12:33.911967252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-685b4947f8-m47vm,Uid:5d323fae-7402-4b74-a675-7f6c3322282a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce\"" Sep 4 17:12:33.915730 containerd[2024]: time="2024-09-04T17:12:33.915003390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:12:35.067099 systemd-networkd[1934]: cali97236a4765a: Gained IPv6LL Sep 4 17:12:36.516160 systemd[1]: Started sshd@23-172.31.21.183:22-139.178.89.65:37594.service - OpenSSH per-connection server daemon (139.178.89.65:37594). Sep 4 17:12:36.739687 sshd[5935]: Accepted publickey for core from 139.178.89.65 port 37594 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:36.742178 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:36.757904 systemd-logind[2001]: New session 24 of user core. 
Sep 4 17:12:36.762502 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:12:37.155800 sshd[5935]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:37.168159 systemd[1]: sshd@23-172.31.21.183:22-139.178.89.65:37594.service: Deactivated successfully. Sep 4 17:12:37.175360 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:12:37.178934 systemd-logind[2001]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:12:37.182775 systemd-logind[2001]: Removed session 24. Sep 4 17:12:37.386707 ntpd[1995]: Listen normally on 13 cali97236a4765a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:12:37.387707 ntpd[1995]: 4 Sep 17:12:37 ntpd[1995]: Listen normally on 13 cali97236a4765a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:12:37.833063 containerd[2024]: time="2024-09-04T17:12:37.833004130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:37.835917 containerd[2024]: time="2024-09-04T17:12:37.835834725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:12:37.839631 containerd[2024]: time="2024-09-04T17:12:37.838325274Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:37.848816 containerd[2024]: time="2024-09-04T17:12:37.848723634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:37.853462 containerd[2024]: time="2024-09-04T17:12:37.853352884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag 
\"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.937612818s" Sep 4 17:12:37.853462 containerd[2024]: time="2024-09-04T17:12:37.853450108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:12:37.857854 containerd[2024]: time="2024-09-04T17:12:37.857677073Z" level=info msg="CreateContainer within sandbox \"36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:12:37.887320 containerd[2024]: time="2024-09-04T17:12:37.887198770Z" level=info msg="CreateContainer within sandbox \"36b373e9c4b5aebd40def4c6c56b70e608fba15e4cc15d7cc92ca60ec0c4e7ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a9702b32a738913ece9f0e54506382382a839e4fa23f126d3a6b05ba87aad313\"" Sep 4 17:12:37.889410 containerd[2024]: time="2024-09-04T17:12:37.889220173Z" level=info msg="StartContainer for \"a9702b32a738913ece9f0e54506382382a839e4fa23f126d3a6b05ba87aad313\"" Sep 4 17:12:37.951982 systemd[1]: Started cri-containerd-a9702b32a738913ece9f0e54506382382a839e4fa23f126d3a6b05ba87aad313.scope - libcontainer container a9702b32a738913ece9f0e54506382382a839e4fa23f126d3a6b05ba87aad313. 
Sep 4 17:12:38.027042 containerd[2024]: time="2024-09-04T17:12:38.026791525Z" level=info msg="StartContainer for \"a9702b32a738913ece9f0e54506382382a839e4fa23f126d3a6b05ba87aad313\" returns successfully" Sep 4 17:12:39.635327 kubelet[3368]: I0904 17:12:39.635212 3368 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-685b4947f8-m47vm" podStartSLOduration=3.695053722 podStartE2EDuration="7.635158073s" podCreationTimestamp="2024-09-04 17:12:32 +0000 UTC" firstStartedPulling="2024-09-04 17:12:33.914569517 +0000 UTC m=+98.922395376" lastFinishedPulling="2024-09-04 17:12:37.85467388 +0000 UTC m=+102.862499727" observedRunningTime="2024-09-04 17:12:38.935065865 +0000 UTC m=+103.942891736" watchObservedRunningTime="2024-09-04 17:12:39.635158073 +0000 UTC m=+104.642983944" Sep 4 17:12:42.201226 systemd[1]: Started sshd@24-172.31.21.183:22-139.178.89.65:43108.service - OpenSSH per-connection server daemon (139.178.89.65:43108). Sep 4 17:12:42.396786 sshd[6008]: Accepted publickey for core from 139.178.89.65 port 43108 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:42.399096 sshd[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:42.411675 systemd-logind[2001]: New session 25 of user core. Sep 4 17:12:42.423228 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:12:42.669324 sshd[6008]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:42.676534 systemd[1]: sshd@24-172.31.21.183:22-139.178.89.65:43108.service: Deactivated successfully. Sep 4 17:12:42.681313 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:12:42.685007 systemd-logind[2001]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:12:42.689199 systemd-logind[2001]: Removed session 25. 
Sep 4 17:12:47.713131 systemd[1]: Started sshd@25-172.31.21.183:22-139.178.89.65:52908.service - OpenSSH per-connection server daemon (139.178.89.65:52908). Sep 4 17:12:47.889215 sshd[6029]: Accepted publickey for core from 139.178.89.65 port 52908 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:12:47.892976 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:12:47.900900 systemd-logind[2001]: New session 26 of user core. Sep 4 17:12:47.907861 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:12:48.143492 sshd[6029]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:48.148588 systemd-logind[2001]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:12:48.149234 systemd[1]: sshd@25-172.31.21.183:22-139.178.89.65:52908.service: Deactivated successfully. Sep 4 17:12:48.156238 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:12:48.161210 systemd-logind[2001]: Removed session 26. Sep 4 17:12:57.339388 systemd[1]: run-containerd-runc-k8s.io-47edc88301f81741228fc27f52827cb18de9c66af4dace7c9d1d08612e4b6997-runc.JxJ3TH.mount: Deactivated successfully. Sep 4 17:13:02.736505 systemd[1]: cri-containerd-bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9.scope: Deactivated successfully. Sep 4 17:13:02.738834 systemd[1]: cri-containerd-bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9.scope: Consumed 10.611s CPU time. 
Sep 4 17:13:02.780853 containerd[2024]: time="2024-09-04T17:13:02.780461663Z" level=info msg="shim disconnected" id=bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9 namespace=k8s.io Sep 4 17:13:02.780853 containerd[2024]: time="2024-09-04T17:13:02.780556006Z" level=warning msg="cleaning up after shim disconnected" id=bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9 namespace=k8s.io Sep 4 17:13:02.780853 containerd[2024]: time="2024-09-04T17:13:02.780575960Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:02.784499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9-rootfs.mount: Deactivated successfully. Sep 4 17:13:02.810301 containerd[2024]: time="2024-09-04T17:13:02.810210430Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:13:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:13:02.964233 kubelet[3368]: I0904 17:13:02.963730 3368 scope.go:117] "RemoveContainer" containerID="bc975028ff1a382de27ee2046e43db9fa313341a05b6f260e15e69c21a5723f9" Sep 4 17:13:02.968694 containerd[2024]: time="2024-09-04T17:13:02.968492841Z" level=info msg="CreateContainer within sandbox \"30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 4 17:13:02.997106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount651103720.mount: Deactivated successfully. 
Sep 4 17:13:02.998649 containerd[2024]: time="2024-09-04T17:13:02.997851389Z" level=info msg="CreateContainer within sandbox \"30a95acb06f6dba47e4517d29a97c5dc00f4e16d4d6872608dfa9b24d5003530\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ad646bf987ee461f50db57508bc7ae6babdad34060908d3b7fa3c4c9ee494329\"" Sep 4 17:13:03.000281 containerd[2024]: time="2024-09-04T17:13:02.999970604Z" level=info msg="StartContainer for \"ad646bf987ee461f50db57508bc7ae6babdad34060908d3b7fa3c4c9ee494329\"" Sep 4 17:13:03.055919 systemd[1]: Started cri-containerd-ad646bf987ee461f50db57508bc7ae6babdad34060908d3b7fa3c4c9ee494329.scope - libcontainer container ad646bf987ee461f50db57508bc7ae6babdad34060908d3b7fa3c4c9ee494329. Sep 4 17:13:03.104232 containerd[2024]: time="2024-09-04T17:13:03.104156247Z" level=info msg="StartContainer for \"ad646bf987ee461f50db57508bc7ae6babdad34060908d3b7fa3c4c9ee494329\" returns successfully" Sep 4 17:13:03.245421 systemd[1]: cri-containerd-a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f.scope: Deactivated successfully. Sep 4 17:13:03.247996 systemd[1]: cri-containerd-a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f.scope: Consumed 5.955s CPU time, 22.3M memory peak, 0B memory swap peak. 
Sep 4 17:13:03.286239 containerd[2024]: time="2024-09-04T17:13:03.286137626Z" level=info msg="shim disconnected" id=a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f namespace=k8s.io Sep 4 17:13:03.286239 containerd[2024]: time="2024-09-04T17:13:03.286212663Z" level=warning msg="cleaning up after shim disconnected" id=a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f namespace=k8s.io Sep 4 17:13:03.286239 containerd[2024]: time="2024-09-04T17:13:03.286234682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:03.782956 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f-rootfs.mount: Deactivated successfully. Sep 4 17:13:03.973416 kubelet[3368]: I0904 17:13:03.972560 3368 scope.go:117] "RemoveContainer" containerID="a3903ac1ca50681519e0c21a95a1395eef330bbbb037c53f00a4a42f29d0c48f" Sep 4 17:13:03.978148 containerd[2024]: time="2024-09-04T17:13:03.977941948Z" level=info msg="CreateContainer within sandbox \"a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 4 17:13:04.006551 containerd[2024]: time="2024-09-04T17:13:04.006465455Z" level=info msg="CreateContainer within sandbox \"a7ee6056cda65d76f5c450d6aaf5aed8789a3f1589e12622cc922a35a53d53f0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"4f92299e566aa948673e92cc3b080f3a90df7101cfa9eb7979b97a50f215e995\"" Sep 4 17:13:04.007298 containerd[2024]: time="2024-09-04T17:13:04.007236444Z" level=info msg="StartContainer for \"4f92299e566aa948673e92cc3b080f3a90df7101cfa9eb7979b97a50f215e995\"" Sep 4 17:13:04.070904 systemd[1]: Started cri-containerd-4f92299e566aa948673e92cc3b080f3a90df7101cfa9eb7979b97a50f215e995.scope - libcontainer container 4f92299e566aa948673e92cc3b080f3a90df7101cfa9eb7979b97a50f215e995. 
Sep 4 17:13:04.153665 containerd[2024]: time="2024-09-04T17:13:04.153483032Z" level=info msg="StartContainer for \"4f92299e566aa948673e92cc3b080f3a90df7101cfa9eb7979b97a50f215e995\" returns successfully" Sep 4 17:13:07.469588 systemd[1]: cri-containerd-ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472.scope: Deactivated successfully. Sep 4 17:13:07.472096 systemd[1]: cri-containerd-ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472.scope: Consumed 4.640s CPU time, 15.7M memory peak, 0B memory swap peak. Sep 4 17:13:07.536830 containerd[2024]: time="2024-09-04T17:13:07.533929118Z" level=info msg="shim disconnected" id=ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472 namespace=k8s.io Sep 4 17:13:07.536830 containerd[2024]: time="2024-09-04T17:13:07.534009210Z" level=warning msg="cleaning up after shim disconnected" id=ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472 namespace=k8s.io Sep 4 17:13:07.536830 containerd[2024]: time="2024-09-04T17:13:07.534029152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:13:07.540512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472-rootfs.mount: Deactivated successfully. 
Sep 4 17:13:07.730175 kubelet[3368]: E0904 17:13:07.729390 3368 kubelet_node_status.go:544] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-09-04T17:12:57Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-04T17:12:57Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-04T17:12:57Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-09-04T17:12:57Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\\\",\\\"ghcr.io/flatcar/calico/node:v3.28.1\\\"],\\\"sizeBytes\\\":113057162},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\\\",\\\"ghcr.io/flatcar/calico/cni:v3.28.1\\\"],\\\"sizeBytes\\\":88227406},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\\\",\\\"registry.k8s.io/etcd:3.5.12-0\\\"],\\\"sizeBytes\\\":66189079},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.28.1\\\"],\\\"sizeBytes\\\":39217419},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\\\",\\\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\\\"],\\\"sizeBytes\\\":32729240},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:7b0c4a959aaee5660e1234452dc3123310231b9f92d29ebd175c86dc9f797ee7\\\",\\\"registry.k8s.io/kube-apiserver:v1.30.4\\\"],\\\"sizeBytes\\\":29940540}
,{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\\\",\\\"ghcr.io/flatcar/calico/typha:v3.28.1\\\"],\\\"sizeBytes\\\":28841990},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:992cccbf652fa951c1a3d41b0c1033ae0bf64f33da03d50395282c551900af9e\\\",\\\"registry.k8s.io/kube-controller-manager:v1.30.4\\\"],\\\"sizeBytes\\\":28368399},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:33ee1df1ba70e41bf9506d54bb5e64ef5f3ba9fc1b3021aaa4468606a7802acc\\\",\\\"registry.k8s.io/kube-proxy:v1.30.4\\\"],\\\"sizeBytes\\\":25645066},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\\\",\\\"quay.io/tigera/operator:v1.34.3\\\"],\\\"sizeBytes\\\":19480102},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:37eaeee5bca8da34ad3d36e37586dd29f5edb1e2927e7644dfb113e70062bda8\\\",\\\"registry.k8s.io/kube-scheduler:v1.30.4\\\"],\\\"sizeBytes\\\":17641348},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\\\",\\\"registry.k8s.io/coredns/coredns:v1.11.1\\\"],\\\"sizeBytes\\\":16482581},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\\\"],\\\"sizeBytes\\\":13484341},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\\\",\\\"ghcr.io/flatcar/calico/csi:v3.28.1\\\"],\\\"sizeBytes\\\":8578579},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\\\"],\\\"sizeBytes\\\":6284436},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/
pause:3.8\\\"],\\\"sizeBytes\\\":268403},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\\\",\\\"registry.k8s.io/pause:3.9\\\"],\\\"sizeBytes\\\":268051}]}}\" for node \"ip-172-31-21-183\": Patch \"https://172.31.21.183:6443/api/v1/nodes/ip-172-31-21-183/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 4 17:13:07.990657 kubelet[3368]: I0904 17:13:07.990484 3368 scope.go:117] "RemoveContainer" containerID="ccdb197452cd605ca48e1d1829dff5f2012a22d8154d93e4b8f48a73edcac472" Sep 4 17:13:07.996030 containerd[2024]: time="2024-09-04T17:13:07.995970930Z" level=info msg="CreateContainer within sandbox \"6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 4 17:13:08.026231 containerd[2024]: time="2024-09-04T17:13:08.021321323Z" level=info msg="CreateContainer within sandbox \"6c43c2854a179f17f31808f825b3f1e9f08b0c5a067eda9ee5b26409bdb6ae71\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"167b527da83c1075d64f27130a5080c87f465c48db107eeeb60de47ba533fe88\"" Sep 4 17:13:08.026231 containerd[2024]: time="2024-09-04T17:13:08.023563756Z" level=info msg="StartContainer for \"167b527da83c1075d64f27130a5080c87f465c48db107eeeb60de47ba533fe88\"" Sep 4 17:13:08.031546 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092221029.mount: Deactivated successfully. 
Sep 4 17:13:08.076677 kubelet[3368]: E0904 17:13:08.075011 3368 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 4 17:13:08.095929 systemd[1]: Started cri-containerd-167b527da83c1075d64f27130a5080c87f465c48db107eeeb60de47ba533fe88.scope - libcontainer container 167b527da83c1075d64f27130a5080c87f465c48db107eeeb60de47ba533fe88. Sep 4 17:13:08.259193 containerd[2024]: time="2024-09-04T17:13:08.259036007Z" level=info msg="StartContainer for \"167b527da83c1075d64f27130a5080c87f465c48db107eeeb60de47ba533fe88\" returns successfully" Sep 4 17:13:17.730477 kubelet[3368]: E0904 17:13:17.730245 3368 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"ip-172-31-21-183\": Get \"https://172.31.21.183:6443/api/v1/nodes/ip-172-31-21-183?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 4 17:13:18.075931 kubelet[3368]: E0904 17:13:18.075555 3368 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-183?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"