Sep 4 17:16:37.188781 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:16:37.188828 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024
Sep 4 17:16:37.188854 kernel: KASLR disabled due to lack of seed
Sep 4 17:16:37.188872 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:16:37.188909 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:16:37.188927 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:16:37.188946 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:16:37.188963 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:16:37.188980 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:16:37.188997 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:16:37.189022 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:16:37.189039 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:16:37.189056 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:16:37.189073 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:16:37.189092 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:16:37.189115 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:16:37.189133 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:16:37.189152 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:16:37.189169 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:16:37.189187 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:16:37.189205 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:16:37.189223 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:16:37.189268 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:16:37.189288 kernel: Zone ranges:
Sep 4 17:16:37.189305 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:16:37.189323 kernel: DMA32 empty
Sep 4 17:16:37.189347 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:16:37.189365 kernel: Movable zone start for each node
Sep 4 17:16:37.189383 kernel: Early memory node ranges
Sep 4 17:16:37.189399 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:16:37.189417 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:16:37.189434 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:16:37.189452 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:16:37.189470 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:16:37.189488 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:16:37.189505 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:16:37.189522 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:16:37.189540 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:16:37.189562 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:16:37.189587 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:16:37.189612 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:16:37.189631 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:16:37.189651 kernel: psci: Trusted OS migration not required
Sep 4 17:16:37.189673 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:16:37.189691 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:16:37.189709 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:16:37.189727 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:16:37.189746 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:16:37.189763 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:16:37.189781 kernel: CPU features: detected: Spectre-v2
Sep 4 17:16:37.189800 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:16:37.189818 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:16:37.189836 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:16:37.189855 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:16:37.189878 kernel: alternatives: applying boot alternatives
Sep 4 17:16:37.189898 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:16:37.189919 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:16:37.189938 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:16:37.189957 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:16:37.189975 kernel: Fallback order for Node 0: 0
Sep 4 17:16:37.189995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:16:37.190013 kernel: Policy zone: Normal
Sep 4 17:16:37.190032 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:16:37.190051 kernel: software IO TLB: area num 2.
Sep 4 17:16:37.190069 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:16:37.190093 kernel: Memory: 3820280K/4030464K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 210184K reserved, 0K cma-reserved)
Sep 4 17:16:37.190111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:16:37.190129 kernel: trace event string verifier disabled
Sep 4 17:16:37.190146 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:16:37.190165 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:16:37.190184 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:16:37.190202 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:16:37.190220 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:16:37.192290 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:16:37.192331 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:16:37.192351 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:16:37.192379 kernel: GICv3: 96 SPIs implemented
Sep 4 17:16:37.192398 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:16:37.192416 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:16:37.192434 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:16:37.192452 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:16:37.192469 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:16:37.192487 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:16:37.192506 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:16:37.192524 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:16:37.192542 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:16:37.192560 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:16:37.192578 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:16:37.192601 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:16:37.192619 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:16:37.192637 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:16:37.192656 kernel: Console: colour dummy device 80x25
Sep 4 17:16:37.192674 kernel: printk: console [tty1] enabled
Sep 4 17:16:37.192693 kernel: ACPI: Core revision 20230628
Sep 4 17:16:37.192712 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:16:37.192730 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:16:37.192748 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 4 17:16:37.192766 kernel: landlock: Up and running.
Sep 4 17:16:37.192789 kernel: SELinux: Initializing.
Sep 4 17:16:37.192808 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:16:37.192826 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:16:37.192845 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:16:37.192863 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:16:37.192899 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:16:37.192923 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:16:37.192942 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:16:37.192960 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:16:37.192985 kernel: Remapping and enabling EFI services.
Sep 4 17:16:37.193004 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:16:37.193022 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:16:37.193040 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:16:37.193059 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:16:37.193078 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:16:37.193096 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:16:37.193114 kernel: SMP: Total of 2 processors activated.
Sep 4 17:16:37.193133 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:16:37.193156 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:16:37.193175 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:16:37.193205 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:16:37.193228 kernel: alternatives: applying system-wide alternatives
Sep 4 17:16:37.193283 kernel: devtmpfs: initialized
Sep 4 17:16:37.193303 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:16:37.193323 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:16:37.193342 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:16:37.193362 kernel: SMBIOS 3.0.0 present.
Sep 4 17:16:37.193388 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:16:37.193407 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:16:37.193426 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:16:37.193445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:16:37.193465 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:16:37.193484 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:16:37.193503 kernel: audit: type=2000 audit(0.289:1): state=initialized audit_enabled=0 res=1
Sep 4 17:16:37.193528 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:16:37.193547 kernel: cpuidle: using governor menu
Sep 4 17:16:37.193566 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:16:37.193585 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:16:37.193604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:16:37.193623 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:16:37.193642 kernel: Modules: 17536 pages in range for non-PLT usage
Sep 4 17:16:37.193661 kernel: Modules: 509056 pages in range for PLT usage
Sep 4 17:16:37.193681 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:16:37.193704 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:16:37.193724 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:16:37.193743 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:16:37.193762 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:16:37.193781 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:16:37.193801 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:16:37.193820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:16:37.193839 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:16:37.193858 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:16:37.193881 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:16:37.193900 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:16:37.193919 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:16:37.193938 kernel: ACPI: Interpreter enabled
Sep 4 17:16:37.193957 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:16:37.193976 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:16:37.193995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:16:37.196358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:16:37.196631 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:16:37.196841 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:16:37.197080 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:16:37.197338 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:16:37.197369 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:16:37.197390 kernel: acpiphp: Slot [1] registered
Sep 4 17:16:37.197409 kernel: acpiphp: Slot [2] registered
Sep 4 17:16:37.197428 kernel: acpiphp: Slot [3] registered
Sep 4 17:16:37.197447 kernel: acpiphp: Slot [4] registered
Sep 4 17:16:37.197475 kernel: acpiphp: Slot [5] registered
Sep 4 17:16:37.197495 kernel: acpiphp: Slot [6] registered
Sep 4 17:16:37.197514 kernel: acpiphp: Slot [7] registered
Sep 4 17:16:37.197532 kernel: acpiphp: Slot [8] registered
Sep 4 17:16:37.197551 kernel: acpiphp: Slot [9] registered
Sep 4 17:16:37.197570 kernel: acpiphp: Slot [10] registered
Sep 4 17:16:37.197589 kernel: acpiphp: Slot [11] registered
Sep 4 17:16:37.197608 kernel: acpiphp: Slot [12] registered
Sep 4 17:16:37.197627 kernel: acpiphp: Slot [13] registered
Sep 4 17:16:37.197651 kernel: acpiphp: Slot [14] registered
Sep 4 17:16:37.197670 kernel: acpiphp: Slot [15] registered
Sep 4 17:16:37.197689 kernel: acpiphp: Slot [16] registered
Sep 4 17:16:37.197708 kernel: acpiphp: Slot [17] registered
Sep 4 17:16:37.197727 kernel: acpiphp: Slot [18] registered
Sep 4 17:16:37.197745 kernel: acpiphp: Slot [19] registered
Sep 4 17:16:37.197764 kernel: acpiphp: Slot [20] registered
Sep 4 17:16:37.197783 kernel: acpiphp: Slot [21] registered
Sep 4 17:16:37.197803 kernel: acpiphp: Slot [22] registered
Sep 4 17:16:37.197822 kernel: acpiphp: Slot [23] registered
Sep 4 17:16:37.197846 kernel: acpiphp: Slot [24] registered
Sep 4 17:16:37.197865 kernel: acpiphp: Slot [25] registered
Sep 4 17:16:37.197884 kernel: acpiphp: Slot [26] registered
Sep 4 17:16:37.197903 kernel: acpiphp: Slot [27] registered
Sep 4 17:16:37.197921 kernel: acpiphp: Slot [28] registered
Sep 4 17:16:37.197940 kernel: acpiphp: Slot [29] registered
Sep 4 17:16:37.197958 kernel: acpiphp: Slot [30] registered
Sep 4 17:16:37.197978 kernel: acpiphp: Slot [31] registered
Sep 4 17:16:37.197996 kernel: PCI host bridge to bus 0000:00
Sep 4 17:16:37.198219 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:16:37.202594 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:16:37.202811 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:16:37.203022 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:16:37.203333 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:16:37.203628 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:16:37.203906 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:16:37.204167 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:16:37.208460 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:16:37.208719 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:16:37.209005 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:16:37.209281 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:16:37.209543 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:16:37.209788 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:16:37.210001 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:16:37.210208 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:16:37.216435 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:16:37.216691 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:16:37.216957 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:16:37.217193 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:16:37.217469 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:16:37.217659 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:16:37.217844 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:16:37.217870 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:16:37.217891 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:16:37.217911 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:16:37.217931 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:16:37.217950 kernel: iommu: Default domain type: Translated
Sep 4 17:16:37.217978 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:16:37.217997 kernel: efivars: Registered efivars operations
Sep 4 17:16:37.218016 kernel: vgaarb: loaded
Sep 4 17:16:37.218035 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:16:37.218055 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:16:37.218074 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:16:37.218093 kernel: pnp: PnP ACPI init
Sep 4 17:16:37.218347 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:16:37.218384 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:16:37.218405 kernel: NET: Registered PF_INET protocol family
Sep 4 17:16:37.218424 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:16:37.218444 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:16:37.218463 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:16:37.218482 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:16:37.218502 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:16:37.218521 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:16:37.218540 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:16:37.218564 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:16:37.218583 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:16:37.218602 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:16:37.218621 kernel: kvm [1]: HYP mode not available
Sep 4 17:16:37.218640 kernel: Initialise system trusted keyrings
Sep 4 17:16:37.218659 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:16:37.218678 kernel: Key type asymmetric registered
Sep 4 17:16:37.218697 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:16:37.218716 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:16:37.218739 kernel: io scheduler mq-deadline registered
Sep 4 17:16:37.218758 kernel: io scheduler kyber registered
Sep 4 17:16:37.218777 kernel: io scheduler bfq registered
Sep 4 17:16:37.219005 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:16:37.219034 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:16:37.219054 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:16:37.219073 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:16:37.219093 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:16:37.219112 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:16:37.219138 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:16:37.220555 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:16:37.220606 kernel: printk: console [ttyS0] disabled
Sep 4 17:16:37.220627 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:16:37.220648 kernel: printk: console [ttyS0] enabled
Sep 4 17:16:37.220668 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:16:37.220687 kernel: thunder_xcv, ver 1.0
Sep 4 17:16:37.220707 kernel: thunder_bgx, ver 1.0
Sep 4 17:16:37.220728 kernel: nicpf, ver 1.0
Sep 4 17:16:37.220761 kernel: nicvf, ver 1.0
Sep 4 17:16:37.221060 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:16:37.221334 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:16:36 UTC (1725470196)
Sep 4 17:16:37.221369 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:16:37.221389 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:16:37.221410 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:16:37.221431 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:16:37.221451 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:16:37.221482 kernel: Segment Routing with IPv6
Sep 4 17:16:37.221502 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:16:37.221528 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:16:37.221548 kernel: Key type dns_resolver registered
Sep 4 17:16:37.221568 kernel: registered taskstats version 1
Sep 4 17:16:37.221588 kernel: Loading compiled-in X.509 certificates
Sep 4 17:16:37.221608 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e'
Sep 4 17:16:37.221627 kernel: Key type .fscrypt registered
Sep 4 17:16:37.221648 kernel: Key type fscrypt-provisioning registered
Sep 4 17:16:37.221674 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:16:37.221696 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:16:37.221717 kernel: ima: No architecture policies found
Sep 4 17:16:37.221737 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:16:37.221755 kernel: clk: Disabling unused clocks
Sep 4 17:16:37.221775 kernel: Freeing unused kernel memory: 39296K
Sep 4 17:16:37.221794 kernel: Run /init as init process
Sep 4 17:16:37.221814 kernel: with arguments:
Sep 4 17:16:37.221834 kernel: /init
Sep 4 17:16:37.221859 kernel: with environment:
Sep 4 17:16:37.221878 kernel: HOME=/
Sep 4 17:16:37.221898 kernel: TERM=linux
Sep 4 17:16:37.221917 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:16:37.221943 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:16:37.221969 systemd[1]: Detected virtualization amazon.
Sep 4 17:16:37.221991 systemd[1]: Detected architecture arm64.
Sep 4 17:16:37.222011 systemd[1]: Running in initrd.
Sep 4 17:16:37.222037 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:16:37.222058 systemd[1]: Hostname set to .
Sep 4 17:16:37.222079 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:16:37.222100 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:16:37.222121 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:16:37.222142 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:16:37.222165 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:16:37.222187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:16:37.222213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:16:37.223408 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:16:37.223464 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:16:37.223487 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:16:37.223510 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:16:37.223531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:16:37.223562 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:16:37.223584 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:16:37.223606 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:16:37.223627 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:16:37.223649 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:16:37.223670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:16:37.223691 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:16:37.223712 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:16:37.223733 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:16:37.223760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:16:37.223781 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:16:37.223802 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:16:37.223823 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:16:37.223845 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:16:37.223866 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:16:37.223887 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:16:37.223909 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:16:37.223931 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:16:37.223958 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:16:37.223979 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:16:37.224001 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:16:37.224022 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:16:37.224046 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:16:37.224131 systemd-journald[251]: Collecting audit messages is disabled.
Sep 4 17:16:37.224181 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:16:37.224205 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:16:37.224270 systemd-journald[251]: Journal started
Sep 4 17:16:37.224317 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2c55e155cb42533d3008ae3ce2d576) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:16:37.207375 systemd-modules-load[252]: Inserted module 'overlay'
Sep 4 17:16:37.229856 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:16:37.232315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:16:37.245295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:16:37.255277 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:16:37.256586 systemd-modules-load[252]: Inserted module 'br_netfilter'
Sep 4 17:16:37.258434 kernel: Bridge firewalling registered
Sep 4 17:16:37.258484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:16:37.271657 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:16:37.272651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:16:37.288392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:16:37.312478 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:16:37.324864 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:16:37.336661 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:16:37.351030 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:16:37.365541 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:16:37.395607 dracut-cmdline[291]: dracut-dracut-053
Sep 4 17:16:37.400855 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9
Sep 4 17:16:37.426987 systemd-resolved[282]: Positive Trust Anchors:
Sep 4 17:16:37.427018 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:16:37.427077 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:16:37.572268 kernel: SCSI subsystem initialized
Sep 4 17:16:37.579275 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:16:37.592280 kernel: iscsi: registered transport (tcp)
Sep 4 17:16:37.614386 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:16:37.614464 kernel: QLogic iSCSI HBA Driver
Sep 4 17:16:37.678287 kernel: random: crng init done
Sep 4 17:16:37.678657 systemd-resolved[282]: Defaulting to hostname 'linux'.
Sep 4 17:16:37.681954 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:16:37.682554 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:16:37.710715 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:16:37.720595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:16:37.762006 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:16:37.762086 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:16:37.762115 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:16:37.829296 kernel: raid6: neonx8 gen() 6711 MB/s Sep 4 17:16:37.846296 kernel: raid6: neonx4 gen() 6468 MB/s Sep 4 17:16:37.863280 kernel: raid6: neonx2 gen() 5412 MB/s Sep 4 17:16:37.880273 kernel: raid6: neonx1 gen() 3947 MB/s Sep 4 17:16:37.897274 kernel: raid6: int64x8 gen() 3804 MB/s Sep 4 17:16:37.914271 kernel: raid6: int64x4 gen() 3725 MB/s Sep 4 17:16:37.931267 kernel: raid6: int64x2 gen() 3603 MB/s Sep 4 17:16:37.949076 kernel: raid6: int64x1 gen() 2775 MB/s Sep 4 17:16:37.949141 kernel: raid6: using algorithm neonx8 gen() 6711 MB/s Sep 4 17:16:37.967023 kernel: raid6: .... xor() 4925 MB/s, rmw enabled Sep 4 17:16:37.967070 kernel: raid6: using neon recovery algorithm Sep 4 17:16:37.975277 kernel: xor: measuring software checksum speed Sep 4 17:16:37.975334 kernel: 8regs : 11030 MB/sec Sep 4 17:16:37.978268 kernel: 32regs : 11923 MB/sec Sep 4 17:16:37.980571 kernel: arm64_neon : 9627 MB/sec Sep 4 17:16:37.980611 kernel: xor: using function: 32regs (11923 MB/sec) Sep 4 17:16:38.063283 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:16:38.083129 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:16:38.094582 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:38.134553 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 4 17:16:38.144184 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:38.154258 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 4 17:16:38.192544 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Sep 4 17:16:38.251484 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 4 17:16:38.268526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:16:38.387449 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:38.401571 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:16:38.455209 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:38.462872 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:38.477159 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:38.488318 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:16:38.499516 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:16:38.542986 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:38.606284 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:16:38.606364 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 4 17:16:38.616826 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:16:38.621075 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 4 17:16:38.623227 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 4 17:16:38.617173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:38.622388 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:38.624467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:16:38.624747 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:38.626976 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 4 17:16:38.647276 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:89:d9:b3:3c:fb Sep 4 17:16:38.648965 (udev-worker)[531]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:38.650657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:38.681586 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 4 17:16:38.681658 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 4 17:16:38.686735 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:38.695334 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 17:16:38.701541 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:16:38.701611 kernel: GPT:9289727 != 16777215 Sep 4 17:16:38.701638 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:16:38.701664 kernel: GPT:9289727 != 16777215 Sep 4 17:16:38.701689 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:16:38.701714 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:38.700612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:38.734271 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:38.830432 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (520) Sep 4 17:16:38.853283 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (524) Sep 4 17:16:38.904307 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 4 17:16:38.923648 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 4 17:16:38.950860 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Sep 4 17:16:38.956288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 4 17:16:38.974616 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:16:38.990592 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:16:39.003105 disk-uuid[662]: Primary Header is updated. Sep 4 17:16:39.003105 disk-uuid[662]: Secondary Entries is updated. Sep 4 17:16:39.003105 disk-uuid[662]: Secondary Header is updated. Sep 4 17:16:39.011288 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:39.018272 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:39.028267 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:40.031357 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:40.033372 disk-uuid[663]: The operation has completed successfully. Sep 4 17:16:40.209586 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:16:40.209784 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:16:40.268562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:16:40.279970 sh[1006]: Success Sep 4 17:16:40.299342 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:16:40.406382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:16:40.418424 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:16:40.433794 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 4 17:16:40.463600 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87 Sep 4 17:16:40.463668 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:40.463707 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:16:40.465228 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:16:40.466416 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:16:40.551288 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 17:16:40.574343 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:16:40.578164 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:16:40.590482 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:16:40.597684 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 4 17:16:40.624730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:40.624803 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:40.626066 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:40.633723 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:40.651385 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:16:40.653584 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:40.662918 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:16:40.674518 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:16:40.778540 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Sep 4 17:16:40.793581 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:16:40.849627 systemd-networkd[1199]: lo: Link UP Sep 4 17:16:40.849649 systemd-networkd[1199]: lo: Gained carrier Sep 4 17:16:40.853178 systemd-networkd[1199]: Enumeration completed Sep 4 17:16:40.853931 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:40.853938 systemd-networkd[1199]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:16:40.855602 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:40.865959 systemd[1]: Reached target network.target - Network. Sep 4 17:16:40.867120 systemd-networkd[1199]: eth0: Link UP Sep 4 17:16:40.867128 systemd-networkd[1199]: eth0: Gained carrier Sep 4 17:16:40.867147 systemd-networkd[1199]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:40.900360 systemd-networkd[1199]: eth0: DHCPv4 address 172.31.22.219/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:40.997682 ignition[1109]: Ignition 2.19.0 Sep 4 17:16:40.997705 ignition[1109]: Stage: fetch-offline Sep 4 17:16:40.998217 ignition[1109]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:40.998280 ignition[1109]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:40.998761 ignition[1109]: Ignition finished successfully Sep 4 17:16:41.007874 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:16:41.034724 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 4 17:16:41.058103 ignition[1209]: Ignition 2.19.0 Sep 4 17:16:41.058643 ignition[1209]: Stage: fetch Sep 4 17:16:41.059288 ignition[1209]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:41.059315 ignition[1209]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:41.059493 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:41.075034 ignition[1209]: PUT result: OK Sep 4 17:16:41.078002 ignition[1209]: parsed url from cmdline: "" Sep 4 17:16:41.078017 ignition[1209]: no config URL provided Sep 4 17:16:41.078034 ignition[1209]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:16:41.078062 ignition[1209]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:16:41.078107 ignition[1209]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:41.079664 ignition[1209]: PUT result: OK Sep 4 17:16:41.079742 ignition[1209]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 4 17:16:41.086599 ignition[1209]: GET result: OK Sep 4 17:16:41.097082 unknown[1209]: fetched base config from "system" Sep 4 17:16:41.086748 ignition[1209]: parsing config with SHA512: bd9876ca3a78fb6af81406f5807b615864440077a950d3de0838add199d5b9c27e97e76b72253caa279615ad58be6ef5ac61149353506cf3c8761e0686cbb113 Sep 4 17:16:41.097099 unknown[1209]: fetched base config from "system" Sep 4 17:16:41.097748 ignition[1209]: fetch: fetch complete Sep 4 17:16:41.097114 unknown[1209]: fetched user config from "aws" Sep 4 17:16:41.097760 ignition[1209]: fetch: fetch passed Sep 4 17:16:41.102847 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:16:41.097842 ignition[1209]: Ignition finished successfully Sep 4 17:16:41.121491 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 4 17:16:41.147588 ignition[1215]: Ignition 2.19.0 Sep 4 17:16:41.147619 ignition[1215]: Stage: kargs Sep 4 17:16:41.148330 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:41.148356 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:41.148506 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:41.152561 ignition[1215]: PUT result: OK Sep 4 17:16:41.160419 ignition[1215]: kargs: kargs passed Sep 4 17:16:41.160520 ignition[1215]: Ignition finished successfully Sep 4 17:16:41.165287 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:16:41.173569 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:16:41.206435 ignition[1222]: Ignition 2.19.0 Sep 4 17:16:41.206463 ignition[1222]: Stage: disks Sep 4 17:16:41.208253 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:41.208286 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:41.208453 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:41.213944 ignition[1222]: PUT result: OK Sep 4 17:16:41.219886 ignition[1222]: disks: disks passed Sep 4 17:16:41.220042 ignition[1222]: Ignition finished successfully Sep 4 17:16:41.225363 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:16:41.228010 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:16:41.230782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:16:41.236774 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:16:41.242791 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:16:41.244716 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:41.258640 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Sep 4 17:16:41.309558 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:16:41.315883 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:16:41.327428 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:16:41.426279 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none. Sep 4 17:16:41.427378 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:16:41.428675 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:16:41.447416 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:16:41.453423 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:16:41.458571 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:16:41.458799 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:16:41.458850 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:41.484356 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:16:41.495631 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:16:41.504297 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250) Sep 4 17:16:41.508932 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:41.509004 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:41.509032 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:41.524780 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:41.526753 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 4 17:16:41.876622 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:16:41.894266 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:16:41.904156 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:16:41.914203 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:16:42.203526 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:16:42.213483 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:16:42.234562 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:16:42.251179 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:16:42.255272 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:42.302870 ignition[1363]: INFO : Ignition 2.19.0 Sep 4 17:16:42.302870 ignition[1363]: INFO : Stage: mount Sep 4 17:16:42.302870 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:42.302870 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:42.302870 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:42.315887 ignition[1363]: INFO : PUT result: OK Sep 4 17:16:42.306063 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 4 17:16:42.320384 ignition[1363]: INFO : mount: mount passed Sep 4 17:16:42.322420 ignition[1363]: INFO : Ignition finished successfully Sep 4 17:16:42.325464 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:16:42.335439 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:16:42.365676 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 4 17:16:42.396260 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1374) Sep 4 17:16:42.400812 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:42.400857 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:42.400899 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:42.407262 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:42.411214 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:16:42.447823 ignition[1391]: INFO : Ignition 2.19.0 Sep 4 17:16:42.447823 ignition[1391]: INFO : Stage: files Sep 4 17:16:42.451033 ignition[1391]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:42.451033 ignition[1391]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:42.451033 ignition[1391]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:42.457310 ignition[1391]: INFO : PUT result: OK Sep 4 17:16:42.461991 ignition[1391]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:16:42.465452 ignition[1391]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:16:42.465452 ignition[1391]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:16:42.478792 ignition[1391]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:16:42.481544 ignition[1391]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:16:42.484708 unknown[1391]: wrote ssh authorized keys file for user: core Sep 4 17:16:42.486865 ignition[1391]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:16:42.491289 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:42.494753 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:16:42.553949 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 4 17:16:42.636336 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:42.636336 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:42.642870 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:42.665328 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:42.665328 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:42.665328
ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:16:42.665328 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:16:42.665328 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:16:42.665328 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Sep 4 17:16:42.801561 systemd-networkd[1199]: eth0: Gained IPv6LL Sep 4 17:16:43.155652 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 4 17:16:43.506695 ignition[1391]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Sep 4 17:16:43.506695 ignition[1391]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:43.514512 
ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:43.514512 ignition[1391]: INFO : files: files passed Sep 4 17:16:43.514512 ignition[1391]: INFO : Ignition finished successfully Sep 4 17:16:43.538384 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:16:43.556612 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:16:43.565093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:16:43.567992 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:16:43.568183 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:16:43.608378 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:43.608378 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:43.614620 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:43.623185 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:43.629405 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:16:43.641584 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:16:43.690989 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:16:43.691432 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:16:43.694702 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Sep 4 17:16:43.694816 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:16:43.695177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:16:43.697132 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:16:43.737013 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:16:43.756679 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:16:43.779998 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:43.784402 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:43.786853 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:16:43.788713 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:16:43.788975 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:16:43.791709 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:16:43.793841 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:16:43.795791 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:16:43.798079 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:43.800379 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:16:43.803009 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:16:43.806673 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:43.809197 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:16:43.832654 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:16:43.834682 systemd[1]: Stopped target swap.target - Swaps. 
Sep 4 17:16:43.836543 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:16:43.836786 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:43.845950 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:43.848303 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:43.854167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:16:43.858344 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:43.863281 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:16:43.863515 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:43.869996 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:16:43.870267 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:43.872765 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:16:43.872986 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:16:43.891349 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:16:43.893163 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:16:43.897512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:43.920706 ignition[1443]: INFO : Ignition 2.19.0 Sep 4 17:16:43.924776 ignition[1443]: INFO : Stage: umount Sep 4 17:16:43.924776 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:43.924776 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:43.924776 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:43.924092 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Sep 4 17:16:43.926579 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:16:43.928045 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:43.933216 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:16:43.933585 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:16:43.948772 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:16:43.950305 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 4 17:16:43.974417 ignition[1443]: INFO : PUT result: OK Sep 4 17:16:43.978972 ignition[1443]: INFO : umount: umount passed Sep 4 17:16:43.980631 ignition[1443]: INFO : Ignition finished successfully Sep 4 17:16:43.984888 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:16:43.988390 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:16:43.994789 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:16:43.994916 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:16:44.000703 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:16:44.000822 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:16:44.003360 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:16:44.003443 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:16:44.008331 systemd[1]: Stopped target network.target - Network. Sep 4 17:16:44.015502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:16:44.015678 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:16:44.019004 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:16:44.028846 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Sep 4 17:16:44.034312 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:16:44.036642 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:16:44.038332 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:16:44.040167 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:16:44.040282 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:16:44.042186 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:16:44.042296 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:16:44.044156 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:16:44.044273 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:16:44.046577 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:16:44.046663 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:16:44.050392 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:16:44.052951 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:16:44.059400 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:16:44.060404 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:16:44.060580 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:16:44.061778 systemd-networkd[1199]: eth0: DHCPv6 lease lost
Sep 4 17:16:44.064505 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:16:44.064677 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:16:44.070106 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:16:44.070347 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:16:44.073456 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:16:44.073601 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:16:44.115428 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:16:44.119776 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:16:44.119902 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:16:44.122688 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:16:44.125358 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:16:44.125561 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:16:44.146649 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:16:44.149335 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:16:44.153574 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:16:44.153685 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:16:44.161539 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:16:44.162403 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:16:44.181771 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:16:44.182601 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:16:44.189452 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:16:44.189609 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:16:44.192701 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:16:44.192778 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:16:44.198418 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:16:44.198809 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:16:44.202514 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:16:44.202621 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:16:44.205026 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:16:44.205114 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:16:44.217599 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:16:44.227064 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:16:44.227191 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:16:44.229526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:16:44.229610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:16:44.232318 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:16:44.233346 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:16:44.251890 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:16:44.253086 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:16:44.271869 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:16:44.281514 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:16:44.308695 systemd[1]: Switching root.
Sep 4 17:16:44.358293 systemd-journald[251]: Journal stopped
Sep 4 17:16:46.816189 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:16:46.825590 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:16:46.825647 kernel: SELinux: policy capability open_perms=1
Sep 4 17:16:46.825680 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:16:46.825725 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:16:46.825757 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:16:46.825789 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:16:46.825819 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:16:46.825850 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:16:46.825881 kernel: audit: type=1403 audit(1725470204.992:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:16:46.825927 systemd[1]: Successfully loaded SELinux policy in 71.650ms.
Sep 4 17:16:46.825970 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.460ms.
Sep 4 17:16:46.826006 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:16:46.826045 systemd[1]: Detected virtualization amazon.
Sep 4 17:16:46.826078 systemd[1]: Detected architecture arm64.
Sep 4 17:16:46.826110 systemd[1]: Detected first boot.
Sep 4 17:16:46.826144 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:16:46.826177 zram_generator::config[1485]: No configuration found.
Sep 4 17:16:46.826209 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:16:46.829319 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:16:46.829374 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:16:46.829420 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:16:46.829456 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:16:46.829490 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:16:46.829520 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:16:46.829550 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:16:46.829583 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:16:46.829617 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:16:46.829651 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:16:46.829683 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:16:46.829717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:16:46.829750 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:16:46.829783 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:16:46.829813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:16:46.829856 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:16:46.829889 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:16:46.829921 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:16:46.829951 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:16:46.829982 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:16:46.830027 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:16:46.830059 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:16:46.830092 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:16:46.830122 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:16:46.830154 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:16:46.830185 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:16:46.830217 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:16:46.834416 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:16:46.834469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:16:46.834504 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:16:46.834536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:16:46.834568 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:16:46.834599 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:16:46.834632 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:16:46.838223 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:16:46.838317 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:16:46.838359 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:16:46.838390 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:16:46.838422 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:16:46.838466 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:16:46.838516 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:16:46.838549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:16:46.838580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:16:46.838613 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:16:46.838646 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:16:46.838683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:16:46.838717 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:16:46.838747 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:16:46.838779 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:16:46.838813 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:16:46.838845 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:16:46.838876 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:16:46.838908 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:16:46.838944 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:16:46.838976 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:16:46.839008 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:16:46.839038 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:16:46.839068 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:16:46.839098 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:16:46.839130 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:16:46.839161 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:16:46.839191 systemd[1]: Stopped verity-setup.service.
Sep 4 17:16:46.839225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:16:46.844677 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:16:46.844722 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:16:46.844751 kernel: fuse: init (API version 7.39)
Sep 4 17:16:46.844784 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:16:46.844814 kernel: loop: module loaded
Sep 4 17:16:46.844915 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:16:46.847454 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:16:46.847496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:16:46.847526 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:16:46.847556 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:16:46.847586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:16:46.847616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:16:46.847649 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:16:46.847688 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:16:46.847718 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:16:46.847750 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:16:46.847780 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:16:46.847810 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:16:46.847839 kernel: ACPI: bus type drm_connector registered
Sep 4 17:16:46.847871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:16:46.847907 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:16:46.847938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:16:46.847968 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:16:46.848046 systemd-journald[1569]: Collecting audit messages is disabled.
Sep 4 17:16:46.848099 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:16:46.848131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:16:46.848166 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:16:46.848197 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:16:46.848229 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:16:46.852365 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:16:46.852407 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:16:46.852440 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:16:46.852477 systemd-journald[1569]: Journal started
Sep 4 17:16:46.852538 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec2c55e155cb42533d3008ae3ce2d576) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:16:46.852619 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:16:46.159499 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:16:46.217672 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:16:46.218473 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:16:46.858491 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:16:46.864130 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:16:46.885041 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:16:46.896304 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:16:46.896396 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:16:46.909304 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:16:46.909400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:16:46.924316 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:16:46.935299 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:16:46.941374 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:16:46.946385 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:16:46.949740 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:16:47.013355 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:16:47.025440 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:16:47.033182 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:16:47.047636 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:16:47.059524 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:16:47.067540 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:16:47.078275 kernel: loop0: detected capacity change from 0 to 52536
Sep 4 17:16:47.080366 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:16:47.095595 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:16:47.113435 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec2c55e155cb42533d3008ae3ce2d576 is 76.438ms for 914 entries.
Sep 4 17:16:47.113435 systemd-journald[1569]: System Journal (/var/log/journal/ec2c55e155cb42533d3008ae3ce2d576) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:16:47.206122 systemd-journald[1569]: Received client request to flush runtime journal.
Sep 4 17:16:47.206213 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:16:47.178267 udevadm[1624]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 4 17:16:47.216702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:16:47.224628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:16:47.227144 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:16:47.250284 kernel: loop1: detected capacity change from 0 to 114288
Sep 4 17:16:47.270340 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:16:47.283656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:16:47.340578 kernel: loop2: detected capacity change from 0 to 65520
Sep 4 17:16:47.362542 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Sep 4 17:16:47.362573 systemd-tmpfiles[1632]: ACLs are not supported, ignoring.
Sep 4 17:16:47.379531 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:16:47.437291 kernel: loop3: detected capacity change from 0 to 194512
Sep 4 17:16:47.561347 kernel: loop4: detected capacity change from 0 to 52536
Sep 4 17:16:47.578433 kernel: loop5: detected capacity change from 0 to 114288
Sep 4 17:16:47.602424 kernel: loop6: detected capacity change from 0 to 65520
Sep 4 17:16:47.616280 kernel: loop7: detected capacity change from 0 to 194512
Sep 4 17:16:47.656358 (sd-merge)[1638]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:16:47.657378 (sd-merge)[1638]: Merged extensions into '/usr'.
Sep 4 17:16:47.670450 systemd[1]: Reloading requested from client PID 1596 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:16:47.670647 systemd[1]: Reloading...
Sep 4 17:16:47.848342 zram_generator::config[1659]: No configuration found.
Sep 4 17:16:48.232890 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:16:48.344030 systemd[1]: Reloading finished in 672 ms.
Sep 4 17:16:48.390650 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:16:48.398741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:16:48.409587 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:16:48.424601 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 4 17:16:48.432646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:16:48.463525 systemd[1]: Reloading requested from client PID 1714 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:16:48.463557 systemd[1]: Reloading...
Sep 4 17:16:48.508390 ldconfig[1592]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:16:48.513593 systemd-udevd[1716]: Using default interface naming scheme 'v255'.
Sep 4 17:16:48.515197 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:16:48.517927 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:16:48.521977 systemd-tmpfiles[1715]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:16:48.524856 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Sep 4 17:16:48.525021 systemd-tmpfiles[1715]: ACLs are not supported, ignoring.
Sep 4 17:16:48.535643 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:16:48.536444 systemd-tmpfiles[1715]: Skipping /boot
Sep 4 17:16:48.564055 systemd-tmpfiles[1715]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:16:48.564618 systemd-tmpfiles[1715]: Skipping /boot
Sep 4 17:16:48.683339 zram_generator::config[1746]: No configuration found.
Sep 4 17:16:48.789485 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1752)
Sep 4 17:16:48.789672 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1752)
Sep 4 17:16:48.815423 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:16:49.069425 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:16:49.106267 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1747)
Sep 4 17:16:49.228271 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:16:49.228701 systemd[1]: Reloading finished in 764 ms.
Sep 4 17:16:49.261577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:16:49.265097 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:16:49.267975 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 4 17:16:49.363314 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:16:49.390984 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:16:49.407672 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:16:49.415756 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:16:49.418322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:16:49.428782 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:16:49.434793 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:16:49.440732 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:16:49.452229 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:16:49.459499 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:16:49.461741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:16:49.469742 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:16:49.485452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:16:49.509294 lvm[1914]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:16:49.510552 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:16:49.519921 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:16:49.522384 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:16:49.528761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:16:49.537599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:16:49.543841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:16:49.544617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:16:49.549350 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:16:49.549664 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:16:49.572395 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:16:49.575563 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:16:49.618220 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:16:49.620804 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:16:49.623370 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:16:49.645358 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:16:49.647427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:16:49.647710 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:16:49.649149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:16:49.661331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:16:49.693407 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:16:49.697639 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:16:49.708583 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:16:49.724664 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:16:49.737038 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:16:49.759358 augenrules[1953]: No rules
Sep 4 17:16:49.755177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:16:49.758429 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:16:49.764991 lvm[1949]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:16:49.762051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:16:49.776203 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:16:49.787899 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:16:49.842332 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:16:49.891351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:16:49.938101 systemd-resolved[1928]: Positive Trust Anchors:
Sep 4 17:16:49.938143 systemd-resolved[1928]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:16:49.938209 systemd-resolved[1928]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 4 17:16:49.946755 systemd-resolved[1928]: Defaulting to hostname 'linux'.
Sep 4 17:16:49.949504 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:16:49.952640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:16:49.955088 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:16:49.957454 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:16:49.960311 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 4 17:16:49.962574 systemd-networkd[1926]: lo: Link UP
Sep 4 17:16:49.962596 systemd-networkd[1926]: lo: Gained carrier
Sep 4 17:16:49.966396 systemd-networkd[1926]: Enumeration completed
Sep 4 17:16:49.967486 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:16:49.967666 systemd-networkd[1926]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:16:49.969504 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 4 17:16:49.971443 systemd-networkd[1926]: eth0: Link UP Sep 4 17:16:49.972311 systemd-networkd[1926]: eth0: Gained carrier Sep 4 17:16:49.972489 systemd-networkd[1926]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:49.973409 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:16:49.976105 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:16:49.981494 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:16:49.981563 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:16:49.985463 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:16:49.989819 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:16:49.994395 systemd-networkd[1926]: eth0: DHCPv4 address 172.31.22.219/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:49.996999 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:16:50.008037 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:16:50.011298 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:50.013835 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:16:50.016299 systemd[1]: Reached target network.target - Network. Sep 4 17:16:50.018178 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:16:50.020416 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:50.022711 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:16:50.022774 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 4 17:16:50.030443 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:16:50.041913 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:16:50.048227 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:16:50.064462 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:16:50.078577 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:16:50.080548 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:16:50.084608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:16:50.090575 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:16:50.098709 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:16:50.104584 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:16:50.111192 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:16:50.118580 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:16:50.128583 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:16:50.135480 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:16:50.138676 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:16:50.141673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:16:50.145621 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:16:50.153611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 4 17:16:50.167546 dbus-daemon[1976]: [system] SELinux support is enabled Sep 4 17:16:50.167902 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:16:50.177749 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:16:50.178650 dbus-daemon[1976]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1926 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:50.178137 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:16:50.198591 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:16:50.198717 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:16:50.203477 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:16:50.203539 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:16:50.206189 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:16:50.222671 jq[1977]: false Sep 4 17:16:50.242519 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:16:50.245053 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:16:50.246389 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 4 17:16:50.332276 extend-filesystems[1978]: Found loop4 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found loop5 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found loop6 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found loop7 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found nvme0n1 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found nvme0n1p1 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found nvme0n1p2 Sep 4 17:16:50.332276 extend-filesystems[1978]: Found nvme0n1p3 Sep 4 17:16:50.360823 jq[1991]: true Sep 4 17:16:50.393369 extend-filesystems[1978]: Found usr Sep 4 17:16:50.393369 extend-filesystems[1978]: Found nvme0n1p4 Sep 4 17:16:50.393369 extend-filesystems[1978]: Found nvme0n1p6 Sep 4 17:16:50.393369 extend-filesystems[1978]: Found nvme0n1p7 Sep 4 17:16:50.393369 extend-filesystems[1978]: Found nvme0n1p9 Sep 4 17:16:50.393369 extend-filesystems[1978]: Checking size of /dev/nvme0n1p9 Sep 4 17:16:50.436528 tar[1998]: linux-arm64/helm Sep 4 17:16:50.384876 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:16:50.448097 update_engine[1990]: I0904 17:16:50.374476 1990 main.cc:92] Flatcar Update Engine starting Sep 4 17:16:50.448097 update_engine[1990]: I0904 17:16:50.395599 1990 update_check_scheduler.cc:74] Next update check in 11m24s Sep 4 17:16:50.391061 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:16:50.418980 (ntainerd)[2004]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:16:50.444053 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:16:50.458262 extend-filesystems[1978]: Resized partition /dev/nvme0n1p9 Sep 4 17:16:50.469582 jq[2009]: true Sep 4 17:16:50.444420 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 4 17:16:50.486941 extend-filesystems[2028]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:16:50.497321 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:16:50.507144 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:16:50.513896 ntpd[1980]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:50.513961 ntpd[1980]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: ---------------------------------------------------- Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: corporation. Support and training for ntp-4 are Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: available at https://www.nwtime.org/support Sep 4 17:16:50.514533 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: ---------------------------------------------------- Sep 4 17:16:50.513983 ntpd[1980]: ---------------------------------------------------- Sep 4 17:16:50.514002 ntpd[1980]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:50.514022 ntpd[1980]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:50.514041 ntpd[1980]: corporation. 
Support and training for ntp-4 are Sep 4 17:16:50.514060 ntpd[1980]: available at https://www.nwtime.org/support Sep 4 17:16:50.514080 ntpd[1980]: ---------------------------------------------------- Sep 4 17:16:50.524004 ntpd[1980]: proto: precision = 0.096 usec (-23) Sep 4 17:16:50.524302 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: proto: precision = 0.096 usec (-23) Sep 4 17:16:50.524302 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: basedate set to 2024-08-23 Sep 4 17:16:50.524302 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:50.524520 ntpd[1980]: basedate set to 2024-08-23 Sep 4 17:16:50.524548 ntpd[1980]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:50.532920 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:16:50.538654 coreos-metadata[1975]: Sep 04 17:16:50.538 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:50.544944 ntpd[1980]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:50.545749 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:50.545749 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:50.545995 coreos-metadata[1975]: Sep 04 17:16:50.545 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:16:50.545056 ntpd[1980]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:50.551864 coreos-metadata[1975]: Sep 04 17:16:50.551 INFO Fetch successful Sep 4 17:16:50.551864 coreos-metadata[1975]: Sep 04 17:16:50.551 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:16:50.555395 coreos-metadata[1975]: Sep 04 17:16:50.555 INFO Fetch successful Sep 4 17:16:50.555395 coreos-metadata[1975]: Sep 04 17:16:50.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:16:50.557400 coreos-metadata[1975]: Sep 04 17:16:50.557 INFO Fetch successful Sep 4 17:16:50.557400 
coreos-metadata[1975]: Sep 04 17:16:50.557 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:16:50.557813 ntpd[1980]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:50.557919 ntpd[1980]: Listen normally on 3 eth0 172.31.22.219:123 Sep 4 17:16:50.558152 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:50.558152 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listen normally on 3 eth0 172.31.22.219:123 Sep 4 17:16:50.558152 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:50.558152 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: bind(21) AF_INET6 fe80::489:d9ff:feb3:3cfb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:50.558152 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: unable to create socket on eth0 (5) for fe80::489:d9ff:feb3:3cfb%2#123 Sep 4 17:16:50.558013 ntpd[1980]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:50.559378 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: failed to init interface for address fe80::489:d9ff:feb3:3cfb%2 Sep 4 17:16:50.559378 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: Listening on routing socket on fd #21 for interface updates Sep 4 17:16:50.558105 ntpd[1980]: bind(21) AF_INET6 fe80::489:d9ff:feb3:3cfb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:50.558146 ntpd[1980]: unable to create socket on eth0 (5) for fe80::489:d9ff:feb3:3cfb%2#123 Sep 4 17:16:50.558178 ntpd[1980]: failed to init interface for address fe80::489:d9ff:feb3:3cfb%2 Sep 4 17:16:50.558286 ntpd[1980]: Listening on routing socket on fd #21 for interface updates Sep 4 17:16:50.561227 coreos-metadata[1975]: Sep 04 17:16:50.561 INFO Fetch successful Sep 4 17:16:50.561227 coreos-metadata[1975]: Sep 04 17:16:50.561 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:16:50.562503 coreos-metadata[1975]: Sep 04 17:16:50.562 INFO Fetch failed with 404: resource not found Sep 4 17:16:50.562503 coreos-metadata[1975]: Sep 
04 17:16:50.562 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:16:50.569732 coreos-metadata[1975]: Sep 04 17:16:50.569 INFO Fetch successful Sep 4 17:16:50.569732 coreos-metadata[1975]: Sep 04 17:16:50.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:16:50.575691 coreos-metadata[1975]: Sep 04 17:16:50.575 INFO Fetch successful Sep 4 17:16:50.576004 coreos-metadata[1975]: Sep 04 17:16:50.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:16:50.578746 coreos-metadata[1975]: Sep 04 17:16:50.576 INFO Fetch successful Sep 4 17:16:50.578746 coreos-metadata[1975]: Sep 04 17:16:50.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:16:50.581558 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:16:50.584349 coreos-metadata[1975]: Sep 04 17:16:50.584 INFO Fetch successful Sep 4 17:16:50.584349 coreos-metadata[1975]: Sep 04 17:16:50.584 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:16:50.588484 coreos-metadata[1975]: Sep 04 17:16:50.588 INFO Fetch successful Sep 4 17:16:50.591822 ntpd[1980]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:50.591885 ntpd[1980]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:50.592005 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:50.592005 ntpd[1980]: 4 Sep 17:16:50 ntpd[1980]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:50.594752 extend-filesystems[2028]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:16:50.594752 extend-filesystems[2028]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:16:50.594752 extend-filesystems[2028]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Sep 4 17:16:50.604890 extend-filesystems[1978]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:16:50.612169 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:16:50.612690 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:16:50.643139 systemd-logind[1987]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:16:50.652553 systemd-logind[1987]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:16:50.656947 systemd-logind[1987]: New seat seat0. Sep 4 17:16:50.665968 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:16:50.744414 bash[2060]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:50.753492 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:16:50.764801 systemd[1]: Starting sshkeys.service... Sep 4 17:16:50.771456 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:16:50.775191 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:16:50.801700 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:16:50.806888 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:16:50.844087 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1747) Sep 4 17:16:50.844878 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:16:50.845147 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 4 17:16:50.853024 dbus-daemon[1976]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1995 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:50.859876 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:16:50.985016 polkitd[2071]: Started polkitd version 121 Sep 4 17:16:51.037079 polkitd[2071]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:16:51.039393 containerd[2004]: time="2024-09-04T17:16:51.037786256Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:16:51.037222 polkitd[2071]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:16:51.039712 polkitd[2071]: Finished loading, compiling and executing 2 rules Sep 4 17:16:51.052064 dbus-daemon[1976]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:16:51.052770 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:16:51.056363 polkitd[2071]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:16:51.084687 locksmithd[2016]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:16:51.137472 systemd-hostnamed[1995]: Hostname set to (transient) Sep 4 17:16:51.137834 systemd-resolved[1928]: System hostname changed to 'ip-172-31-22-219'. 
Sep 4 17:16:51.147092 coreos-metadata[2066]: Sep 04 17:16:51.147 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:51.148611 coreos-metadata[2066]: Sep 04 17:16:51.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:16:51.151785 coreos-metadata[2066]: Sep 04 17:16:51.151 INFO Fetch successful Sep 4 17:16:51.151785 coreos-metadata[2066]: Sep 04 17:16:51.151 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:16:51.154729 coreos-metadata[2066]: Sep 04 17:16:51.154 INFO Fetch successful Sep 4 17:16:51.161766 unknown[2066]: wrote ssh authorized keys file for user: core Sep 4 17:16:51.213077 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:51.217015 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:16:51.226519 containerd[2004]: time="2024-09-04T17:16:51.226144773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.231343 systemd[1]: Finished sshkeys.service. Sep 4 17:16:51.240517 containerd[2004]: time="2024-09-04T17:16:51.240373209Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:51.240678 containerd[2004]: time="2024-09-04T17:16:51.240647085Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:16:51.240811 containerd[2004]: time="2024-09-04T17:16:51.240782697Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 4 17:16:51.241542 containerd[2004]: time="2024-09-04T17:16:51.241187133Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:16:51.241542 containerd[2004]: time="2024-09-04T17:16:51.241230753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.241542 containerd[2004]: time="2024-09-04T17:16:51.241385001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:51.241542 containerd[2004]: time="2024-09-04T17:16:51.241414929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.243605325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.243665577Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.243701157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.243726261Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.243940845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:16:51.244439 containerd[2004]: time="2024-09-04T17:16:51.244372101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:51.245013 containerd[2004]: time="2024-09-04T17:16:51.244973925Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:51.245124 containerd[2004]: time="2024-09-04T17:16:51.245096109Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:16:51.245566 containerd[2004]: time="2024-09-04T17:16:51.245403513Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:16:51.245566 containerd[2004]: time="2024-09-04T17:16:51.245511693Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.255965889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.256078209Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.256208745Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.256269789Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.256305321Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.256570845Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257057613Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257299389Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257335557Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257366733Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257398977Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257431593Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257465133Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.258423 containerd[2004]: time="2024-09-04T17:16:51.257497581Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257535405Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257565849Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257599545Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257629797Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257678877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257712741Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.259082 containerd[2004]: time="2024-09-04T17:16:51.257742813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.262915 containerd[2004]: time="2024-09-04T17:16:51.262848021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.263822 containerd[2004]: time="2024-09-04T17:16:51.263745117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.263937 containerd[2004]: time="2024-09-04T17:16:51.263830605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.263937 containerd[2004]: time="2024-09-04T17:16:51.263868537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264057 containerd[2004]: time="2024-09-04T17:16:51.263918577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 4 17:16:51.264057 containerd[2004]: time="2024-09-04T17:16:51.263964693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264057 containerd[2004]: time="2024-09-04T17:16:51.264013785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264195 containerd[2004]: time="2024-09-04T17:16:51.264064017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264195 containerd[2004]: time="2024-09-04T17:16:51.264106929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264195 containerd[2004]: time="2024-09-04T17:16:51.264148773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264371 containerd[2004]: time="2024-09-04T17:16:51.264198153Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:16:51.264371 containerd[2004]: time="2024-09-04T17:16:51.264289341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264371 containerd[2004]: time="2024-09-04T17:16:51.264332829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264502 containerd[2004]: time="2024-09-04T17:16:51.264363513Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:16:51.264549 containerd[2004]: time="2024-09-04T17:16:51.264500565Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:16:51.264596 containerd[2004]: time="2024-09-04T17:16:51.264554841Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:16:51.264646 containerd[2004]: time="2024-09-04T17:16:51.264591573Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:16:51.264646 containerd[2004]: time="2024-09-04T17:16:51.264630153Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:16:51.264743 containerd[2004]: time="2024-09-04T17:16:51.264663921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:16:51.264743 containerd[2004]: time="2024-09-04T17:16:51.264695913Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:16:51.264743 containerd[2004]: time="2024-09-04T17:16:51.264729057Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:16:51.264900 containerd[2004]: time="2024-09-04T17:16:51.264763665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.271141317Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.274091481Z" level=info msg="Connect containerd service" Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.274182369Z" level=info msg="using legacy CRI server" Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.274201689Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.274402785Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:16:51.277024 containerd[2004]: time="2024-09-04T17:16:51.276905157Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:16:51.277800 containerd[2004]: time="2024-09-04T17:16:51.277732821Z" level=info msg="Start subscribing containerd event" Sep 4 17:16:51.279203 containerd[2004]: time="2024-09-04T17:16:51.278487657Z" level=info msg="Start recovering state" Sep 4 17:16:51.279203 containerd[2004]: time="2024-09-04T17:16:51.278643873Z" level=info msg="Start event monitor" Sep 4 17:16:51.279203 containerd[2004]: time="2024-09-04T17:16:51.278669349Z" level=info msg="Start snapshots 
syncer" Sep 4 17:16:51.279203 containerd[2004]: time="2024-09-04T17:16:51.278693169Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:16:51.279203 containerd[2004]: time="2024-09-04T17:16:51.278712309Z" level=info msg="Start streaming server" Sep 4 17:16:51.280119 containerd[2004]: time="2024-09-04T17:16:51.280078005Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:16:51.280932 containerd[2004]: time="2024-09-04T17:16:51.280886337Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:16:51.284372 containerd[2004]: time="2024-09-04T17:16:51.282387741Z" level=info msg="containerd successfully booted in 0.247584s" Sep 4 17:16:51.282528 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:16:51.514683 ntpd[1980]: bind(24) AF_INET6 fe80::489:d9ff:feb3:3cfb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:51.515549 ntpd[1980]: 4 Sep 17:16:51 ntpd[1980]: bind(24) AF_INET6 fe80::489:d9ff:feb3:3cfb%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:16:51.515644 ntpd[1980]: unable to create socket on eth0 (6) for fe80::489:d9ff:feb3:3cfb%2#123 Sep 4 17:16:51.516427 ntpd[1980]: 4 Sep 17:16:51 ntpd[1980]: unable to create socket on eth0 (6) for fe80::489:d9ff:feb3:3cfb%2#123 Sep 4 17:16:51.516427 ntpd[1980]: 4 Sep 17:16:51 ntpd[1980]: failed to init interface for address fe80::489:d9ff:feb3:3cfb%2 Sep 4 17:16:51.516212 ntpd[1980]: failed to init interface for address fe80::489:d9ff:feb3:3cfb%2 Sep 4 17:16:51.633439 systemd-networkd[1926]: eth0: Gained IPv6LL Sep 4 17:16:51.643049 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:16:51.648698 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:16:51.660885 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. 
Sep 4 17:16:51.673623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:16:51.688430 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:16:51.794869 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:16:51.839274 amazon-ssm-agent[2183]: Initializing new seelog logger Sep 4 17:16:51.839274 amazon-ssm-agent[2183]: New Seelog Logger Creation Complete Sep 4 17:16:51.839274 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.839274 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.842137 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 processing appconfig overrides Sep 4 17:16:51.843286 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.843659 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.845346 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 processing appconfig overrides Sep 4 17:16:51.845627 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.845627 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.845751 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 processing appconfig overrides Sep 4 17:16:51.846481 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO Proxy environment variables: Sep 4 17:16:51.851584 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:51.851584 amazon-ssm-agent[2183]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Sep 4 17:16:51.851755 amazon-ssm-agent[2183]: 2024/09/04 17:16:51 processing appconfig overrides Sep 4 17:16:51.946416 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO https_proxy: Sep 4 17:16:52.046581 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO http_proxy: Sep 4 17:16:52.104866 tar[1998]: linux-arm64/LICENSE Sep 4 17:16:52.104866 tar[1998]: linux-arm64/README.md Sep 4 17:16:52.142814 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:16:52.145926 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO no_proxy: Sep 4 17:16:52.244467 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:16:52.294596 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:16:52.341178 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:16:52.344701 amazon-ssm-agent[2183]: 2024-09-04 17:16:51 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:16:52.353836 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:16:52.364927 systemd[1]: Started sshd@0-172.31.22.219:22-139.178.89.65:46210.service - OpenSSH per-connection server daemon (139.178.89.65:46210). Sep 4 17:16:52.391057 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:16:52.396431 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:16:52.410741 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:16:52.445105 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO Agent will take identity from EC2 Sep 4 17:16:52.447181 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:16:52.458841 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:16:52.467881 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:16:52.470695 systemd[1]: Reached target getty.target - Login Prompts. 
Sep 4 17:16:52.544282 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:52.608752 sshd[2212]: Accepted publickey for core from 139.178.89.65 port 46210 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:52.611517 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:52.635256 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:16:52.643728 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:52.650841 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:16:52.659795 systemd-logind[1987]: New session 1 of user core. Sep 4 17:16:52.706332 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:16:52.722593 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:16:52.743762 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:52.744528 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:16:52.844273 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:16:52.943368 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 17:16:52.975526 systemd[2224]: Queued start job for default target default.target. Sep 4 17:16:52.981335 systemd[2224]: Created slice app.slice - User Application Slice. Sep 4 17:16:52.981394 systemd[2224]: Reached target paths.target - Paths. Sep 4 17:16:52.981427 systemd[2224]: Reached target timers.target - Timers. Sep 4 17:16:52.987368 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:16:53.014360 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Sep 4 17:16:53.014466 systemd[2224]: Reached target sockets.target - Sockets. Sep 4 17:16:53.014499 systemd[2224]: Reached target basic.target - Basic System. Sep 4 17:16:53.014596 systemd[2224]: Reached target default.target - Main User Target. Sep 4 17:16:53.014662 systemd[2224]: Startup finished in 250ms. Sep 4 17:16:53.014846 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:16:53.023566 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:16:53.043645 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:16:53.146340 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 17:16:53.197765 systemd[1]: Started sshd@1-172.31.22.219:22-139.178.89.65:59386.service - OpenSSH per-connection server daemon (139.178.89.65:59386). Sep 4 17:16:53.247545 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [Registrar] Starting registrar module Sep 4 17:16:53.348864 amazon-ssm-agent[2183]: 2024-09-04 17:16:52 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:16:53.407712 sshd[2236]: Accepted publickey for core from 139.178.89.65 port 59386 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:53.412559 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:53.428744 systemd-logind[1987]: New session 2 of user core. Sep 4 17:16:53.438509 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:16:53.557126 amazon-ssm-agent[2183]: 2024-09-04 17:16:53 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:16:53.572623 sshd[2236]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:53.578471 systemd[1]: sshd@1-172.31.22.219:22-139.178.89.65:59386.service: Deactivated successfully. Sep 4 17:16:53.582887 systemd[1]: session-2.scope: Deactivated successfully. 
Sep 4 17:16:53.584374 systemd-logind[1987]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:16:53.587098 systemd-logind[1987]: Removed session 2. Sep 4 17:16:53.588351 amazon-ssm-agent[2183]: 2024-09-04 17:16:53 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:16:53.588351 amazon-ssm-agent[2183]: 2024-09-04 17:16:53 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:16:53.588351 amazon-ssm-agent[2183]: 2024-09-04 17:16:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:16:53.614745 systemd[1]: Started sshd@2-172.31.22.219:22-139.178.89.65:59390.service - OpenSSH per-connection server daemon (139.178.89.65:59390). Sep 4 17:16:53.657584 amazon-ssm-agent[2183]: 2024-09-04 17:16:53 INFO [CredentialRefresher] Next credential rotation will be in 32.24165928746667 minutes Sep 4 17:16:53.789305 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 59390 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:53.791509 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:53.801623 systemd-logind[1987]: New session 3 of user core. Sep 4 17:16:53.808782 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:16:53.940895 sshd[2243]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:53.948217 systemd[1]: sshd@2-172.31.22.219:22-139.178.89.65:59390.service: Deactivated successfully. Sep 4 17:16:53.949645 systemd-logind[1987]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:16:53.954671 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:16:53.958205 systemd-logind[1987]: Removed session 3. Sep 4 17:16:54.289554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:16:54.292881 systemd[1]: Reached target multi-user.target - Multi-User System. 
Sep 4 17:16:54.296506 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:16:54.300449 systemd[1]: Startup finished in 1.211s (kernel) + 8.173s (initrd) + 9.377s (userspace) = 18.763s. Sep 4 17:16:54.514717 ntpd[1980]: Listen normally on 7 eth0 [fe80::489:d9ff:feb3:3cfb%2]:123 Sep 4 17:16:54.515187 ntpd[1980]: 4 Sep 17:16:54 ntpd[1980]: Listen normally on 7 eth0 [fe80::489:d9ff:feb3:3cfb%2]:123 Sep 4 17:16:54.617567 amazon-ssm-agent[2183]: 2024-09-04 17:16:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:16:54.719485 amazon-ssm-agent[2183]: 2024-09-04 17:16:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2260) started Sep 4 17:16:54.820212 amazon-ssm-agent[2183]: 2024-09-04 17:16:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:16:55.953654 kubelet[2254]: E0904 17:16:55.953520 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:16:55.958785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:16:55.959143 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:16:55.959737 systemd[1]: kubelet.service: Consumed 1.329s CPU time. Sep 4 17:16:57.674307 systemd-resolved[1928]: Clock change detected. Flushing caches. Sep 4 17:17:04.142344 systemd[1]: Started sshd@3-172.31.22.219:22-139.178.89.65:51230.service - OpenSSH per-connection server daemon (139.178.89.65:51230). 
Sep 4 17:17:04.318634 sshd[2278]: Accepted publickey for core from 139.178.89.65 port 51230 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:04.321221 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:04.329686 systemd-logind[1987]: New session 4 of user core. Sep 4 17:17:04.339048 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:17:04.466166 sshd[2278]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:04.472050 systemd[1]: sshd@3-172.31.22.219:22-139.178.89.65:51230.service: Deactivated successfully. Sep 4 17:17:04.476926 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:17:04.479589 systemd-logind[1987]: Session 4 logged out. Waiting for processes to exit. Sep 4 17:17:04.481492 systemd-logind[1987]: Removed session 4. Sep 4 17:17:04.502943 systemd[1]: Started sshd@4-172.31.22.219:22-139.178.89.65:51244.service - OpenSSH per-connection server daemon (139.178.89.65:51244). Sep 4 17:17:04.684963 sshd[2285]: Accepted publickey for core from 139.178.89.65 port 51244 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:04.687533 sshd[2285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:04.694761 systemd-logind[1987]: New session 5 of user core. Sep 4 17:17:04.707046 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:17:04.827599 sshd[2285]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:04.834639 systemd[1]: sshd@4-172.31.22.219:22-139.178.89.65:51244.service: Deactivated successfully. Sep 4 17:17:04.837639 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:17:04.838742 systemd-logind[1987]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:17:04.841068 systemd-logind[1987]: Removed session 5. 
Sep 4 17:17:04.867320 systemd[1]: Started sshd@5-172.31.22.219:22-139.178.89.65:51256.service - OpenSSH per-connection server daemon (139.178.89.65:51256). Sep 4 17:17:05.049140 sshd[2292]: Accepted publickey for core from 139.178.89.65 port 51256 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:05.051696 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:05.061125 systemd-logind[1987]: New session 6 of user core. Sep 4 17:17:05.067061 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:17:05.196145 sshd[2292]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:05.202868 systemd[1]: sshd@5-172.31.22.219:22-139.178.89.65:51256.service: Deactivated successfully. Sep 4 17:17:05.207128 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:17:05.208571 systemd-logind[1987]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:17:05.210575 systemd-logind[1987]: Removed session 6. Sep 4 17:17:05.237004 systemd[1]: Started sshd@6-172.31.22.219:22-139.178.89.65:51264.service - OpenSSH per-connection server daemon (139.178.89.65:51264). Sep 4 17:17:05.419286 sshd[2299]: Accepted publickey for core from 139.178.89.65 port 51264 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:05.421868 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:05.429391 systemd-logind[1987]: New session 7 of user core. Sep 4 17:17:05.440025 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 4 17:17:05.556566 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:17:05.557578 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:05.573932 sudo[2302]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:05.599872 sshd[2299]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:05.606469 systemd[1]: sshd@6-172.31.22.219:22-139.178.89.65:51264.service: Deactivated successfully. Sep 4 17:17:05.609520 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:17:05.611035 systemd-logind[1987]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:17:05.613263 systemd-logind[1987]: Removed session 7. Sep 4 17:17:05.637313 systemd[1]: Started sshd@7-172.31.22.219:22-139.178.89.65:51270.service - OpenSSH per-connection server daemon (139.178.89.65:51270). Sep 4 17:17:05.821636 sshd[2307]: Accepted publickey for core from 139.178.89.65 port 51270 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:05.823688 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:05.830993 systemd-logind[1987]: New session 8 of user core. Sep 4 17:17:05.839028 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 4 17:17:05.946235 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:17:05.946905 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:05.952953 sudo[2311]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:05.962733 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:17:05.963384 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:05.984364 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:17:06.000462 auditctl[2314]: No rules Sep 4 17:17:06.002753 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:17:06.003293 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:17:06.012577 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:17:06.064708 augenrules[2332]: No rules Sep 4 17:17:06.066365 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:17:06.068113 sudo[2310]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:06.094068 sshd[2307]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:06.101059 systemd[1]: sshd@7-172.31.22.219:22-139.178.89.65:51270.service: Deactivated successfully. Sep 4 17:17:06.104439 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:17:06.106699 systemd-logind[1987]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:17:06.108568 systemd-logind[1987]: Removed session 8. Sep 4 17:17:06.125474 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:17:06.134155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 4 17:17:06.136731 systemd[1]: Started sshd@8-172.31.22.219:22-139.178.89.65:51278.service - OpenSSH per-connection server daemon (139.178.89.65:51278). Sep 4 17:17:06.323722 sshd[2341]: Accepted publickey for core from 139.178.89.65 port 51278 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:17:06.327078 sshd[2341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:17:06.339863 systemd-logind[1987]: New session 9 of user core. Sep 4 17:17:06.346100 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:17:06.427503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:06.436586 (kubelet)[2351]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:06.455509 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:17:06.456643 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:17:06.539830 kubelet[2351]: E0904 17:17:06.538262 2351 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:06.549726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:06.550132 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:06.655258 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Sep 4 17:17:06.655565 (dockerd)[2369]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:17:07.028096 dockerd[2369]: time="2024-09-04T17:17:07.027669646Z" level=info msg="Starting up" Sep 4 17:17:07.165895 dockerd[2369]: time="2024-09-04T17:17:07.165811379Z" level=info msg="Loading containers: start." Sep 4 17:17:07.321007 kernel: Initializing XFRM netlink socket Sep 4 17:17:07.352707 (udev-worker)[2393]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:17:07.436839 systemd-networkd[1926]: docker0: Link UP Sep 4 17:17:07.478522 dockerd[2369]: time="2024-09-04T17:17:07.478445628Z" level=info msg="Loading containers: done." Sep 4 17:17:07.506220 dockerd[2369]: time="2024-09-04T17:17:07.506036340Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:17:07.506821 dockerd[2369]: time="2024-09-04T17:17:07.506475756Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:17:07.506821 dockerd[2369]: time="2024-09-04T17:17:07.506685648Z" level=info msg="Daemon has completed initialization" Sep 4 17:17:07.558823 dockerd[2369]: time="2024-09-04T17:17:07.558598561Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:17:07.559148 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:17:08.916307 containerd[2004]: time="2024-09-04T17:17:08.916229439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\"" Sep 4 17:17:09.529935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226446294.mount: Deactivated successfully. 
Sep 4 17:17:11.070589 containerd[2004]: time="2024-09-04T17:17:11.070079930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:11.072325 containerd[2004]: time="2024-09-04T17:17:11.072239570Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=32283562" Sep 4 17:17:11.073249 containerd[2004]: time="2024-09-04T17:17:11.073193738Z" level=info msg="ImageCreate event name:\"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:11.079064 containerd[2004]: time="2024-09-04T17:17:11.078983018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:11.082500 containerd[2004]: time="2024-09-04T17:17:11.081437582Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"32280362\" in 2.165121071s" Sep 4 17:17:11.082500 containerd[2004]: time="2024-09-04T17:17:11.081500798Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\"" Sep 4 17:17:11.120015 containerd[2004]: time="2024-09-04T17:17:11.119954822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\"" Sep 4 17:17:12.844444 containerd[2004]: time="2024-09-04T17:17:12.844380931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:12.846511 containerd[2004]: time="2024-09-04T17:17:12.846452971Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=29368210" Sep 4 17:17:12.847405 containerd[2004]: time="2024-09-04T17:17:12.846900823Z" level=info msg="ImageCreate event name:\"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:12.858838 containerd[2004]: time="2024-09-04T17:17:12.858008623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:12.860078 containerd[2004]: time="2024-09-04T17:17:12.860026003Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"30855477\" in 1.740003621s" Sep 4 17:17:12.860262 containerd[2004]: time="2024-09-04T17:17:12.860227759Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\"" Sep 4 17:17:12.902116 containerd[2004]: time="2024-09-04T17:17:12.902058247Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\"" Sep 4 17:17:13.991686 containerd[2004]: time="2024-09-04T17:17:13.991616829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:13.993071 containerd[2004]: time="2024-09-04T17:17:13.992547693Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=15751073" Sep 4 17:17:13.994472 containerd[2004]: time="2024-09-04T17:17:13.994380645Z" level=info msg="ImageCreate event name:\"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:14.000299 containerd[2004]: time="2024-09-04T17:17:14.000187037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:14.002774 containerd[2004]: time="2024-09-04T17:17:14.002573141Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"17238358\" in 1.100452422s" Sep 4 17:17:14.002774 containerd[2004]: time="2024-09-04T17:17:14.002634041Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\"" Sep 4 17:17:14.040852 containerd[2004]: time="2024-09-04T17:17:14.040767137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\"" Sep 4 17:17:15.313172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116666207.mount: Deactivated successfully. 
Sep 4 17:17:15.774975 containerd[2004]: time="2024-09-04T17:17:15.774903117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:15.776420 containerd[2004]: time="2024-09-04T17:17:15.776348037Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=25251883" Sep 4 17:17:15.777857 containerd[2004]: time="2024-09-04T17:17:15.777747609Z" level=info msg="ImageCreate event name:\"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:15.781590 containerd[2004]: time="2024-09-04T17:17:15.781489221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:15.783308 containerd[2004]: time="2024-09-04T17:17:15.783097137Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"25250902\" in 1.742237636s" Sep 4 17:17:15.783308 containerd[2004]: time="2024-09-04T17:17:15.783151029Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\"" Sep 4 17:17:15.823456 containerd[2004]: time="2024-09-04T17:17:15.823162534Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Sep 4 17:17:16.364010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144307677.mount: Deactivated successfully. Sep 4 17:17:16.720167 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 4 17:17:16.728163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:17.060596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:17.076385 (kubelet)[2648]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:17.172257 kubelet[2648]: E0904 17:17:17.172125 2648 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:17.180296 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:17.180998 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:17.680976 containerd[2004]: time="2024-09-04T17:17:17.680909867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:17.683921 containerd[2004]: time="2024-09-04T17:17:17.683843087Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Sep 4 17:17:17.685024 containerd[2004]: time="2024-09-04T17:17:17.684923315Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:17.691878 containerd[2004]: time="2024-09-04T17:17:17.691770803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:17.694414 containerd[2004]: time="2024-09-04T17:17:17.694209707Z" level=info msg="Pulled image 
\"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.870983033s" Sep 4 17:17:17.694414 containerd[2004]: time="2024-09-04T17:17:17.694269143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Sep 4 17:17:17.733176 containerd[2004]: time="2024-09-04T17:17:17.733023275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:17:18.209580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739244736.mount: Deactivated successfully. Sep 4 17:17:18.216803 containerd[2004]: time="2024-09-04T17:17:18.216487462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:18.218173 containerd[2004]: time="2024-09-04T17:17:18.218120230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:17:18.218863 containerd[2004]: time="2024-09-04T17:17:18.218467882Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:18.222746 containerd[2004]: time="2024-09-04T17:17:18.222647422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:18.225038 containerd[2004]: time="2024-09-04T17:17:18.224465770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", 
repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 491.387511ms" Sep 4 17:17:18.225038 containerd[2004]: time="2024-09-04T17:17:18.224522962Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:17:18.264712 containerd[2004]: time="2024-09-04T17:17:18.264624010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:17:18.793887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762642575.mount: Deactivated successfully. Sep 4 17:17:21.237984 containerd[2004]: time="2024-09-04T17:17:21.237906169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:21.247172 containerd[2004]: time="2024-09-04T17:17:21.247066009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:17:21.248110 containerd[2004]: time="2024-09-04T17:17:21.248010625Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:21.254336 containerd[2004]: time="2024-09-04T17:17:21.254255461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:21.257131 containerd[2004]: time="2024-09-04T17:17:21.257083825Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size 
\"65198393\" in 2.992376763s" Sep 4 17:17:21.259350 containerd[2004]: time="2024-09-04T17:17:21.257279461Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:17:21.332587 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:17:27.220709 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 4 17:17:27.231974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:27.571235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:27.573466 (kubelet)[2791]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:27.659462 kubelet[2791]: E0904 17:17:27.659365 2791 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:27.664337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:27.664685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:17:30.795667 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:30.815268 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:30.851902 systemd[1]: Reloading requested from client PID 2805 ('systemctl') (unit session-9.scope)... Sep 4 17:17:30.851930 systemd[1]: Reloading... Sep 4 17:17:31.083850 zram_generator::config[2846]: No configuration found. 
Sep 4 17:17:31.319240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:31.490588 systemd[1]: Reloading finished in 637 ms. Sep 4 17:17:31.577731 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:17:31.577946 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:17:31.579281 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:31.594495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:31.898105 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:31.901308 (kubelet)[2905]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:31.988140 kubelet[2905]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:31.988140 kubelet[2905]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:17:31.988140 kubelet[2905]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 4 17:17:31.988730 kubelet[2905]: I0904 17:17:31.988235 2905 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:32.915227 kubelet[2905]: I0904 17:17:32.915187 2905 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:17:32.915894 kubelet[2905]: I0904 17:17:32.915423 2905 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:32.915894 kubelet[2905]: I0904 17:17:32.915765 2905 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:17:32.951073 kubelet[2905]: I0904 17:17:32.951024 2905 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:32.952351 kubelet[2905]: E0904 17:17:32.952317 2905 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.219:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:32.971446 kubelet[2905]: I0904 17:17:32.971401 2905 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:17:32.971916 kubelet[2905]: I0904 17:17:32.971888 2905 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:32.972265 kubelet[2905]: I0904 17:17:32.972206 2905 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:32.972432 kubelet[2905]: I0904 17:17:32.972269 2905 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:17:32.972432 kubelet[2905]: I0904 17:17:32.972292 2905 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:32.972553 kubelet[2905]: I0904 
17:17:32.972472 2905 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:32.976962 kubelet[2905]: I0904 17:17:32.976911 2905 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:17:32.976962 kubelet[2905]: I0904 17:17:32.976965 2905 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:32.977114 kubelet[2905]: I0904 17:17:32.977016 2905 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:17:32.977114 kubelet[2905]: I0904 17:17:32.977048 2905 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:32.985595 kubelet[2905]: W0904 17:17:32.985091 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:32.985595 kubelet[2905]: E0904 17:17:32.985172 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:32.985595 kubelet[2905]: W0904 17:17:32.985274 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-219&limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:32.985595 kubelet[2905]: E0904 17:17:32.985355 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-219&limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:32.985595 kubelet[2905]: I0904 17:17:32.985489 2905 kuberuntime_manager.go:258] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:32.987015 kubelet[2905]: I0904 17:17:32.986120 2905 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:17:32.987015 kubelet[2905]: W0904 17:17:32.986265 2905 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:17:32.987503 kubelet[2905]: I0904 17:17:32.987447 2905 server.go:1256] "Started kubelet" Sep 4 17:17:32.989211 kubelet[2905]: I0904 17:17:32.989155 2905 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:32.991875 kubelet[2905]: I0904 17:17:32.991364 2905 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:17:32.996544 kubelet[2905]: I0904 17:17:32.995477 2905 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:17:32.996544 kubelet[2905]: I0904 17:17:32.995972 2905 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:32.998867 kubelet[2905]: E0904 17:17:32.998825 2905 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.22.219:6443/api/v1/namespaces/default/events\": dial tcp 172.31.22.219:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-22-219.17f21a0d9bd15487 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-22-219,UID:ip-172-31-22-219,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-22-219,},FirstTimestamp:2024-09-04 17:17:32.987405447 +0000 UTC m=+1.078470066,LastTimestamp:2024-09-04 17:17:32.987405447 +0000 UTC m=+1.078470066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-22-219,}" Sep 4 17:17:33.003015 kubelet[2905]: I0904 17:17:33.002951 2905 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:33.005082 kubelet[2905]: I0904 17:17:33.005045 2905 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:33.005407 kubelet[2905]: I0904 17:17:33.005382 2905 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:17:33.009803 kubelet[2905]: I0904 17:17:33.009643 2905 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:33.011382 kubelet[2905]: W0904 17:17:33.011163 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.011611 kubelet[2905]: E0904 17:17:33.011580 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.013639 kubelet[2905]: E0904 17:17:33.013530 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": dial tcp 172.31.22.219:6443: connect: connection refused" interval="200ms" Sep 4 17:17:33.014701 kubelet[2905]: E0904 17:17:33.014640 2905 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:33.016418 kubelet[2905]: I0904 17:17:33.016388 2905 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:17:33.017825 kubelet[2905]: I0904 17:17:33.017273 2905 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:17:33.019861 kubelet[2905]: I0904 17:17:33.019830 2905 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:17:33.049637 kubelet[2905]: I0904 17:17:33.049598 2905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:33.061191 kubelet[2905]: I0904 17:17:33.061156 2905 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:17:33.061815 kubelet[2905]: I0904 17:17:33.061368 2905 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:33.061815 kubelet[2905]: I0904 17:17:33.061423 2905 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:17:33.061815 kubelet[2905]: E0904 17:17:33.061525 2905 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:33.065982 kubelet[2905]: W0904 17:17:33.065917 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.066616 kubelet[2905]: E0904 17:17:33.066546 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.067493 kubelet[2905]: I0904 17:17:33.067440 2905 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:33.067493 kubelet[2905]: I0904 17:17:33.067483 2905 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:33.067673 kubelet[2905]: I0904 17:17:33.067513 2905 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:33.071673 kubelet[2905]: I0904 17:17:33.071446 2905 policy_none.go:49] "None policy: Start" Sep 4 17:17:33.073514 kubelet[2905]: I0904 17:17:33.073474 2905 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:17:33.073651 kubelet[2905]: I0904 17:17:33.073546 2905 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:33.084354 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:17:33.097673 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:17:33.105452 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 4 17:17:33.108292 kubelet[2905]: I0904 17:17:33.108248 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:33.109513 kubelet[2905]: E0904 17:17:33.109467 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.219:6443/api/v1/nodes\": dial tcp 172.31.22.219:6443: connect: connection refused" node="ip-172-31-22-219" Sep 4 17:17:33.121661 kubelet[2905]: I0904 17:17:33.120461 2905 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:17:33.121661 kubelet[2905]: I0904 17:17:33.120860 2905 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:33.124925 kubelet[2905]: E0904 17:17:33.124864 2905 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-219\" not found" Sep 4 17:17:33.161954 kubelet[2905]: I0904 17:17:33.161813 2905 topology_manager.go:215] "Topology Admit Handler" podUID="1ee23fe9afb6fe760fb8d3448c7e5af3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-219" Sep 4 17:17:33.164258 kubelet[2905]: I0904 17:17:33.163923 2905 topology_manager.go:215] "Topology Admit Handler" podUID="e43c5d82a015f77184a4633661c897e2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.166437 kubelet[2905]: I0904 17:17:33.166014 2905 topology_manager.go:215] "Topology Admit Handler" podUID="61153b2363dbe5b5fe661a8666cfcca1" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-219" Sep 4 17:17:33.179668 systemd[1]: Created slice kubepods-burstable-pod1ee23fe9afb6fe760fb8d3448c7e5af3.slice - libcontainer container kubepods-burstable-pod1ee23fe9afb6fe760fb8d3448c7e5af3.slice. 
Sep 4 17:17:33.204541 systemd[1]: Created slice kubepods-burstable-pod61153b2363dbe5b5fe661a8666cfcca1.slice - libcontainer container kubepods-burstable-pod61153b2363dbe5b5fe661a8666cfcca1.slice. Sep 4 17:17:33.212153 kubelet[2905]: I0904 17:17:33.211713 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-ca-certs\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:33.212153 kubelet[2905]: I0904 17:17:33.211852 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.212153 kubelet[2905]: I0904 17:17:33.211906 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.212153 kubelet[2905]: I0904 17:17:33.211952 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.212153 kubelet[2905]: I0904 17:17:33.211998 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61153b2363dbe5b5fe661a8666cfcca1-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-219\" (UID: \"61153b2363dbe5b5fe661a8666cfcca1\") " pod="kube-system/kube-scheduler-ip-172-31-22-219" Sep 4 17:17:33.212570 kubelet[2905]: I0904 17:17:33.212042 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:33.212570 kubelet[2905]: I0904 17:17:33.212088 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:33.212570 kubelet[2905]: I0904 17:17:33.212129 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.212570 kubelet[2905]: I0904 17:17:33.212176 2905 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:33.214615 kubelet[2905]: E0904 17:17:33.214556 2905 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": dial tcp 172.31.22.219:6443: connect: connection refused" interval="400ms" Sep 4 17:17:33.215550 systemd[1]: Created slice kubepods-burstable-pode43c5d82a015f77184a4633661c897e2.slice - libcontainer container kubepods-burstable-pode43c5d82a015f77184a4633661c897e2.slice. Sep 4 17:17:33.313198 kubelet[2905]: I0904 17:17:33.312602 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:33.313198 kubelet[2905]: E0904 17:17:33.313068 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.219:6443/api/v1/nodes\": dial tcp 172.31.22.219:6443: connect: connection refused" node="ip-172-31-22-219" Sep 4 17:17:33.500701 containerd[2004]: time="2024-09-04T17:17:33.500630533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-219,Uid:1ee23fe9afb6fe760fb8d3448c7e5af3,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:33.512139 containerd[2004]: time="2024-09-04T17:17:33.511672094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-219,Uid:61153b2363dbe5b5fe661a8666cfcca1,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:33.521317 containerd[2004]: time="2024-09-04T17:17:33.521241914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-219,Uid:e43c5d82a015f77184a4633661c897e2,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:33.615525 kubelet[2905]: E0904 17:17:33.615475 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": dial tcp 172.31.22.219:6443: connect: connection refused" interval="800ms" Sep 4 17:17:33.715870 kubelet[2905]: I0904 17:17:33.715364 2905 
kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:33.715870 kubelet[2905]: E0904 17:17:33.715861 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.22.219:6443/api/v1/nodes\": dial tcp 172.31.22.219:6443: connect: connection refused" node="ip-172-31-22-219" Sep 4 17:17:33.917250 kubelet[2905]: W0904 17:17:33.917018 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.22.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.917250 kubelet[2905]: E0904 17:17:33.917120 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.219:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:33.994754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2214249119.mount: Deactivated successfully. 
Sep 4 17:17:34.022544 containerd[2004]: time="2024-09-04T17:17:34.022445232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:34.033670 containerd[2004]: time="2024-09-04T17:17:34.033586428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:17:34.036607 containerd[2004]: time="2024-09-04T17:17:34.036512076Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:34.040360 containerd[2004]: time="2024-09-04T17:17:34.040273116Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:34.042891 containerd[2004]: time="2024-09-04T17:17:34.042711720Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:34.049114 containerd[2004]: time="2024-09-04T17:17:34.049023816Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:34.050376 containerd[2004]: time="2024-09-04T17:17:34.050238672Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:34.055014 containerd[2004]: time="2024-09-04T17:17:34.054961464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:34.059177 
containerd[2004]: time="2024-09-04T17:17:34.058460100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 546.618902ms" Sep 4 17:17:34.061570 containerd[2004]: time="2024-09-04T17:17:34.061504548Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.755023ms" Sep 4 17:17:34.063666 containerd[2004]: time="2024-09-04T17:17:34.063615408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 542.26067ms" Sep 4 17:17:34.261353 containerd[2004]: time="2024-09-04T17:17:34.260826277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:34.261353 containerd[2004]: time="2024-09-04T17:17:34.260951905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:34.261353 containerd[2004]: time="2024-09-04T17:17:34.260989693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.261353 containerd[2004]: time="2024-09-04T17:17:34.261168805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.267052 containerd[2004]: time="2024-09-04T17:17:34.265260505Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:34.267052 containerd[2004]: time="2024-09-04T17:17:34.265826497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:34.267052 containerd[2004]: time="2024-09-04T17:17:34.265896685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.267052 containerd[2004]: time="2024-09-04T17:17:34.266122129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.269063 containerd[2004]: time="2024-09-04T17:17:34.268562065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:34.269063 containerd[2004]: time="2024-09-04T17:17:34.268647637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:34.269063 containerd[2004]: time="2024-09-04T17:17:34.268672729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.272267 containerd[2004]: time="2024-09-04T17:17:34.271989697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:34.278014 kubelet[2905]: W0904 17:17:34.277944 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.22.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.278998 kubelet[2905]: E0904 17:17:34.278701 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.219:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.312121 systemd[1]: Started cri-containerd-aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604.scope - libcontainer container aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604. Sep 4 17:17:34.329937 systemd[1]: Started cri-containerd-7a2b90a79bbace353fca1661fd07514fa0a6e5aeea9ad7f97f6d514c60a61f50.scope - libcontainer container 7a2b90a79bbace353fca1661fd07514fa0a6e5aeea9ad7f97f6d514c60a61f50. Sep 4 17:17:34.344990 systemd[1]: Started cri-containerd-31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2.scope - libcontainer container 31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2. 
Sep 4 17:17:34.412532 containerd[2004]: time="2024-09-04T17:17:34.412478942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-219,Uid:1ee23fe9afb6fe760fb8d3448c7e5af3,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a2b90a79bbace353fca1661fd07514fa0a6e5aeea9ad7f97f6d514c60a61f50\"" Sep 4 17:17:34.418141 kubelet[2905]: E0904 17:17:34.418084 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": dial tcp 172.31.22.219:6443: connect: connection refused" interval="1.6s" Sep 4 17:17:34.422380 containerd[2004]: time="2024-09-04T17:17:34.422313602Z" level=info msg="CreateContainer within sandbox \"7a2b90a79bbace353fca1661fd07514fa0a6e5aeea9ad7f97f6d514c60a61f50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:17:34.460998 kubelet[2905]: W0904 17:17:34.460870 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.22.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-219&limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.461131 kubelet[2905]: E0904 17:17:34.461013 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.219:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-219&limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.475661 containerd[2004]: time="2024-09-04T17:17:34.475131470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-219,Uid:61153b2363dbe5b5fe661a8666cfcca1,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604\"" Sep 4 17:17:34.478676 containerd[2004]: 
time="2024-09-04T17:17:34.478616990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-219,Uid:e43c5d82a015f77184a4633661c897e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2\"" Sep 4 17:17:34.484305 containerd[2004]: time="2024-09-04T17:17:34.484235990Z" level=info msg="CreateContainer within sandbox \"7a2b90a79bbace353fca1661fd07514fa0a6e5aeea9ad7f97f6d514c60a61f50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d4fa07a430d750db74f2bf12337378792b5de8db1a63b72cd76c7bac5c62a773\"" Sep 4 17:17:34.487993 containerd[2004]: time="2024-09-04T17:17:34.487931738Z" level=info msg="StartContainer for \"d4fa07a430d750db74f2bf12337378792b5de8db1a63b72cd76c7bac5c62a773\"" Sep 4 17:17:34.493588 containerd[2004]: time="2024-09-04T17:17:34.493381382Z" level=info msg="CreateContainer within sandbox \"31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:17:34.493762 containerd[2004]: time="2024-09-04T17:17:34.493688906Z" level=info msg="CreateContainer within sandbox \"aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:17:34.521341 containerd[2004]: time="2024-09-04T17:17:34.520810347Z" level=info msg="CreateContainer within sandbox \"31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e\"" Sep 4 17:17:34.521877 kubelet[2905]: I0904 17:17:34.521555 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:34.523612 kubelet[2905]: E0904 17:17:34.523237 2905 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://172.31.22.219:6443/api/v1/nodes\": dial tcp 172.31.22.219:6443: connect: connection refused" node="ip-172-31-22-219" Sep 4 17:17:34.524241 containerd[2004]: time="2024-09-04T17:17:34.523381599Z" level=info msg="StartContainer for \"75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e\"" Sep 4 17:17:34.530355 containerd[2004]: time="2024-09-04T17:17:34.530187195Z" level=info msg="CreateContainer within sandbox \"aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95\"" Sep 4 17:17:34.531077 containerd[2004]: time="2024-09-04T17:17:34.530870007Z" level=info msg="StartContainer for \"b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95\"" Sep 4 17:17:34.555596 kubelet[2905]: W0904 17:17:34.555107 2905 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.22.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.555596 kubelet[2905]: E0904 17:17:34.555206 2905 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.219:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.219:6443: connect: connection refused Sep 4 17:17:34.561341 systemd[1]: Started cri-containerd-d4fa07a430d750db74f2bf12337378792b5de8db1a63b72cd76c7bac5c62a773.scope - libcontainer container d4fa07a430d750db74f2bf12337378792b5de8db1a63b72cd76c7bac5c62a773. Sep 4 17:17:34.618448 systemd[1]: Started cri-containerd-75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e.scope - libcontainer container 75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e. 
Sep 4 17:17:34.622104 systemd[1]: Started cri-containerd-b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95.scope - libcontainer container b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95. Sep 4 17:17:34.679197 containerd[2004]: time="2024-09-04T17:17:34.678887955Z" level=info msg="StartContainer for \"d4fa07a430d750db74f2bf12337378792b5de8db1a63b72cd76c7bac5c62a773\" returns successfully" Sep 4 17:17:34.747825 containerd[2004]: time="2024-09-04T17:17:34.746551660Z" level=info msg="StartContainer for \"b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95\" returns successfully" Sep 4 17:17:34.765681 containerd[2004]: time="2024-09-04T17:17:34.765590800Z" level=info msg="StartContainer for \"75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e\" returns successfully" Sep 4 17:17:35.698900 update_engine[1990]: I0904 17:17:35.697830 1990 update_attempter.cc:509] Updating boot flags... Sep 4 17:17:35.814065 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3192) Sep 4 17:17:36.135948 kubelet[2905]: I0904 17:17:36.135897 2905 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:36.298306 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3191) Sep 4 17:17:38.523501 kubelet[2905]: I0904 17:17:38.523455 2905 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-219" Sep 4 17:17:38.635135 kubelet[2905]: E0904 17:17:38.635036 2905 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Sep 4 17:17:38.986638 kubelet[2905]: I0904 17:17:38.986579 2905 apiserver.go:52] "Watching apiserver" Sep 4 17:17:39.010270 kubelet[2905]: I0904 17:17:39.010206 2905 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:41.616062 systemd[1]: Reloading 
requested from client PID 3362 ('systemctl') (unit session-9.scope)... Sep 4 17:17:41.616094 systemd[1]: Reloading... Sep 4 17:17:41.849958 zram_generator::config[3409]: No configuration found. Sep 4 17:17:42.121438 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:42.328189 systemd[1]: Reloading finished in 711 ms. Sep 4 17:17:42.416038 kubelet[2905]: I0904 17:17:42.414701 2905 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:42.415191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:42.429016 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:17:42.429436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:42.429514 systemd[1]: kubelet.service: Consumed 1.829s CPU time, 114.3M memory peak, 0B memory swap peak. Sep 4 17:17:42.437417 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:42.743158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:42.759308 (kubelet)[3460]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:42.887317 kubelet[3460]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:42.887317 kubelet[3460]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Sep 4 17:17:42.887317 kubelet[3460]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:42.888628 kubelet[3460]: I0904 17:17:42.888249 3460 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:42.898646 kubelet[3460]: I0904 17:17:42.898585 3460 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:17:42.898646 kubelet[3460]: I0904 17:17:42.898635 3460 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:42.899051 kubelet[3460]: I0904 17:17:42.898997 3460 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:17:42.903199 kubelet[3460]: I0904 17:17:42.903032 3460 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:17:42.907190 kubelet[3460]: I0904 17:17:42.907087 3460 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:42.921005 kubelet[3460]: I0904 17:17:42.920457 3460 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:17:42.921273 kubelet[3460]: I0904 17:17:42.921245 3460 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:42.921926 kubelet[3460]: I0904 17:17:42.921881 3460 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:42.923142 kubelet[3460]: I0904 17:17:42.923051 3460 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:17:42.923373 kubelet[3460]: I0904 17:17:42.923350 3460 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:42.923550 kubelet[3460]: I0904 
17:17:42.923529 3460 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:42.925822 kubelet[3460]: I0904 17:17:42.925464 3460 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:17:42.925822 kubelet[3460]: I0904 17:17:42.925510 3460 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:42.925822 kubelet[3460]: I0904 17:17:42.925557 3460 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:17:42.925822 kubelet[3460]: I0904 17:17:42.925600 3460 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:42.932240 kubelet[3460]: I0904 17:17:42.932154 3460 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:42.932714 kubelet[3460]: I0904 17:17:42.932675 3460 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:17:42.935168 kubelet[3460]: I0904 17:17:42.934656 3460 server.go:1256] "Started kubelet" Sep 4 17:17:42.940741 kubelet[3460]: I0904 17:17:42.939684 3460 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:42.949708 kubelet[3460]: I0904 17:17:42.942510 3460 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:17:42.952114 kubelet[3460]: I0904 17:17:42.952066 3460 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:17:42.953603 kubelet[3460]: I0904 17:17:42.953559 3460 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:42.956816 kubelet[3460]: I0904 17:17:42.956306 3460 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:42.960830 kubelet[3460]: I0904 17:17:42.959981 3460 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:42.963534 kubelet[3460]: I0904 17:17:42.961476 3460 desired_state_of_world_populator.go:151] "Desired state populator 
starts to run" Sep 4 17:17:42.971849 kubelet[3460]: I0904 17:17:42.962086 3460 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:42.971849 kubelet[3460]: I0904 17:17:42.969867 3460 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:17:42.971849 kubelet[3460]: I0904 17:17:42.971727 3460 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:17:43.023083 kubelet[3460]: E0904 17:17:43.022429 3460 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:43.032132 kubelet[3460]: I0904 17:17:43.032087 3460 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:17:43.068036 kubelet[3460]: I0904 17:17:43.066655 3460 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:43.071916 kubelet[3460]: I0904 17:17:43.070431 3460 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:17:43.071916 kubelet[3460]: I0904 17:17:43.070473 3460 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:43.071916 kubelet[3460]: I0904 17:17:43.070506 3460 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:17:43.071916 kubelet[3460]: E0904 17:17:43.070586 3460 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:43.076860 kubelet[3460]: E0904 17:17:43.076398 3460 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Sep 4 17:17:43.094828 kubelet[3460]: I0904 17:17:43.093936 3460 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-22-219" Sep 4 17:17:43.116129 kubelet[3460]: I0904 17:17:43.115765 3460 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-22-219" Sep 4 17:17:43.117530 kubelet[3460]: I0904 17:17:43.117040 3460 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-22-219" Sep 4 17:17:43.170957 kubelet[3460]: E0904 17:17:43.170908 3460 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:17:43.188741 kubelet[3460]: I0904 17:17:43.188694 3460 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:43.188741 kubelet[3460]: I0904 17:17:43.188735 3460 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:43.189000 kubelet[3460]: I0904 17:17:43.188770 3460 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:43.190411 kubelet[3460]: I0904 17:17:43.189028 3460 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:17:43.190411 kubelet[3460]: I0904 17:17:43.189068 3460 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:17:43.190411 kubelet[3460]: I0904 17:17:43.189085 3460 policy_none.go:49] "None policy: 
Start" Sep 4 17:17:43.192684 kubelet[3460]: I0904 17:17:43.191684 3460 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:17:43.192684 kubelet[3460]: I0904 17:17:43.191737 3460 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:43.192684 kubelet[3460]: I0904 17:17:43.192205 3460 state_mem.go:75] "Updated machine memory state" Sep 4 17:17:43.202926 kubelet[3460]: I0904 17:17:43.201908 3460 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:17:43.202926 kubelet[3460]: I0904 17:17:43.202668 3460 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:43.371705 kubelet[3460]: I0904 17:17:43.371536 3460 topology_manager.go:215] "Topology Admit Handler" podUID="1ee23fe9afb6fe760fb8d3448c7e5af3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-219" Sep 4 17:17:43.373928 kubelet[3460]: I0904 17:17:43.372763 3460 topology_manager.go:215] "Topology Admit Handler" podUID="e43c5d82a015f77184a4633661c897e2" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.374258 kubelet[3460]: I0904 17:17:43.374218 3460 topology_manager.go:215] "Topology Admit Handler" podUID="61153b2363dbe5b5fe661a8666cfcca1" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-219" Sep 4 17:17:43.385710 kubelet[3460]: E0904 17:17:43.385616 3460 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-22-219\" already exists" pod="kube-system/kube-scheduler-ip-172-31-22-219" Sep 4 17:17:43.474602 kubelet[3460]: I0904 17:17:43.474363 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " 
pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.474602 kubelet[3460]: I0904 17:17:43.474518 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.476280 kubelet[3460]: I0904 17:17:43.474888 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.476280 kubelet[3460]: I0904 17:17:43.475018 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-ca-certs\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:43.476280 kubelet[3460]: I0904 17:17:43.475247 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:43.477007 kubelet[3460]: I0904 17:17:43.476616 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/61153b2363dbe5b5fe661a8666cfcca1-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-22-219\" (UID: \"61153b2363dbe5b5fe661a8666cfcca1\") " pod="kube-system/kube-scheduler-ip-172-31-22-219" Sep 4 17:17:43.477007 kubelet[3460]: I0904 17:17:43.476735 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.477007 kubelet[3460]: I0904 17:17:43.476901 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1ee23fe9afb6fe760fb8d3448c7e5af3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-219\" (UID: \"1ee23fe9afb6fe760fb8d3448c7e5af3\") " pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:43.477007 kubelet[3460]: I0904 17:17:43.476956 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e43c5d82a015f77184a4633661c897e2-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-219\" (UID: \"e43c5d82a015f77184a4633661c897e2\") " pod="kube-system/kube-controller-manager-ip-172-31-22-219" Sep 4 17:17:43.929817 kubelet[3460]: I0904 17:17:43.928653 3460 apiserver.go:52] "Watching apiserver" Sep 4 17:17:43.971755 kubelet[3460]: I0904 17:17:43.971689 3460 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:44.220007 kubelet[3460]: E0904 17:17:44.219962 3460 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-219\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-219" Sep 4 17:17:44.268185 kubelet[3460]: I0904 17:17:44.268100 3460 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-scheduler-ip-172-31-22-219" podStartSLOduration=3.268029287 podStartE2EDuration="3.268029287s" podCreationTimestamp="2024-09-04 17:17:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:44.239613683 +0000 UTC m=+1.469913897" watchObservedRunningTime="2024-09-04 17:17:44.268029287 +0000 UTC m=+1.498329477" Sep 4 17:17:44.316550 kubelet[3460]: I0904 17:17:44.316476 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-219" podStartSLOduration=1.316418687 podStartE2EDuration="1.316418687s" podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:44.268573835 +0000 UTC m=+1.498874049" watchObservedRunningTime="2024-09-04 17:17:44.316418687 +0000 UTC m=+1.546718937" Sep 4 17:17:44.354101 kubelet[3460]: I0904 17:17:44.353803 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-219" podStartSLOduration=1.353724023 podStartE2EDuration="1.353724023s" podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:44.318455159 +0000 UTC m=+1.548755385" watchObservedRunningTime="2024-09-04 17:17:44.353724023 +0000 UTC m=+1.584024213" Sep 4 17:17:47.546631 sudo[2352]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:47.570127 sshd[2341]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:47.575739 systemd[1]: sshd@8-172.31.22.219:22-139.178.89.65:51278.service: Deactivated successfully. Sep 4 17:17:47.580338 systemd[1]: session-9.scope: Deactivated successfully. 
Sep 4 17:17:47.581326 systemd[1]: session-9.scope: Consumed 12.890s CPU time, 136.6M memory peak, 0B memory swap peak. Sep 4 17:17:47.585978 systemd-logind[1987]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:17:47.588210 systemd-logind[1987]: Removed session 9. Sep 4 17:17:55.166833 kubelet[3460]: I0904 17:17:55.166558 3460 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:17:55.167533 containerd[2004]: time="2024-09-04T17:17:55.167038149Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:17:55.169578 kubelet[3460]: I0904 17:17:55.167993 3460 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:17:55.188259 kubelet[3460]: I0904 17:17:55.188112 3460 topology_manager.go:215] "Topology Admit Handler" podUID="ba539674-2096-4391-a1f4-9ece4580e530" podNamespace="kube-system" podName="kube-proxy-8kttd" Sep 4 17:17:55.216901 systemd[1]: Created slice kubepods-besteffort-podba539674_2096_4391_a1f4_9ece4580e530.slice - libcontainer container kubepods-besteffort-podba539674_2096_4391_a1f4_9ece4580e530.slice. 
Sep 4 17:17:55.257535 kubelet[3460]: I0904 17:17:55.257235 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba539674-2096-4391-a1f4-9ece4580e530-lib-modules\") pod \"kube-proxy-8kttd\" (UID: \"ba539674-2096-4391-a1f4-9ece4580e530\") " pod="kube-system/kube-proxy-8kttd" Sep 4 17:17:55.257535 kubelet[3460]: I0904 17:17:55.257314 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba539674-2096-4391-a1f4-9ece4580e530-xtables-lock\") pod \"kube-proxy-8kttd\" (UID: \"ba539674-2096-4391-a1f4-9ece4580e530\") " pod="kube-system/kube-proxy-8kttd" Sep 4 17:17:55.257535 kubelet[3460]: I0904 17:17:55.257362 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67p8l\" (UniqueName: \"kubernetes.io/projected/ba539674-2096-4391-a1f4-9ece4580e530-kube-api-access-67p8l\") pod \"kube-proxy-8kttd\" (UID: \"ba539674-2096-4391-a1f4-9ece4580e530\") " pod="kube-system/kube-proxy-8kttd" Sep 4 17:17:55.257535 kubelet[3460]: I0904 17:17:55.257408 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ba539674-2096-4391-a1f4-9ece4580e530-kube-proxy\") pod \"kube-proxy-8kttd\" (UID: \"ba539674-2096-4391-a1f4-9ece4580e530\") " pod="kube-system/kube-proxy-8kttd" Sep 4 17:17:55.461508 kubelet[3460]: I0904 17:17:55.461342 3460 topology_manager.go:215] "Topology Admit Handler" podUID="f60621ec-d917-4d75-8397-d04f4c6d873d" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-74sv4" Sep 4 17:17:55.477477 systemd[1]: Created slice kubepods-besteffort-podf60621ec_d917_4d75_8397_d04f4c6d873d.slice - libcontainer container kubepods-besteffort-podf60621ec_d917_4d75_8397_d04f4c6d873d.slice. 
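Each `reconciler_common.go:258` entry above records one `VerifyControllerAttachedVolume` start, naming both the volume and the pod it belongs to. A sketch of recovering that (volume, pod) pair from such a line — `parse_volume_entry` is an ad-hoc helper, and the sample keeps the escaped quotes exactly as the journal renders them:

```python
import re

# One reconciler_common entry, abridged from the log above; the \" escapes
# are literal in the journal output, so the sample keeps them.
LINE = ('I0904 17:17:55.257235 3460 reconciler_common.go:258] '
        '"operationExecutor.VerifyControllerAttachedVolume started for volume '
        '\\"lib-modules\\" (UniqueName: \\"kubernetes.io/host-path/'
        'ba539674-2096-4391-a1f4-9ece4580e530-lib-modules\\") '
        'pod \\"kube-proxy-8kttd\\" " pod="kube-system/kube-proxy-8kttd"')

def parse_volume_entry(line):
    """Return (volume, pod) from a VerifyControllerAttachedVolume entry."""
    vol = re.search(r'for volume \\"([^"\\]+)\\"', line)
    pod = re.search(r'pod="([^"]+)"', line)
    if not (vol and pod):
        return None
    return (vol.group(1), pod.group(1))

print(parse_volume_entry(LINE))
```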
Sep 4 17:17:55.534894 containerd[2004]: time="2024-09-04T17:17:55.534754775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8kttd,Uid:ba539674-2096-4391-a1f4-9ece4580e530,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:55.559831 kubelet[3460]: I0904 17:17:55.559561 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gk85v\" (UniqueName: \"kubernetes.io/projected/f60621ec-d917-4d75-8397-d04f4c6d873d-kube-api-access-gk85v\") pod \"tigera-operator-5d56685c77-74sv4\" (UID: \"f60621ec-d917-4d75-8397-d04f4c6d873d\") " pod="tigera-operator/tigera-operator-5d56685c77-74sv4" Sep 4 17:17:55.559831 kubelet[3460]: I0904 17:17:55.559635 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f60621ec-d917-4d75-8397-d04f4c6d873d-var-lib-calico\") pod \"tigera-operator-5d56685c77-74sv4\" (UID: \"f60621ec-d917-4d75-8397-d04f4c6d873d\") " pod="tigera-operator/tigera-operator-5d56685c77-74sv4" Sep 4 17:17:55.572934 containerd[2004]: time="2024-09-04T17:17:55.572699999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:55.573102 containerd[2004]: time="2024-09-04T17:17:55.572968643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:55.573242 containerd[2004]: time="2024-09-04T17:17:55.573071063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:55.574514 containerd[2004]: time="2024-09-04T17:17:55.574297859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:55.620224 systemd[1]: Started cri-containerd-3d22977fe0ad59e16d80292e95f94bdb278c99471a9b44355a7f182a490aa533.scope - libcontainer container 3d22977fe0ad59e16d80292e95f94bdb278c99471a9b44355a7f182a490aa533. Sep 4 17:17:55.669515 containerd[2004]: time="2024-09-04T17:17:55.669363684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8kttd,Uid:ba539674-2096-4391-a1f4-9ece4580e530,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d22977fe0ad59e16d80292e95f94bdb278c99471a9b44355a7f182a490aa533\"" Sep 4 17:17:55.680056 containerd[2004]: time="2024-09-04T17:17:55.679900380Z" level=info msg="CreateContainer within sandbox \"3d22977fe0ad59e16d80292e95f94bdb278c99471a9b44355a7f182a490aa533\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:17:55.704630 containerd[2004]: time="2024-09-04T17:17:55.704551512Z" level=info msg="CreateContainer within sandbox \"3d22977fe0ad59e16d80292e95f94bdb278c99471a9b44355a7f182a490aa533\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c44576fe89a31db7711086502aab28a09ad6330053481543dbba232db1444555\"" Sep 4 17:17:55.706192 containerd[2004]: time="2024-09-04T17:17:55.706125708Z" level=info msg="StartContainer for \"c44576fe89a31db7711086502aab28a09ad6330053481543dbba232db1444555\"" Sep 4 17:17:55.756094 systemd[1]: Started cri-containerd-c44576fe89a31db7711086502aab28a09ad6330053481543dbba232db1444555.scope - libcontainer container c44576fe89a31db7711086502aab28a09ad6330053481543dbba232db1444555. 
Sep 4 17:17:55.788186 containerd[2004]: time="2024-09-04T17:17:55.787770420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-74sv4,Uid:f60621ec-d917-4d75-8397-d04f4c6d873d,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:17:55.816069 containerd[2004]: time="2024-09-04T17:17:55.815689896Z" level=info msg="StartContainer for \"c44576fe89a31db7711086502aab28a09ad6330053481543dbba232db1444555\" returns successfully" Sep 4 17:17:55.866758 containerd[2004]: time="2024-09-04T17:17:55.865896793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:55.866985 containerd[2004]: time="2024-09-04T17:17:55.866660593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:55.866985 containerd[2004]: time="2024-09-04T17:17:55.866696389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:55.867107 containerd[2004]: time="2024-09-04T17:17:55.867024361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:55.904118 systemd[1]: Started cri-containerd-9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca.scope - libcontainer container 9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca. 
Sep 4 17:17:55.997888 containerd[2004]: time="2024-09-04T17:17:55.997819933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-74sv4,Uid:f60621ec-d917-4d75-8397-d04f4c6d873d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca\"" Sep 4 17:17:56.005633 containerd[2004]: time="2024-09-04T17:17:56.004575201Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:17:57.211320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728065270.mount: Deactivated successfully. Sep 4 17:17:58.325400 containerd[2004]: time="2024-09-04T17:17:58.325321273Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:58.327119 containerd[2004]: time="2024-09-04T17:17:58.327044785Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485907" Sep 4 17:17:58.328927 containerd[2004]: time="2024-09-04T17:17:58.328640797Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:58.338207 containerd[2004]: time="2024-09-04T17:17:58.338122669Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:58.343174 containerd[2004]: time="2024-09-04T17:17:58.342582937Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.337936756s" Sep 4 17:17:58.343174 
containerd[2004]: time="2024-09-04T17:17:58.342645697Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:17:58.347918 containerd[2004]: time="2024-09-04T17:17:58.347677981Z" level=info msg="CreateContainer within sandbox \"9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:17:58.368400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117570280.mount: Deactivated successfully. Sep 4 17:17:58.372074 containerd[2004]: time="2024-09-04T17:17:58.371997541Z" level=info msg="CreateContainer within sandbox \"9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3\"" Sep 4 17:17:58.373674 containerd[2004]: time="2024-09-04T17:17:58.373597309Z" level=info msg="StartContainer for \"3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3\"" Sep 4 17:17:58.433113 systemd[1]: Started cri-containerd-3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3.scope - libcontainer container 3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3. 
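containerd reports pulling `quay.io/tigera/operator:v1.34.3` "in 2.337936756s", and that figure lines up with the gap between the `PullImage` and `Pulled` log timestamps above. A quick cross-check — `ts_seconds` is an ad-hoc helper, and the comparison ignores the date, which is the same for both entries:

```python
def ts_seconds(ts):
    """Seconds since midnight for an RFC 3339 timestamp like containerd's."""
    hh, mm, ss = ts.split("T")[1].rstrip("Z").split(":")
    return int(hh) * 3600 + int(mm) * 60 + float(ss)

start = ts_seconds("2024-09-04T17:17:56.004575201Z")  # "PullImage" logged
done = ts_seconds("2024-09-04T17:17:58.342582937Z")   # "Pulled ..." logged
# The log-emission gap lands within a millisecond of the reported 2.337936756s.
print(f"{done - start:.3f}s")
```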
Sep 4 17:17:58.479209 containerd[2004]: time="2024-09-04T17:17:58.479122430Z" level=info msg="StartContainer for \"3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3\" returns successfully" Sep 4 17:17:59.218678 kubelet[3460]: I0904 17:17:59.218037 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8kttd" podStartSLOduration=4.217978321 podStartE2EDuration="4.217978321s" podCreationTimestamp="2024-09-04 17:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:56.21152827 +0000 UTC m=+13.441828484" watchObservedRunningTime="2024-09-04 17:17:59.217978321 +0000 UTC m=+16.448278547" Sep 4 17:18:03.097635 kubelet[3460]: I0904 17:18:03.097565 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-74sv4" podStartSLOduration=5.755754916 podStartE2EDuration="8.097506604s" podCreationTimestamp="2024-09-04 17:17:55 +0000 UTC" firstStartedPulling="2024-09-04 17:17:56.001721937 +0000 UTC m=+13.232022139" lastFinishedPulling="2024-09-04 17:17:58.343473637 +0000 UTC m=+15.573773827" observedRunningTime="2024-09-04 17:17:59.218420881 +0000 UTC m=+16.448721083" watchObservedRunningTime="2024-09-04 17:18:03.097506604 +0000 UTC m=+20.327806782" Sep 4 17:18:03.711670 kubelet[3460]: I0904 17:18:03.709694 3460 topology_manager.go:215] "Topology Admit Handler" podUID="46fc87cd-7df0-4aca-9917-30f8aac4a31d" podNamespace="calico-system" podName="calico-typha-8646bcb45f-cmgk6" Sep 4 17:18:03.732969 kubelet[3460]: W0904 17:18:03.730565 3460 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-22-219" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.731014 
systemd[1]: Created slice kubepods-besteffort-pod46fc87cd_7df0_4aca_9917_30f8aac4a31d.slice - libcontainer container kubepods-besteffort-pod46fc87cd_7df0_4aca_9917_30f8aac4a31d.slice. Sep 4 17:18:03.735174 kubelet[3460]: W0904 17:18:03.730912 3460 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-22-219" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.735378 kubelet[3460]: E0904 17:18:03.735359 3460 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-22-219" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.735632 kubelet[3460]: W0904 17:18:03.731047 3460 reflector.go:539] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-22-219" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.735632 kubelet[3460]: E0904 17:18:03.735131 3460 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-22-219" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.735632 kubelet[3460]: E0904 17:18:03.735608 3460 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is 
forbidden: User "system:node:ip-172-31-22-219" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-22-219' and this object Sep 4 17:18:03.811835 kubelet[3460]: I0904 17:18:03.811133 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlcw5\" (UniqueName: \"kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5\") pod \"calico-typha-8646bcb45f-cmgk6\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " pod="calico-system/calico-typha-8646bcb45f-cmgk6" Sep 4 17:18:03.811835 kubelet[3460]: I0904 17:18:03.811215 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/46fc87cd-7df0-4aca-9917-30f8aac4a31d-typha-certs\") pod \"calico-typha-8646bcb45f-cmgk6\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " pod="calico-system/calico-typha-8646bcb45f-cmgk6" Sep 4 17:18:03.811835 kubelet[3460]: I0904 17:18:03.811265 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46fc87cd-7df0-4aca-9917-30f8aac4a31d-tigera-ca-bundle\") pod \"calico-typha-8646bcb45f-cmgk6\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " pod="calico-system/calico-typha-8646bcb45f-cmgk6" Sep 4 17:18:03.974070 kubelet[3460]: I0904 17:18:03.974002 3460 topology_manager.go:215] "Topology Admit Handler" podUID="2d318da0-2ff1-4ef6-a636-a6fe53db8259" podNamespace="calico-system" podName="calico-node-gt2ct" Sep 4 17:18:03.997743 systemd[1]: Created slice kubepods-besteffort-pod2d318da0_2ff1_4ef6_a636_a6fe53db8259.slice - libcontainer container kubepods-besteffort-pod2d318da0_2ff1_4ef6_a636_a6fe53db8259.slice. 
Sep 4 17:18:04.012459 kubelet[3460]: I0904 17:18:04.012358 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-xtables-lock\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.012459 kubelet[3460]: I0904 17:18:04.012433 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2d318da0-2ff1-4ef6-a636-a6fe53db8259-node-certs\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.012459 kubelet[3460]: I0904 17:18:04.012479 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-flexvol-driver-host\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014321 kubelet[3460]: I0904 17:18:04.013004 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-policysync\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014321 kubelet[3460]: I0904 17:18:04.013111 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-lib-modules\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014321 kubelet[3460]: I0904 17:18:04.013158 3460 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-bin-dir\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014321 kubelet[3460]: I0904 17:18:04.013238 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-log-dir\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014321 kubelet[3460]: I0904 17:18:04.013339 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-lib-calico\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014608 kubelet[3460]: I0904 17:18:04.013468 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d318da0-2ff1-4ef6-a636-a6fe53db8259-tigera-ca-bundle\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014608 kubelet[3460]: I0904 17:18:04.013526 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-run-calico\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014608 kubelet[3460]: I0904 17:18:04.013570 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-net-dir\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.014608 kubelet[3460]: I0904 17:18:04.013645 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6jwf\" (UniqueName: \"kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf\") pod \"calico-node-gt2ct\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " pod="calico-system/calico-node-gt2ct" Sep 4 17:18:04.093217 kubelet[3460]: I0904 17:18:04.093160 3460 topology_manager.go:215] "Topology Admit Handler" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" podNamespace="calico-system" podName="csi-node-driver-j8t28" Sep 4 17:18:04.093665 kubelet[3460]: E0904 17:18:04.093618 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:04.115824 kubelet[3460]: I0904 17:18:04.115111 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88b22ba2-b48a-4946-b09c-791f6c8e7d48-registration-dir\") pod \"csi-node-driver-j8t28\" (UID: \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\") " pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:04.115824 kubelet[3460]: I0904 17:18:04.115252 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88b22ba2-b48a-4946-b09c-791f6c8e7d48-socket-dir\") pod \"csi-node-driver-j8t28\" (UID: \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\") " 
pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:04.115824 kubelet[3460]: I0904 17:18:04.115409 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/88b22ba2-b48a-4946-b09c-791f6c8e7d48-varrun\") pod \"csi-node-driver-j8t28\" (UID: \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\") " pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:04.115824 kubelet[3460]: I0904 17:18:04.115481 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4vj\" (UniqueName: \"kubernetes.io/projected/88b22ba2-b48a-4946-b09c-791f6c8e7d48-kube-api-access-ht4vj\") pod \"csi-node-driver-j8t28\" (UID: \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\") " pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:04.115824 kubelet[3460]: I0904 17:18:04.115609 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88b22ba2-b48a-4946-b09c-791f6c8e7d48-kubelet-dir\") pod \"csi-node-driver-j8t28\" (UID: \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\") " pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:04.123179 kubelet[3460]: E0904 17:18:04.122945 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.123179 kubelet[3460]: W0904 17:18:04.122986 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.123179 kubelet[3460]: E0904 17:18:04.123024 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.128198 kubelet[3460]: E0904 17:18:04.127917 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.128198 kubelet[3460]: W0904 17:18:04.127951 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.128198 kubelet[3460]: E0904 17:18:04.127990 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.129175 kubelet[3460]: E0904 17:18:04.128573 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.129175 kubelet[3460]: W0904 17:18:04.128600 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.129175 kubelet[3460]: E0904 17:18:04.128638 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.134182 kubelet[3460]: E0904 17:18:04.134135 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.135127 kubelet[3460]: W0904 17:18:04.134364 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.135127 kubelet[3460]: E0904 17:18:04.134405 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.140035 kubelet[3460]: E0904 17:18:04.139995 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.140274 kubelet[3460]: W0904 17:18:04.140246 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.140536 kubelet[3460]: E0904 17:18:04.140370 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.142136 kubelet[3460]: E0904 17:18:04.142100 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.142501 kubelet[3460]: W0904 17:18:04.142279 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.142501 kubelet[3460]: E0904 17:18:04.142326 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.144587 kubelet[3460]: E0904 17:18:04.144309 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.144587 kubelet[3460]: W0904 17:18:04.144341 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.144587 kubelet[3460]: E0904 17:18:04.144394 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.145439 kubelet[3460]: E0904 17:18:04.145255 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.145439 kubelet[3460]: W0904 17:18:04.145284 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.145439 kubelet[3460]: E0904 17:18:04.145319 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.146247 kubelet[3460]: E0904 17:18:04.146036 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.146247 kubelet[3460]: W0904 17:18:04.146061 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.146247 kubelet[3460]: E0904 17:18:04.146092 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.146943 kubelet[3460]: E0904 17:18:04.146683 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.146943 kubelet[3460]: W0904 17:18:04.146706 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.146943 kubelet[3460]: E0904 17:18:04.146739 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.148686 kubelet[3460]: E0904 17:18:04.148489 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.148686 kubelet[3460]: W0904 17:18:04.148535 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.148686 kubelet[3460]: E0904 17:18:04.148588 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.218412 kubelet[3460]: E0904 17:18:04.218115 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.218412 kubelet[3460]: W0904 17:18:04.218161 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.218412 kubelet[3460]: E0904 17:18:04.218198 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.218859 kubelet[3460]: E0904 17:18:04.218774 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.219225 kubelet[3460]: W0904 17:18:04.218982 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.219225 kubelet[3460]: E0904 17:18:04.219025 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.220151 kubelet[3460]: E0904 17:18:04.219676 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.220151 kubelet[3460]: W0904 17:18:04.219705 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.220151 kubelet[3460]: E0904 17:18:04.219751 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.221660 kubelet[3460]: E0904 17:18:04.221504 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.221660 kubelet[3460]: W0904 17:18:04.221538 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.221908 kubelet[3460]: E0904 17:18:04.221845 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.223375 kubelet[3460]: E0904 17:18:04.223096 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.223375 kubelet[3460]: W0904 17:18:04.223129 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.223375 kubelet[3460]: E0904 17:18:04.223205 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.224730 kubelet[3460]: E0904 17:18:04.224459 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.224730 kubelet[3460]: W0904 17:18:04.224491 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.224730 kubelet[3460]: E0904 17:18:04.224659 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.228765 kubelet[3460]: E0904 17:18:04.228377 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.228765 kubelet[3460]: W0904 17:18:04.228436 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.229055 kubelet[3460]: E0904 17:18:04.228846 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.232027 kubelet[3460]: E0904 17:18:04.231584 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.232027 kubelet[3460]: W0904 17:18:04.231620 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.233076 kubelet[3460]: E0904 17:18:04.232684 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.233867 kubelet[3460]: E0904 17:18:04.233605 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.233867 kubelet[3460]: W0904 17:18:04.233637 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.234353 kubelet[3460]: E0904 17:18:04.233875 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.235519 kubelet[3460]: E0904 17:18:04.235193 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.235519 kubelet[3460]: W0904 17:18:04.235225 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.235519 kubelet[3460]: E0904 17:18:04.235512 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.236868 kubelet[3460]: E0904 17:18:04.236392 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.236868 kubelet[3460]: W0904 17:18:04.236450 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.236868 kubelet[3460]: E0904 17:18:04.236754 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.238176 kubelet[3460]: E0904 17:18:04.237714 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.238176 kubelet[3460]: W0904 17:18:04.237820 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.238176 kubelet[3460]: E0904 17:18:04.237901 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.239305 kubelet[3460]: E0904 17:18:04.238936 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.239305 kubelet[3460]: W0904 17:18:04.238966 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.239305 kubelet[3460]: E0904 17:18:04.239163 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.240362 kubelet[3460]: E0904 17:18:04.239929 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.240362 kubelet[3460]: W0904 17:18:04.239959 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.240826 kubelet[3460]: E0904 17:18:04.240651 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.241475 kubelet[3460]: E0904 17:18:04.241338 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.241475 kubelet[3460]: W0904 17:18:04.241366 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.242311 kubelet[3460]: E0904 17:18:04.242237 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.242311 kubelet[3460]: W0904 17:18:04.242273 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.242711 kubelet[3460]: E0904 17:18:04.242257 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.242711 kubelet[3460]: E0904 17:18:04.242653 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.243531 kubelet[3460]: E0904 17:18:04.243493 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.243531 kubelet[3460]: W0904 17:18:04.243527 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.243892 kubelet[3460]: E0904 17:18:04.243691 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.246172 kubelet[3460]: E0904 17:18:04.246118 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.246172 kubelet[3460]: W0904 17:18:04.246157 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.246493 kubelet[3460]: E0904 17:18:04.246304 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.246880 kubelet[3460]: E0904 17:18:04.246512 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.246880 kubelet[3460]: W0904 17:18:04.246529 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.246880 kubelet[3460]: E0904 17:18:04.246566 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.248569 kubelet[3460]: E0904 17:18:04.248272 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.248569 kubelet[3460]: W0904 17:18:04.248304 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.248569 kubelet[3460]: E0904 17:18:04.248355 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.249193 kubelet[3460]: E0904 17:18:04.248999 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.249193 kubelet[3460]: W0904 17:18:04.249024 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.249193 kubelet[3460]: E0904 17:18:04.249090 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.249843 kubelet[3460]: E0904 17:18:04.249720 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.249843 kubelet[3460]: W0904 17:18:04.249745 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.249843 kubelet[3460]: E0904 17:18:04.249840 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.250700 kubelet[3460]: E0904 17:18:04.250482 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.250700 kubelet[3460]: W0904 17:18:04.250508 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.250700 kubelet[3460]: E0904 17:18:04.250574 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.251586 kubelet[3460]: E0904 17:18:04.251367 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.251586 kubelet[3460]: W0904 17:18:04.251425 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.251586 kubelet[3460]: E0904 17:18:04.251499 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.252836 kubelet[3460]: E0904 17:18:04.252656 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.252836 kubelet[3460]: W0904 17:18:04.252758 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.253253 kubelet[3460]: E0904 17:18:04.252943 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.254236 kubelet[3460]: E0904 17:18:04.254189 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.254236 kubelet[3460]: W0904 17:18:04.254226 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.254236 kubelet[3460]: E0904 17:18:04.254277 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.256407 kubelet[3460]: E0904 17:18:04.256363 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.256407 kubelet[3460]: W0904 17:18:04.256401 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.256727 kubelet[3460]: E0904 17:18:04.256550 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.257360 kubelet[3460]: E0904 17:18:04.257320 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.257360 kubelet[3460]: W0904 17:18:04.257353 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.257517 kubelet[3460]: E0904 17:18:04.257401 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.258027 kubelet[3460]: E0904 17:18:04.257990 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.258027 kubelet[3460]: W0904 17:18:04.258020 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.258575 kubelet[3460]: E0904 17:18:04.258171 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.260004 kubelet[3460]: E0904 17:18:04.259914 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.260004 kubelet[3460]: W0904 17:18:04.259959 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.260004 kubelet[3460]: E0904 17:18:04.259997 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.350158 kubelet[3460]: E0904 17:18:04.350095 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.350158 kubelet[3460]: W0904 17:18:04.350140 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.350378 kubelet[3460]: E0904 17:18:04.350181 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.350855 kubelet[3460]: E0904 17:18:04.350767 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.350855 kubelet[3460]: W0904 17:18:04.350842 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.351072 kubelet[3460]: E0904 17:18:04.350882 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.351549 kubelet[3460]: E0904 17:18:04.351493 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.351549 kubelet[3460]: W0904 17:18:04.351532 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.351736 kubelet[3460]: E0904 17:18:04.351568 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.352381 kubelet[3460]: E0904 17:18:04.352326 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.352381 kubelet[3460]: W0904 17:18:04.352365 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.352653 kubelet[3460]: E0904 17:18:04.352397 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.353919 kubelet[3460]: E0904 17:18:04.353866 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.353919 kubelet[3460]: W0904 17:18:04.353918 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.354168 kubelet[3460]: E0904 17:18:04.353968 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.354649 kubelet[3460]: E0904 17:18:04.354590 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.354649 kubelet[3460]: W0904 17:18:04.354631 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.354946 kubelet[3460]: E0904 17:18:04.354669 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.455585 kubelet[3460]: E0904 17:18:04.455543 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.455585 kubelet[3460]: W0904 17:18:04.455578 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.455882 kubelet[3460]: E0904 17:18:04.455618 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.457137 kubelet[3460]: E0904 17:18:04.456881 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.457137 kubelet[3460]: W0904 17:18:04.456915 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.457137 kubelet[3460]: E0904 17:18:04.456945 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.457537 kubelet[3460]: E0904 17:18:04.457518 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.457735 kubelet[3460]: W0904 17:18:04.457616 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.457735 kubelet[3460]: E0904 17:18:04.457648 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.458364 kubelet[3460]: E0904 17:18:04.458164 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.458364 kubelet[3460]: W0904 17:18:04.458229 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.458364 kubelet[3460]: E0904 17:18:04.458261 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.458975 kubelet[3460]: E0904 17:18:04.458770 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.458975 kubelet[3460]: W0904 17:18:04.458817 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.458975 kubelet[3460]: E0904 17:18:04.458843 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.459411 kubelet[3460]: E0904 17:18:04.459301 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.459411 kubelet[3460]: W0904 17:18:04.459322 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.459411 kubelet[3460]: E0904 17:18:04.459346 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.560910 kubelet[3460]: E0904 17:18:04.560639 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.560910 kubelet[3460]: W0904 17:18:04.560670 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.560910 kubelet[3460]: E0904 17:18:04.560702 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.561740 kubelet[3460]: E0904 17:18:04.561448 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.561740 kubelet[3460]: W0904 17:18:04.561471 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.561740 kubelet[3460]: E0904 17:18:04.561498 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.562188 kubelet[3460]: E0904 17:18:04.562014 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.562188 kubelet[3460]: W0904 17:18:04.562034 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.562188 kubelet[3460]: E0904 17:18:04.562058 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.562823 kubelet[3460]: E0904 17:18:04.562567 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.562823 kubelet[3460]: W0904 17:18:04.562586 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.562823 kubelet[3460]: E0904 17:18:04.562657 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.563446 kubelet[3460]: E0904 17:18:04.563244 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.563446 kubelet[3460]: W0904 17:18:04.563265 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.563446 kubelet[3460]: E0904 17:18:04.563295 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.564309 kubelet[3460]: E0904 17:18:04.563874 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.564309 kubelet[3460]: W0904 17:18:04.563900 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.564309 kubelet[3460]: E0904 17:18:04.563928 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.595822 kubelet[3460]: E0904 17:18:04.595641 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.595822 kubelet[3460]: W0904 17:18:04.595673 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.595822 kubelet[3460]: E0904 17:18:04.595707 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.599963 kubelet[3460]: E0904 17:18:04.599928 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.600322 kubelet[3460]: W0904 17:18:04.600220 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.600322 kubelet[3460]: E0904 17:18:04.600267 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.665940 kubelet[3460]: E0904 17:18:04.665741 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.665940 kubelet[3460]: W0904 17:18:04.665773 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.665940 kubelet[3460]: E0904 17:18:04.665838 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.666711 kubelet[3460]: E0904 17:18:04.666488 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.666711 kubelet[3460]: W0904 17:18:04.666509 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.666711 kubelet[3460]: E0904 17:18:04.666534 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.667258 kubelet[3460]: E0904 17:18:04.667123 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.667258 kubelet[3460]: W0904 17:18:04.667145 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.667258 kubelet[3460]: E0904 17:18:04.667169 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.667764 kubelet[3460]: E0904 17:18:04.667655 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.667764 kubelet[3460]: W0904 17:18:04.667674 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.667764 kubelet[3460]: E0904 17:18:04.667697 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.706879 kubelet[3460]: E0904 17:18:04.706840 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.707122 kubelet[3460]: W0904 17:18:04.707024 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.707122 kubelet[3460]: E0904 17:18:04.707066 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.769132 kubelet[3460]: E0904 17:18:04.768957 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.769132 kubelet[3460]: W0904 17:18:04.768992 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.769132 kubelet[3460]: E0904 17:18:04.769027 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.769806 kubelet[3460]: E0904 17:18:04.769649 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.769806 kubelet[3460]: W0904 17:18:04.769670 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.769806 kubelet[3460]: E0904 17:18:04.769696 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.770503 kubelet[3460]: E0904 17:18:04.770394 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.770503 kubelet[3460]: W0904 17:18:04.770418 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.770503 kubelet[3460]: E0904 17:18:04.770446 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.871686 kubelet[3460]: E0904 17:18:04.871565 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.873515 kubelet[3460]: W0904 17:18:04.873219 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.873515 kubelet[3460]: E0904 17:18:04.873268 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.874213 kubelet[3460]: E0904 17:18:04.873972 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.874213 kubelet[3460]: W0904 17:18:04.873997 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.874213 kubelet[3460]: E0904 17:18:04.874026 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.874757 kubelet[3460]: E0904 17:18:04.874644 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.874757 kubelet[3460]: W0904 17:18:04.874671 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.874757 kubelet[3460]: E0904 17:18:04.874705 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.924887 kubelet[3460]: E0904 17:18:04.924623 3460 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:04.924887 kubelet[3460]: E0904 17:18:04.924681 3460 projected.go:200] Error preparing data for projected volume kube-api-access-hlcw5 for pod calico-system/calico-typha-8646bcb45f-cmgk6: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:04.925517 kubelet[3460]: E0904 17:18:04.925165 3460 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5 podName:46fc87cd-7df0-4aca-9917-30f8aac4a31d nodeName:}" failed. No retries permitted until 2024-09-04 17:18:05.424763786 +0000 UTC m=+22.655063964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hlcw5" (UniqueName: "kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5") pod "calico-typha-8646bcb45f-cmgk6" (UID: "46fc87cd-7df0-4aca-9917-30f8aac4a31d") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:04.975843 kubelet[3460]: E0904 17:18:04.975687 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.975843 kubelet[3460]: W0904 17:18:04.975718 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.976266 kubelet[3460]: E0904 17:18:04.975770 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:04.976688 kubelet[3460]: E0904 17:18:04.976537 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.976688 kubelet[3460]: W0904 17:18:04.976558 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.976688 kubelet[3460]: E0904 17:18:04.976583 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:04.977200 kubelet[3460]: E0904 17:18:04.977180 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:04.977366 kubelet[3460]: W0904 17:18:04.977280 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:04.977366 kubelet[3460]: E0904 17:18:04.977311 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.078768 kubelet[3460]: E0904 17:18:05.078609 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.078768 kubelet[3460]: W0904 17:18:05.078639 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.078768 kubelet[3460]: E0904 17:18:05.078671 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.079555 kubelet[3460]: E0904 17:18:05.079356 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.079555 kubelet[3460]: W0904 17:18:05.079378 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.079555 kubelet[3460]: E0904 17:18:05.079406 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.080063 kubelet[3460]: E0904 17:18:05.079951 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.080063 kubelet[3460]: W0904 17:18:05.079971 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.080063 kubelet[3460]: E0904 17:18:05.079996 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.162549 kubelet[3460]: E0904 17:18:05.162250 3460 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.162549 kubelet[3460]: E0904 17:18:05.162312 3460 projected.go:200] Error preparing data for projected volume kube-api-access-q6jwf for pod calico-system/calico-node-gt2ct: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.162549 kubelet[3460]: E0904 17:18:05.162393 3460 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf podName:2d318da0-2ff1-4ef6-a636-a6fe53db8259 nodeName:}" failed. No retries permitted until 2024-09-04 17:18:05.662365215 +0000 UTC m=+22.892665393 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-q6jwf" (UniqueName: "kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf") pod "calico-node-gt2ct" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.181311 kubelet[3460]: E0904 17:18:05.181186 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.181311 kubelet[3460]: W0904 17:18:05.181237 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.181311 kubelet[3460]: E0904 17:18:05.181271 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.182379 kubelet[3460]: E0904 17:18:05.182081 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.182379 kubelet[3460]: W0904 17:18:05.182103 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.182379 kubelet[3460]: E0904 17:18:05.182129 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.183014 kubelet[3460]: E0904 17:18:05.182836 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.183014 kubelet[3460]: W0904 17:18:05.182858 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.183014 kubelet[3460]: E0904 17:18:05.182913 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.274350 kubelet[3460]: E0904 17:18:05.273833 3460 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.274350 kubelet[3460]: E0904 17:18:05.273898 3460 projected.go:200] Error preparing data for projected volume kube-api-access-ht4vj for pod calico-system/csi-node-driver-j8t28: failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.274350 kubelet[3460]: E0904 17:18:05.274001 3460 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/88b22ba2-b48a-4946-b09c-791f6c8e7d48-kube-api-access-ht4vj podName:88b22ba2-b48a-4946-b09c-791f6c8e7d48 nodeName:}" failed. No retries permitted until 2024-09-04 17:18:05.773973087 +0000 UTC m=+23.004273277 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ht4vj" (UniqueName: "kubernetes.io/projected/88b22ba2-b48a-4946-b09c-791f6c8e7d48-kube-api-access-ht4vj") pod "csi-node-driver-j8t28" (UID: "88b22ba2-b48a-4946-b09c-791f6c8e7d48") : failed to sync configmap cache: timed out waiting for the condition Sep 4 17:18:05.284183 kubelet[3460]: E0904 17:18:05.284050 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.284183 kubelet[3460]: W0904 17:18:05.284103 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.284676 kubelet[3460]: E0904 17:18:05.284155 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.285188 kubelet[3460]: E0904 17:18:05.285115 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.285188 kubelet[3460]: W0904 17:18:05.285139 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.285526 kubelet[3460]: E0904 17:18:05.285168 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.286182 kubelet[3460]: E0904 17:18:05.286028 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.286182 kubelet[3460]: W0904 17:18:05.286052 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.286182 kubelet[3460]: E0904 17:18:05.286101 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.387666 kubelet[3460]: E0904 17:18:05.387473 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.387666 kubelet[3460]: W0904 17:18:05.387506 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.387666 kubelet[3460]: E0904 17:18:05.387539 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.388522 kubelet[3460]: E0904 17:18:05.388299 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.388522 kubelet[3460]: W0904 17:18:05.388322 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.388522 kubelet[3460]: E0904 17:18:05.388350 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.389070 kubelet[3460]: E0904 17:18:05.388966 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.389070 kubelet[3460]: W0904 17:18:05.388995 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.389070 kubelet[3460]: E0904 17:18:05.389022 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.490674 kubelet[3460]: E0904 17:18:05.490629 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.490674 kubelet[3460]: W0904 17:18:05.490665 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.490896 kubelet[3460]: E0904 17:18:05.490862 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.491467 kubelet[3460]: E0904 17:18:05.491351 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.491551 kubelet[3460]: W0904 17:18:05.491472 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.491551 kubelet[3460]: E0904 17:18:05.491504 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.492250 kubelet[3460]: E0904 17:18:05.492214 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.492374 kubelet[3460]: W0904 17:18:05.492264 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.492374 kubelet[3460]: E0904 17:18:05.492295 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.493012 kubelet[3460]: E0904 17:18:05.492960 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.493012 kubelet[3460]: W0904 17:18:05.493008 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.493147 kubelet[3460]: E0904 17:18:05.493037 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.493456 kubelet[3460]: E0904 17:18:05.493419 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.493456 kubelet[3460]: W0904 17:18:05.493443 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.493872 kubelet[3460]: E0904 17:18:05.493470 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.493959 kubelet[3460]: E0904 17:18:05.493900 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.493959 kubelet[3460]: W0904 17:18:05.493918 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.493959 kubelet[3460]: E0904 17:18:05.493943 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.494342 kubelet[3460]: E0904 17:18:05.494305 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.494342 kubelet[3460]: W0904 17:18:05.494324 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.494441 kubelet[3460]: E0904 17:18:05.494348 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.507605 kubelet[3460]: E0904 17:18:05.506344 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.507605 kubelet[3460]: W0904 17:18:05.506380 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.507605 kubelet[3460]: E0904 17:18:05.506419 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.539668 containerd[2004]: time="2024-09-04T17:18:05.539463249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8646bcb45f-cmgk6,Uid:46fc87cd-7df0-4aca-9917-30f8aac4a31d,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:05.595261 kubelet[3460]: E0904 17:18:05.595205 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.595261 kubelet[3460]: W0904 17:18:05.595244 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.595490 kubelet[3460]: E0904 17:18:05.595285 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.597407 kubelet[3460]: E0904 17:18:05.597237 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.597407 kubelet[3460]: W0904 17:18:05.597397 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.597610 kubelet[3460]: E0904 17:18:05.597574 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.609915 containerd[2004]: time="2024-09-04T17:18:05.607053789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:05.609915 containerd[2004]: time="2024-09-04T17:18:05.607157745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:05.609915 containerd[2004]: time="2024-09-04T17:18:05.607193949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:05.609915 containerd[2004]: time="2024-09-04T17:18:05.607348641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:05.667280 systemd[1]: Started cri-containerd-829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46.scope - libcontainer container 829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46. 
Sep 4 17:18:05.700834 kubelet[3460]: E0904 17:18:05.699865 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.700834 kubelet[3460]: W0904 17:18:05.699922 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.700834 kubelet[3460]: E0904 17:18:05.699963 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.703034 kubelet[3460]: E0904 17:18:05.701610 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.703034 kubelet[3460]: W0904 17:18:05.701640 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.703034 kubelet[3460]: E0904 17:18:05.702892 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.703995 kubelet[3460]: E0904 17:18:05.703775 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.703995 kubelet[3460]: W0904 17:18:05.703853 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.703995 kubelet[3460]: E0904 17:18:05.703888 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.705278 kubelet[3460]: E0904 17:18:05.705252 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.705388 kubelet[3460]: W0904 17:18:05.705364 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.706755 kubelet[3460]: E0904 17:18:05.705575 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.707672 kubelet[3460]: E0904 17:18:05.707472 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.707672 kubelet[3460]: W0904 17:18:05.707503 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.707672 kubelet[3460]: E0904 17:18:05.707539 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.711059 kubelet[3460]: E0904 17:18:05.710096 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.711059 kubelet[3460]: W0904 17:18:05.710131 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.711059 kubelet[3460]: E0904 17:18:05.710170 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.740629 kubelet[3460]: E0904 17:18:05.740506 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.740629 kubelet[3460]: W0904 17:18:05.740537 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.740629 kubelet[3460]: E0904 17:18:05.740573 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.805575 kubelet[3460]: E0904 17:18:05.805224 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.805575 kubelet[3460]: W0904 17:18:05.805258 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.805575 kubelet[3460]: E0904 17:18:05.805299 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.808264 kubelet[3460]: E0904 17:18:05.808211 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.808264 kubelet[3460]: W0904 17:18:05.808247 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.809122 kubelet[3460]: E0904 17:18:05.808303 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.809671 containerd[2004]: time="2024-09-04T17:18:05.809422738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8646bcb45f-cmgk6,Uid:46fc87cd-7df0-4aca-9917-30f8aac4a31d,Namespace:calico-system,Attempt:0,} returns sandbox id \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\"" Sep 4 17:18:05.812515 kubelet[3460]: E0904 17:18:05.811725 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.812515 kubelet[3460]: W0904 17:18:05.811775 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.812515 kubelet[3460]: E0904 17:18:05.811842 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.814333 containerd[2004]: time="2024-09-04T17:18:05.814127890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gt2ct,Uid:2d318da0-2ff1-4ef6-a636-a6fe53db8259,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:05.815833 kubelet[3460]: E0904 17:18:05.815497 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.815833 kubelet[3460]: W0904 17:18:05.815559 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.815833 kubelet[3460]: E0904 17:18:05.815598 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:05.817923 kubelet[3460]: E0904 17:18:05.817149 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.817923 kubelet[3460]: W0904 17:18:05.817187 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.817923 kubelet[3460]: E0904 17:18:05.817224 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.829641 containerd[2004]: time="2024-09-04T17:18:05.829585666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:18:05.832858 kubelet[3460]: E0904 17:18:05.832637 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:05.832858 kubelet[3460]: W0904 17:18:05.832678 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:05.832858 kubelet[3460]: E0904 17:18:05.832718 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:05.882682 containerd[2004]: time="2024-09-04T17:18:05.882510106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:05.882682 containerd[2004]: time="2024-09-04T17:18:05.882620770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:05.882963 containerd[2004]: time="2024-09-04T17:18:05.882660202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:05.884177 containerd[2004]: time="2024-09-04T17:18:05.883886110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:05.922091 systemd[1]: Started cri-containerd-6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda.scope - libcontainer container 6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda. Sep 4 17:18:05.990841 containerd[2004]: time="2024-09-04T17:18:05.990612839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gt2ct,Uid:2d318da0-2ff1-4ef6-a636-a6fe53db8259,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\"" Sep 4 17:18:06.072267 kubelet[3460]: E0904 17:18:06.071751 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:08.071916 kubelet[3460]: E0904 17:18:08.071299 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:08.173724 containerd[2004]: time="2024-09-04T17:18:08.172695202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 
17:18:08.176407 containerd[2004]: time="2024-09-04T17:18:08.176334874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:18:08.177547 containerd[2004]: time="2024-09-04T17:18:08.177491002Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:08.184592 containerd[2004]: time="2024-09-04T17:18:08.184498666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:08.190205 containerd[2004]: time="2024-09-04T17:18:08.190105678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.360048736s" Sep 4 17:18:08.190205 containerd[2004]: time="2024-09-04T17:18:08.190197370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:18:08.193984 containerd[2004]: time="2024-09-04T17:18:08.193913962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:18:08.217524 containerd[2004]: time="2024-09-04T17:18:08.217459270Z" level=info msg="CreateContainer within sandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:18:08.256712 containerd[2004]: time="2024-09-04T17:18:08.256567294Z" level=info msg="CreateContainer within sandbox 
\"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\"" Sep 4 17:18:08.260416 containerd[2004]: time="2024-09-04T17:18:08.258698962Z" level=info msg="StartContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\"" Sep 4 17:18:08.325092 systemd[1]: Started cri-containerd-702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431.scope - libcontainer container 702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431. Sep 4 17:18:08.438294 containerd[2004]: time="2024-09-04T17:18:08.438186191Z" level=info msg="StartContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" returns successfully" Sep 4 17:18:09.261714 containerd[2004]: time="2024-09-04T17:18:09.261546899Z" level=info msg="StopContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" with timeout 300 (s)" Sep 4 17:18:09.263724 containerd[2004]: time="2024-09-04T17:18:09.263456819Z" level=info msg="Stop container \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" with signal terminated" Sep 4 17:18:09.303253 kubelet[3460]: I0904 17:18:09.303118 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8646bcb45f-cmgk6" podStartSLOduration=3.935841323 podStartE2EDuration="6.303015827s" podCreationTimestamp="2024-09-04 17:18:03 +0000 UTC" firstStartedPulling="2024-09-04 17:18:05.82396129 +0000 UTC m=+23.054261480" lastFinishedPulling="2024-09-04 17:18:08.19113571 +0000 UTC m=+25.421435984" observedRunningTime="2024-09-04 17:18:09.301204367 +0000 UTC m=+26.531504569" watchObservedRunningTime="2024-09-04 17:18:09.303015827 +0000 UTC m=+26.533316029" Sep 4 17:18:09.325890 systemd[1]: cri-containerd-702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431.scope: Deactivated successfully. 
Sep 4 17:18:09.416341 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431-rootfs.mount: Deactivated successfully. Sep 4 17:18:09.662476 containerd[2004]: time="2024-09-04T17:18:09.661842913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.676843 containerd[2004]: time="2024-09-04T17:18:09.676753789Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:18:09.691859 containerd[2004]: time="2024-09-04T17:18:09.691266577Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.708050 containerd[2004]: time="2024-09-04T17:18:09.707991961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.711192 containerd[2004]: time="2024-09-04T17:18:09.709709557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.515722203s" Sep 4 17:18:09.711192 containerd[2004]: time="2024-09-04T17:18:09.711038941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:18:09.716484 containerd[2004]: time="2024-09-04T17:18:09.716413717Z" 
level=info msg="CreateContainer within sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:18:09.791180 containerd[2004]: time="2024-09-04T17:18:09.790957754Z" level=info msg="shim disconnected" id=702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431 namespace=k8s.io Sep 4 17:18:09.791180 containerd[2004]: time="2024-09-04T17:18:09.791034818Z" level=warning msg="cleaning up after shim disconnected" id=702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431 namespace=k8s.io Sep 4 17:18:09.791180 containerd[2004]: time="2024-09-04T17:18:09.791054690Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:09.793850 containerd[2004]: time="2024-09-04T17:18:09.792729062Z" level=info msg="CreateContainer within sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\"" Sep 4 17:18:09.797925 containerd[2004]: time="2024-09-04T17:18:09.797676866Z" level=info msg="StartContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\"" Sep 4 17:18:09.858537 containerd[2004]: time="2024-09-04T17:18:09.858379298Z" level=info msg="StopContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" returns successfully" Sep 4 17:18:09.872373 containerd[2004]: time="2024-09-04T17:18:09.871366814Z" level=info msg="StopPodSandbox for \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\"" Sep 4 17:18:09.872533 containerd[2004]: time="2024-09-04T17:18:09.872385554Z" level=info msg="Container to stop \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:18:09.883073 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46-shm.mount: Deactivated successfully. Sep 4 17:18:09.898518 systemd[1]: Started cri-containerd-9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8.scope - libcontainer container 9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8. Sep 4 17:18:09.906084 systemd[1]: cri-containerd-829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46.scope: Deactivated successfully. Sep 4 17:18:10.012411 containerd[2004]: time="2024-09-04T17:18:10.011840231Z" level=info msg="shim disconnected" id=829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46 namespace=k8s.io Sep 4 17:18:10.014835 containerd[2004]: time="2024-09-04T17:18:10.012933947Z" level=warning msg="cleaning up after shim disconnected" id=829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46 namespace=k8s.io Sep 4 17:18:10.014835 containerd[2004]: time="2024-09-04T17:18:10.013287383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:10.049119 containerd[2004]: time="2024-09-04T17:18:10.049047875Z" level=info msg="StartContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" returns successfully" Sep 4 17:18:10.072492 kubelet[3460]: E0904 17:18:10.072433 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:10.083688 containerd[2004]: time="2024-09-04T17:18:10.083618375Z" level=info msg="TearDown network for sandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" successfully" Sep 4 17:18:10.083979 containerd[2004]: time="2024-09-04T17:18:10.083930783Z" level=info msg="StopPodSandbox for 
\"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" returns successfully" Sep 4 17:18:10.138868 kubelet[3460]: I0904 17:18:10.138680 3460 topology_manager.go:215] "Topology Admit Handler" podUID="92ded563-c73b-44b6-ba9c-77a581a92349" podNamespace="calico-system" podName="calico-typha-58864b494c-ng8gr" Sep 4 17:18:10.139289 kubelet[3460]: E0904 17:18:10.139259 3460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="46fc87cd-7df0-4aca-9917-30f8aac4a31d" containerName="calico-typha" Sep 4 17:18:10.139827 kubelet[3460]: I0904 17:18:10.139472 3460 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fc87cd-7df0-4aca-9917-30f8aac4a31d" containerName="calico-typha" Sep 4 17:18:10.142264 kubelet[3460]: E0904 17:18:10.142077 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.143159 kubelet[3460]: W0904 17:18:10.142924 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.143159 kubelet[3460]: E0904 17:18:10.142977 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.146103 kubelet[3460]: E0904 17:18:10.145846 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.146103 kubelet[3460]: W0904 17:18:10.145888 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.146103 kubelet[3460]: E0904 17:18:10.145925 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.148318 kubelet[3460]: E0904 17:18:10.147968 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.148318 kubelet[3460]: W0904 17:18:10.148021 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.148318 kubelet[3460]: E0904 17:18:10.148062 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.150330 kubelet[3460]: E0904 17:18:10.150018 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.150330 kubelet[3460]: W0904 17:18:10.150086 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.150330 kubelet[3460]: E0904 17:18:10.150124 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.154814 kubelet[3460]: E0904 17:18:10.154109 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.154814 kubelet[3460]: W0904 17:18:10.154142 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.154814 kubelet[3460]: E0904 17:18:10.154176 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.157367 kubelet[3460]: E0904 17:18:10.157132 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.157367 kubelet[3460]: W0904 17:18:10.157164 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.157367 kubelet[3460]: E0904 17:18:10.157200 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.158331 kubelet[3460]: E0904 17:18:10.157910 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.158331 kubelet[3460]: W0904 17:18:10.157935 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.158331 kubelet[3460]: E0904 17:18:10.157967 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.158900 kubelet[3460]: E0904 17:18:10.158689 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.158900 kubelet[3460]: W0904 17:18:10.158716 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.158900 kubelet[3460]: E0904 17:18:10.158746 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.160440 kubelet[3460]: E0904 17:18:10.160286 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.160440 kubelet[3460]: W0904 17:18:10.160316 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.160440 kubelet[3460]: E0904 17:18:10.160350 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.163041 kubelet[3460]: E0904 17:18:10.161040 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.163041 kubelet[3460]: W0904 17:18:10.161065 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.163041 kubelet[3460]: E0904 17:18:10.161093 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.163750 kubelet[3460]: E0904 17:18:10.163599 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.163750 kubelet[3460]: W0904 17:18:10.163628 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.163750 kubelet[3460]: E0904 17:18:10.163663 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:18:10.165750 kubelet[3460]: E0904 17:18:10.165498 3460 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:18:10.165750 kubelet[3460]: W0904 17:18:10.165546 3460 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:18:10.165750 kubelet[3460]: E0904 17:18:10.165584 3460 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:18:10.166695 systemd[1]: Created slice kubepods-besteffort-pod92ded563_c73b_44b6_ba9c_77a581a92349.slice - libcontainer container kubepods-besteffort-pod92ded563_c73b_44b6_ba9c_77a581a92349.slice. Sep 4 17:18:10.202927 systemd[1]: cri-containerd-9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8.scope: Deactivated successfully. 
Sep 4 17:18:10.248272 kubelet[3460]: I0904 17:18:10.247528 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/46fc87cd-7df0-4aca-9917-30f8aac4a31d-typha-certs\") pod \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " Sep 4 17:18:10.248272 kubelet[3460]: I0904 17:18:10.247639 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46fc87cd-7df0-4aca-9917-30f8aac4a31d-tigera-ca-bundle\") pod \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " Sep 4 17:18:10.248272 kubelet[3460]: I0904 17:18:10.247720 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlcw5\" (UniqueName: \"kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5\") pod \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\" (UID: \"46fc87cd-7df0-4aca-9917-30f8aac4a31d\") " Sep 4 17:18:10.248272 kubelet[3460]: I0904 17:18:10.247863 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92ded563-c73b-44b6-ba9c-77a581a92349-tigera-ca-bundle\") pod \"calico-typha-58864b494c-ng8gr\" (UID: \"92ded563-c73b-44b6-ba9c-77a581a92349\") " pod="calico-system/calico-typha-58864b494c-ng8gr" Sep 4 17:18:10.248272 kubelet[3460]: I0904 17:18:10.247938 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj4wq\" (UniqueName: \"kubernetes.io/projected/92ded563-c73b-44b6-ba9c-77a581a92349-kube-api-access-bj4wq\") pod \"calico-typha-58864b494c-ng8gr\" (UID: \"92ded563-c73b-44b6-ba9c-77a581a92349\") " pod="calico-system/calico-typha-58864b494c-ng8gr" Sep 4 17:18:10.248649 kubelet[3460]: I0904 17:18:10.248017 3460 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/92ded563-c73b-44b6-ba9c-77a581a92349-typha-certs\") pod \"calico-typha-58864b494c-ng8gr\" (UID: \"92ded563-c73b-44b6-ba9c-77a581a92349\") " pod="calico-system/calico-typha-58864b494c-ng8gr" Sep 4 17:18:10.260938 kubelet[3460]: I0904 17:18:10.260377 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fc87cd-7df0-4aca-9917-30f8aac4a31d-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "46fc87cd-7df0-4aca-9917-30f8aac4a31d" (UID: "46fc87cd-7df0-4aca-9917-30f8aac4a31d"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:18:10.275519 kubelet[3460]: I0904 17:18:10.274184 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5" (OuterVolumeSpecName: "kube-api-access-hlcw5") pod "46fc87cd-7df0-4aca-9917-30f8aac4a31d" (UID: "46fc87cd-7df0-4aca-9917-30f8aac4a31d"). InnerVolumeSpecName "kube-api-access-hlcw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:18:10.281501 kubelet[3460]: I0904 17:18:10.281331 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46fc87cd-7df0-4aca-9917-30f8aac4a31d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "46fc87cd-7df0-4aca-9917-30f8aac4a31d" (UID: "46fc87cd-7df0-4aca-9917-30f8aac4a31d"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:18:10.284295 containerd[2004]: time="2024-09-04T17:18:10.282891972Z" level=info msg="StopContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" with timeout 5 (s)" Sep 4 17:18:10.295415 kubelet[3460]: I0904 17:18:10.295133 3460 scope.go:117] "RemoveContainer" containerID="702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431" Sep 4 17:18:10.295691 containerd[2004]: time="2024-09-04T17:18:10.294940440Z" level=info msg="Stop container \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" with signal terminated" Sep 4 17:18:10.296529 containerd[2004]: time="2024-09-04T17:18:10.296055984Z" level=info msg="shim disconnected" id=9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8 namespace=k8s.io Sep 4 17:18:10.296529 containerd[2004]: time="2024-09-04T17:18:10.296475060Z" level=warning msg="cleaning up after shim disconnected" id=9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8 namespace=k8s.io Sep 4 17:18:10.297132 containerd[2004]: time="2024-09-04T17:18:10.296500884Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:10.303442 containerd[2004]: time="2024-09-04T17:18:10.303380388Z" level=info msg="RemoveContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\"" Sep 4 17:18:10.312806 containerd[2004]: time="2024-09-04T17:18:10.312593604Z" level=info msg="RemoveContainer for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" returns successfully" Sep 4 17:18:10.313395 kubelet[3460]: I0904 17:18:10.313237 3460 scope.go:117] "RemoveContainer" containerID="702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431" Sep 4 17:18:10.314874 containerd[2004]: time="2024-09-04T17:18:10.314306508Z" level=error msg="ContainerStatus for \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\" failed" error="rpc error: code = NotFound desc = an error occurred 
when try to find container \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\": not found" Sep 4 17:18:10.315030 kubelet[3460]: E0904 17:18:10.314655 3460 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\": not found" containerID="702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431" Sep 4 17:18:10.315030 kubelet[3460]: I0904 17:18:10.314746 3460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431"} err="failed to get container status \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\": rpc error: code = NotFound desc = an error occurred when try to find container \"702c388872180834d5120d37c2f0b2700a943123c58a824c97648b741bfe9431\": not found" Sep 4 17:18:10.329587 systemd[1]: Removed slice kubepods-besteffort-pod46fc87cd_7df0_4aca_9917_30f8aac4a31d.slice - libcontainer container kubepods-besteffort-pod46fc87cd_7df0_4aca_9917_30f8aac4a31d.slice. 
Sep 4 17:18:10.351118 kubelet[3460]: I0904 17:18:10.349137 3460 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hlcw5\" (UniqueName: \"kubernetes.io/projected/46fc87cd-7df0-4aca-9917-30f8aac4a31d-kube-api-access-hlcw5\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.351941 kubelet[3460]: I0904 17:18:10.351841 3460 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/46fc87cd-7df0-4aca-9917-30f8aac4a31d-typha-certs\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.351941 kubelet[3460]: I0904 17:18:10.351889 3460 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/46fc87cd-7df0-4aca-9917-30f8aac4a31d-tigera-ca-bundle\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.392319 containerd[2004]: time="2024-09-04T17:18:10.392027617Z" level=info msg="StopContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" returns successfully" Sep 4 17:18:10.394087 containerd[2004]: time="2024-09-04T17:18:10.393583681Z" level=info msg="StopPodSandbox for \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\"" Sep 4 17:18:10.394087 containerd[2004]: time="2024-09-04T17:18:10.393657133Z" level=info msg="Container to stop \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:18:10.429285 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8-rootfs.mount: Deactivated successfully. Sep 4 17:18:10.429474 systemd[1]: var-lib-kubelet-pods-46fc87cd\x2d7df0\x2d4aca\x2d9917\x2d30f8aac4a31d-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. 
Sep 4 17:18:10.429620 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda-shm.mount: Deactivated successfully. Sep 4 17:18:10.429757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46-rootfs.mount: Deactivated successfully. Sep 4 17:18:10.429922 systemd[1]: var-lib-kubelet-pods-46fc87cd\x2d7df0\x2d4aca\x2d9917\x2d30f8aac4a31d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlcw5.mount: Deactivated successfully. Sep 4 17:18:10.430061 systemd[1]: var-lib-kubelet-pods-46fc87cd\x2d7df0\x2d4aca\x2d9917\x2d30f8aac4a31d-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Sep 4 17:18:10.440084 systemd[1]: cri-containerd-6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda.scope: Deactivated successfully. Sep 4 17:18:10.478325 containerd[2004]: time="2024-09-04T17:18:10.478069585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58864b494c-ng8gr,Uid:92ded563-c73b-44b6-ba9c-77a581a92349,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:10.496085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda-rootfs.mount: Deactivated successfully. 
Sep 4 17:18:10.516645 containerd[2004]: time="2024-09-04T17:18:10.516550453Z" level=info msg="shim disconnected" id=6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda namespace=k8s.io Sep 4 17:18:10.516645 containerd[2004]: time="2024-09-04T17:18:10.516634993Z" level=warning msg="cleaning up after shim disconnected" id=6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda namespace=k8s.io Sep 4 17:18:10.517838 containerd[2004]: time="2024-09-04T17:18:10.516657385Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:10.576299 containerd[2004]: time="2024-09-04T17:18:10.574489010Z" level=info msg="TearDown network for sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" successfully" Sep 4 17:18:10.576299 containerd[2004]: time="2024-09-04T17:18:10.574545518Z" level=info msg="StopPodSandbox for \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" returns successfully" Sep 4 17:18:10.591423 containerd[2004]: time="2024-09-04T17:18:10.589288838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:10.591423 containerd[2004]: time="2024-09-04T17:18:10.589405262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:10.591423 containerd[2004]: time="2024-09-04T17:18:10.589487762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:10.591423 containerd[2004]: time="2024-09-04T17:18:10.589684178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:10.664161 systemd[1]: Started cri-containerd-a36ac5c29dba32244c498b26ed07465b2e58fd05305e375e715cc960afbd4ded.scope - libcontainer container a36ac5c29dba32244c498b26ed07465b2e58fd05305e375e715cc960afbd4ded. Sep 4 17:18:10.759944 kubelet[3460]: I0904 17:18:10.759679 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-policysync\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.759944 kubelet[3460]: I0904 17:18:10.759764 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-run-calico\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.759944 kubelet[3460]: I0904 17:18:10.759823 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-policysync" (OuterVolumeSpecName: "policysync") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.759944 kubelet[3460]: I0904 17:18:10.759861 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.759944 kubelet[3460]: I0904 17:18:10.759830 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-lib-calico\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.759898 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.759929 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-net-dir\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.759976 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2d318da0-2ff1-4ef6-a636-a6fe53db8259-node-certs\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.760018 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-lib-modules\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.760017 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.760965 kubelet[3460]: I0904 17:18:10.760068 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-flexvol-driver-host\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760133 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6jwf\" (UniqueName: \"kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760177 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-bin-dir\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760219 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-xtables-lock\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760261 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-log-dir\") pod 
\"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760309 3460 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d318da0-2ff1-4ef6-a636-a6fe53db8259-tigera-ca-bundle\") pod \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\" (UID: \"2d318da0-2ff1-4ef6-a636-a6fe53db8259\") " Sep 4 17:18:10.761300 kubelet[3460]: I0904 17:18:10.760377 3460 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-policysync\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.761618 kubelet[3460]: I0904 17:18:10.760404 3460 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-run-calico\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.761618 kubelet[3460]: I0904 17:18:10.760428 3460 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-var-lib-calico\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.761618 kubelet[3460]: I0904 17:18:10.761096 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d318da0-2ff1-4ef6-a636-a6fe53db8259-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:18:10.761618 kubelet[3460]: I0904 17:18:10.761151 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.761618 kubelet[3460]: I0904 17:18:10.761192 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.764718 kubelet[3460]: I0904 17:18:10.764505 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.764718 kubelet[3460]: I0904 17:18:10.764625 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.764718 kubelet[3460]: I0904 17:18:10.764670 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:18:10.773652 kubelet[3460]: I0904 17:18:10.773594 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d318da0-2ff1-4ef6-a636-a6fe53db8259-node-certs" (OuterVolumeSpecName: "node-certs") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:18:10.781606 containerd[2004]: time="2024-09-04T17:18:10.781481331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58864b494c-ng8gr,Uid:92ded563-c73b-44b6-ba9c-77a581a92349,Namespace:calico-system,Attempt:0,} returns sandbox id \"a36ac5c29dba32244c498b26ed07465b2e58fd05305e375e715cc960afbd4ded\"" Sep 4 17:18:10.783517 kubelet[3460]: I0904 17:18:10.783202 3460 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf" (OuterVolumeSpecName: "kube-api-access-q6jwf") pod "2d318da0-2ff1-4ef6-a636-a6fe53db8259" (UID: "2d318da0-2ff1-4ef6-a636-a6fe53db8259"). InnerVolumeSpecName "kube-api-access-q6jwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:18:10.807157 containerd[2004]: time="2024-09-04T17:18:10.807047175Z" level=info msg="CreateContainer within sandbox \"a36ac5c29dba32244c498b26ed07465b2e58fd05305e375e715cc960afbd4ded\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:18:10.833056 containerd[2004]: time="2024-09-04T17:18:10.832354827Z" level=info msg="CreateContainer within sandbox \"a36ac5c29dba32244c498b26ed07465b2e58fd05305e375e715cc960afbd4ded\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bcb9a28c91f44f3ef140ce0216133baa1991f96bcb9293cba373d40779afc9bd\"" Sep 4 17:18:10.835018 containerd[2004]: time="2024-09-04T17:18:10.834959823Z" level=info msg="StartContainer for \"bcb9a28c91f44f3ef140ce0216133baa1991f96bcb9293cba373d40779afc9bd\"" Sep 4 17:18:10.861067 kubelet[3460]: I0904 17:18:10.861003 3460 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-net-dir\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861067 kubelet[3460]: I0904 17:18:10.861072 3460 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2d318da0-2ff1-4ef6-a636-a6fe53db8259-node-certs\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861100 3460 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-lib-modules\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861126 3460 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-flexvol-driver-host\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861153 3460 
reconciler_common.go:300] "Volume detached for volume \"kube-api-access-q6jwf\" (UniqueName: \"kubernetes.io/projected/2d318da0-2ff1-4ef6-a636-a6fe53db8259-kube-api-access-q6jwf\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861182 3460 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-bin-dir\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861218 3460 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-xtables-lock\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861243 3460 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2d318da0-2ff1-4ef6-a636-a6fe53db8259-cni-log-dir\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.861623 kubelet[3460]: I0904 17:18:10.861267 3460 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d318da0-2ff1-4ef6-a636-a6fe53db8259-tigera-ca-bundle\") on node \"ip-172-31-22-219\" DevicePath \"\"" Sep 4 17:18:10.896176 systemd[1]: Started cri-containerd-bcb9a28c91f44f3ef140ce0216133baa1991f96bcb9293cba373d40779afc9bd.scope - libcontainer container bcb9a28c91f44f3ef140ce0216133baa1991f96bcb9293cba373d40779afc9bd. 
Sep 4 17:18:11.083158 containerd[2004]: time="2024-09-04T17:18:11.082971612Z" level=info msg="StartContainer for \"bcb9a28c91f44f3ef140ce0216133baa1991f96bcb9293cba373d40779afc9bd\" returns successfully" Sep 4 17:18:11.088275 kubelet[3460]: I0904 17:18:11.088191 3460 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="46fc87cd-7df0-4aca-9917-30f8aac4a31d" path="/var/lib/kubelet/pods/46fc87cd-7df0-4aca-9917-30f8aac4a31d/volumes" Sep 4 17:18:11.112572 systemd[1]: Removed slice kubepods-besteffort-pod2d318da0_2ff1_4ef6_a636_a6fe53db8259.slice - libcontainer container kubepods-besteffort-pod2d318da0_2ff1_4ef6_a636_a6fe53db8259.slice. Sep 4 17:18:11.305831 kubelet[3460]: I0904 17:18:11.305680 3460 scope.go:117] "RemoveContainer" containerID="9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8" Sep 4 17:18:11.315712 containerd[2004]: time="2024-09-04T17:18:11.315415477Z" level=info msg="RemoveContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\"" Sep 4 17:18:11.323270 containerd[2004]: time="2024-09-04T17:18:11.322919401Z" level=info msg="RemoveContainer for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" returns successfully" Sep 4 17:18:11.323745 kubelet[3460]: I0904 17:18:11.323476 3460 scope.go:117] "RemoveContainer" containerID="9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8" Sep 4 17:18:11.328977 containerd[2004]: time="2024-09-04T17:18:11.328365385Z" level=error msg="ContainerStatus for \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\": not found" Sep 4 17:18:11.329930 kubelet[3460]: E0904 17:18:11.329342 3460 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\": not found" containerID="9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8" Sep 4 17:18:11.329930 kubelet[3460]: I0904 17:18:11.329521 3460 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8"} err="failed to get container status \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c51267f1940317fab75826a2fb03a33d96b48db4035fc872cea41fe5b114bc8\": not found" Sep 4 17:18:11.388913 kubelet[3460]: I0904 17:18:11.385380 3460 topology_manager.go:215] "Topology Admit Handler" podUID="c79f833e-a804-41eb-a3db-5b51444c0191" podNamespace="calico-system" podName="calico-node-zm4mp" Sep 4 17:18:11.388913 kubelet[3460]: E0904 17:18:11.385468 3460 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2d318da0-2ff1-4ef6-a636-a6fe53db8259" containerName="flexvol-driver" Sep 4 17:18:11.388913 kubelet[3460]: I0904 17:18:11.385520 3460 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d318da0-2ff1-4ef6-a636-a6fe53db8259" containerName="flexvol-driver" Sep 4 17:18:11.405501 systemd[1]: Created slice kubepods-besteffort-podc79f833e_a804_41eb_a3db_5b51444c0191.slice - libcontainer container kubepods-besteffort-podc79f833e_a804_41eb_a3db_5b51444c0191.slice. Sep 4 17:18:11.435377 systemd[1]: var-lib-kubelet-pods-2d318da0\x2d2ff1\x2d4ef6\x2da636\x2da6fe53db8259-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6jwf.mount: Deactivated successfully. Sep 4 17:18:11.435584 systemd[1]: var-lib-kubelet-pods-2d318da0\x2d2ff1\x2d4ef6\x2da636\x2da6fe53db8259-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Sep 4 17:18:11.468033 kubelet[3460]: I0904 17:18:11.467204 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-58864b494c-ng8gr" podStartSLOduration=5.467147666 podStartE2EDuration="5.467147666s" podCreationTimestamp="2024-09-04 17:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:11.426179258 +0000 UTC m=+28.656479484" watchObservedRunningTime="2024-09-04 17:18:11.467147666 +0000 UTC m=+28.697447916" Sep 4 17:18:11.567174 kubelet[3460]: I0904 17:18:11.566721 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-lib-modules\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.567174 kubelet[3460]: I0904 17:18:11.566922 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-cni-bin-dir\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.567174 kubelet[3460]: I0904 17:18:11.567033 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-cni-net-dir\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.567174 kubelet[3460]: I0904 17:18:11.567155 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79f833e-a804-41eb-a3db-5b51444c0191-tigera-ca-bundle\") pod 
\"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.567492 kubelet[3460]: I0904 17:18:11.567268 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-var-run-calico\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.567492 kubelet[3460]: I0904 17:18:11.567383 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s68jq\" (UniqueName: \"kubernetes.io/projected/c79f833e-a804-41eb-a3db-5b51444c0191-kube-api-access-s68jq\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568562 kubelet[3460]: I0904 17:18:11.567985 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-policysync\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568562 kubelet[3460]: I0904 17:18:11.568050 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-cni-log-dir\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568562 kubelet[3460]: I0904 17:18:11.568141 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-flexvol-driver-host\") pod \"calico-node-zm4mp\" (UID: 
\"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568562 kubelet[3460]: I0904 17:18:11.568248 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-xtables-lock\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568562 kubelet[3460]: I0904 17:18:11.568309 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c79f833e-a804-41eb-a3db-5b51444c0191-var-lib-calico\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.568889 kubelet[3460]: I0904 17:18:11.568376 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c79f833e-a804-41eb-a3db-5b51444c0191-node-certs\") pod \"calico-node-zm4mp\" (UID: \"c79f833e-a804-41eb-a3db-5b51444c0191\") " pod="calico-system/calico-node-zm4mp" Sep 4 17:18:11.722561 containerd[2004]: time="2024-09-04T17:18:11.722510451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm4mp,Uid:c79f833e-a804-41eb-a3db-5b51444c0191,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:11.775575 containerd[2004]: time="2024-09-04T17:18:11.775174768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:11.775575 containerd[2004]: time="2024-09-04T17:18:11.775392472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:11.776307 containerd[2004]: time="2024-09-04T17:18:11.775513540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:11.778130 containerd[2004]: time="2024-09-04T17:18:11.777275692Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:11.827143 systemd[1]: Started cri-containerd-be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22.scope - libcontainer container be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22. Sep 4 17:18:11.883736 containerd[2004]: time="2024-09-04T17:18:11.883431532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm4mp,Uid:c79f833e-a804-41eb-a3db-5b51444c0191,Namespace:calico-system,Attempt:0,} returns sandbox id \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\"" Sep 4 17:18:11.893690 containerd[2004]: time="2024-09-04T17:18:11.893614660Z" level=info msg="CreateContainer within sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:18:11.924023 containerd[2004]: time="2024-09-04T17:18:11.923032480Z" level=info msg="CreateContainer within sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e\"" Sep 4 17:18:11.924394 containerd[2004]: time="2024-09-04T17:18:11.924322348Z" level=info msg="StartContainer for \"d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e\"" Sep 4 17:18:12.000120 systemd[1]: Started cri-containerd-d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e.scope - libcontainer container 
d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e. Sep 4 17:18:12.071687 kubelet[3460]: E0904 17:18:12.071214 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:12.083983 containerd[2004]: time="2024-09-04T17:18:12.083742517Z" level=info msg="StartContainer for \"d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e\" returns successfully" Sep 4 17:18:12.140857 systemd[1]: cri-containerd-d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e.scope: Deactivated successfully. Sep 4 17:18:12.214661 containerd[2004]: time="2024-09-04T17:18:12.214541642Z" level=info msg="shim disconnected" id=d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e namespace=k8s.io Sep 4 17:18:12.214661 containerd[2004]: time="2024-09-04T17:18:12.214641806Z" level=warning msg="cleaning up after shim disconnected" id=d5a31c7ee186f8d82761d09773c3e572299e276ddda1f43fad51241c60bdc57e namespace=k8s.io Sep 4 17:18:12.214661 containerd[2004]: time="2024-09-04T17:18:12.214663778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:12.346810 containerd[2004]: time="2024-09-04T17:18:12.346069802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:18:13.077719 kubelet[3460]: I0904 17:18:13.077007 3460 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2d318da0-2ff1-4ef6-a636-a6fe53db8259" path="/var/lib/kubelet/pods/2d318da0-2ff1-4ef6-a636-a6fe53db8259/volumes" Sep 4 17:18:14.071140 kubelet[3460]: E0904 17:18:14.071086 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:14.148311 kubelet[3460]: I0904 17:18:14.148253 3460 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 4 17:18:16.000210 containerd[2004]: time="2024-09-04T17:18:16.000140873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:16.003342 containerd[2004]: time="2024-09-04T17:18:16.003272249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:18:16.005396 containerd[2004]: time="2024-09-04T17:18:16.005314865Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:16.022111 containerd[2004]: time="2024-09-04T17:18:16.020256785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:16.025000 containerd[2004]: time="2024-09-04T17:18:16.024291089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.678154423s" Sep 4 17:18:16.025000 containerd[2004]: time="2024-09-04T17:18:16.024365657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:18:16.032686 containerd[2004]: time="2024-09-04T17:18:16.032612261Z" level=info msg="CreateContainer within 
sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:18:16.061540 containerd[2004]: time="2024-09-04T17:18:16.061464197Z" level=info msg="CreateContainer within sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343\"" Sep 4 17:18:16.065021 containerd[2004]: time="2024-09-04T17:18:16.063066341Z" level=info msg="StartContainer for \"f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343\"" Sep 4 17:18:16.071640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201686152.mount: Deactivated successfully. Sep 4 17:18:16.073830 kubelet[3460]: E0904 17:18:16.073138 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:16.154276 systemd[1]: Started cri-containerd-f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343.scope - libcontainer container f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343. Sep 4 17:18:16.302015 containerd[2004]: time="2024-09-04T17:18:16.301859106Z" level=info msg="StartContainer for \"f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343\" returns successfully" Sep 4 17:18:17.355478 systemd[1]: cri-containerd-f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343.scope: Deactivated successfully. Sep 4 17:18:17.397290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343-rootfs.mount: Deactivated successfully. 
Sep 4 17:18:17.405869 kubelet[3460]: I0904 17:18:17.405768 3460 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:18:17.443305 kubelet[3460]: I0904 17:18:17.443237 3460 topology_manager.go:215] "Topology Admit Handler" podUID="3514e90f-33f1-43e8-8b04-bc96331f0618" podNamespace="kube-system" podName="coredns-76f75df574-ncgw4" Sep 4 17:18:17.456106 kubelet[3460]: I0904 17:18:17.455838 3460 topology_manager.go:215] "Topology Admit Handler" podUID="408c6ec0-ae37-46f2-8a2e-26492274c3b6" podNamespace="kube-system" podName="coredns-76f75df574-cwjsl" Sep 4 17:18:17.456802 kubelet[3460]: I0904 17:18:17.456750 3460 topology_manager.go:215] "Topology Admit Handler" podUID="d8823db7-aaef-4420-b78a-28c3c358b6ff" podNamespace="calico-system" podName="calico-kube-controllers-85bf876798-6rhxn" Sep 4 17:18:17.470418 systemd[1]: Created slice kubepods-burstable-pod3514e90f_33f1_43e8_8b04_bc96331f0618.slice - libcontainer container kubepods-burstable-pod3514e90f_33f1_43e8_8b04_bc96331f0618.slice. Sep 4 17:18:17.502366 systemd[1]: Created slice kubepods-besteffort-podd8823db7_aaef_4420_b78a_28c3c358b6ff.slice - libcontainer container kubepods-besteffort-podd8823db7_aaef_4420_b78a_28c3c358b6ff.slice. Sep 4 17:18:17.517833 systemd[1]: Created slice kubepods-burstable-pod408c6ec0_ae37_46f2_8a2e_26492274c3b6.slice - libcontainer container kubepods-burstable-pod408c6ec0_ae37_46f2_8a2e_26492274c3b6.slice. 
Sep 4 17:18:17.615848 kubelet[3460]: I0904 17:18:17.615100 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d8823db7-aaef-4420-b78a-28c3c358b6ff-tigera-ca-bundle\") pod \"calico-kube-controllers-85bf876798-6rhxn\" (UID: \"d8823db7-aaef-4420-b78a-28c3c358b6ff\") " pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" Sep 4 17:18:17.615848 kubelet[3460]: I0904 17:18:17.615196 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr7b\" (UniqueName: \"kubernetes.io/projected/408c6ec0-ae37-46f2-8a2e-26492274c3b6-kube-api-access-xgr7b\") pod \"coredns-76f75df574-cwjsl\" (UID: \"408c6ec0-ae37-46f2-8a2e-26492274c3b6\") " pod="kube-system/coredns-76f75df574-cwjsl" Sep 4 17:18:17.615848 kubelet[3460]: I0904 17:18:17.615250 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfkng\" (UniqueName: \"kubernetes.io/projected/3514e90f-33f1-43e8-8b04-bc96331f0618-kube-api-access-nfkng\") pod \"coredns-76f75df574-ncgw4\" (UID: \"3514e90f-33f1-43e8-8b04-bc96331f0618\") " pod="kube-system/coredns-76f75df574-ncgw4" Sep 4 17:18:17.615848 kubelet[3460]: I0904 17:18:17.615299 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3514e90f-33f1-43e8-8b04-bc96331f0618-config-volume\") pod \"coredns-76f75df574-ncgw4\" (UID: \"3514e90f-33f1-43e8-8b04-bc96331f0618\") " pod="kube-system/coredns-76f75df574-ncgw4" Sep 4 17:18:17.615848 kubelet[3460]: I0904 17:18:17.615479 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/408c6ec0-ae37-46f2-8a2e-26492274c3b6-config-volume\") pod \"coredns-76f75df574-cwjsl\" (UID: 
\"408c6ec0-ae37-46f2-8a2e-26492274c3b6\") " pod="kube-system/coredns-76f75df574-cwjsl" Sep 4 17:18:17.616411 kubelet[3460]: I0904 17:18:17.615580 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75l9k\" (UniqueName: \"kubernetes.io/projected/d8823db7-aaef-4420-b78a-28c3c358b6ff-kube-api-access-75l9k\") pod \"calico-kube-controllers-85bf876798-6rhxn\" (UID: \"d8823db7-aaef-4420-b78a-28c3c358b6ff\") " pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" Sep 4 17:18:17.789237 containerd[2004]: time="2024-09-04T17:18:17.789159069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ncgw4,Uid:3514e90f-33f1-43e8-8b04-bc96331f0618,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:17.817711 containerd[2004]: time="2024-09-04T17:18:17.817660654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bf876798-6rhxn,Uid:d8823db7-aaef-4420-b78a-28c3c358b6ff,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:17.830092 containerd[2004]: time="2024-09-04T17:18:17.830016046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cwjsl,Uid:408c6ec0-ae37-46f2-8a2e-26492274c3b6,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:18.091711 systemd[1]: Created slice kubepods-besteffort-pod88b22ba2_b48a_4946_b09c_791f6c8e7d48.slice - libcontainer container kubepods-besteffort-pod88b22ba2_b48a_4946_b09c_791f6c8e7d48.slice. 
Sep 4 17:18:18.096585 containerd[2004]: time="2024-09-04T17:18:18.096526159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8t28,Uid:88b22ba2-b48a-4946-b09c-791f6c8e7d48,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:18.144256 containerd[2004]: time="2024-09-04T17:18:18.143972959Z" level=info msg="shim disconnected" id=f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343 namespace=k8s.io Sep 4 17:18:18.144434 containerd[2004]: time="2024-09-04T17:18:18.144255787Z" level=warning msg="cleaning up after shim disconnected" id=f4fd99c09fe192e2451313bf463a7d215a5790f04c5127b7229f8ca568e16343 namespace=k8s.io Sep 4 17:18:18.144434 containerd[2004]: time="2024-09-04T17:18:18.144312907Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:18.393728 containerd[2004]: time="2024-09-04T17:18:18.391488212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:18:18.480166 containerd[2004]: time="2024-09-04T17:18:18.480046773Z" level=error msg="Failed to destroy network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.482878 containerd[2004]: time="2024-09-04T17:18:18.480947649Z" level=error msg="encountered an error cleaning up failed sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.482878 containerd[2004]: time="2024-09-04T17:18:18.481651641Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-cwjsl,Uid:408c6ec0-ae37-46f2-8a2e-26492274c3b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.484128 kubelet[3460]: E0904 17:18:18.484040 3460 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.486839 kubelet[3460]: E0904 17:18:18.484534 3460 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cwjsl" Sep 4 17:18:18.486839 kubelet[3460]: E0904 17:18:18.485028 3460 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-cwjsl" Sep 4 17:18:18.486839 kubelet[3460]: E0904 17:18:18.485133 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-76f75df574-cwjsl_kube-system(408c6ec0-ae37-46f2-8a2e-26492274c3b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cwjsl_kube-system(408c6ec0-ae37-46f2-8a2e-26492274c3b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cwjsl" podUID="408c6ec0-ae37-46f2-8a2e-26492274c3b6" Sep 4 17:18:18.486721 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0-shm.mount: Deactivated successfully. Sep 4 17:18:18.505448 containerd[2004]: time="2024-09-04T17:18:18.505116957Z" level=error msg="Failed to destroy network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.512748 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e-shm.mount: Deactivated successfully. 
Sep 4 17:18:18.515317 containerd[2004]: time="2024-09-04T17:18:18.512461401Z" level=error msg="encountered an error cleaning up failed sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.517318 containerd[2004]: time="2024-09-04T17:18:18.513355209Z" level=error msg="Failed to destroy network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.525500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a-shm.mount: Deactivated successfully. 
Sep 4 17:18:18.527262 containerd[2004]: time="2024-09-04T17:18:18.526125753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ncgw4,Uid:3514e90f-33f1-43e8-8b04-bc96331f0618,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.527934 containerd[2004]: time="2024-09-04T17:18:18.527688393Z" level=error msg="encountered an error cleaning up failed sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.527934 containerd[2004]: time="2024-09-04T17:18:18.527820825Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bf876798-6rhxn,Uid:d8823db7-aaef-4420-b78a-28c3c358b6ff,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.531590 kubelet[3460]: E0904 17:18:18.530933 3460 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 4 17:18:18.531590 kubelet[3460]: E0904 17:18:18.531025 3460 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" Sep 4 17:18:18.531590 kubelet[3460]: E0904 17:18:18.531521 3460 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" Sep 4 17:18:18.532209 kubelet[3460]: E0904 17:18:18.531916 3460 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.532209 kubelet[3460]: E0904 17:18:18.532136 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85bf876798-6rhxn_calico-system(d8823db7-aaef-4420-b78a-28c3c358b6ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85bf876798-6rhxn_calico-system(d8823db7-aaef-4420-b78a-28c3c358b6ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" podUID="d8823db7-aaef-4420-b78a-28c3c358b6ff" Sep 4 17:18:18.532209 kubelet[3460]: E0904 17:18:18.532153 3460 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ncgw4" Sep 4 17:18:18.532517 kubelet[3460]: E0904 17:18:18.532209 3460 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ncgw4" Sep 4 17:18:18.532517 kubelet[3460]: E0904 17:18:18.532277 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ncgw4_kube-system(3514e90f-33f1-43e8-8b04-bc96331f0618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ncgw4_kube-system(3514e90f-33f1-43e8-8b04-bc96331f0618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ncgw4" podUID="3514e90f-33f1-43e8-8b04-bc96331f0618" Sep 4 17:18:18.541084 containerd[2004]: time="2024-09-04T17:18:18.540981789Z" level=error msg="Failed to destroy network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.542364 containerd[2004]: time="2024-09-04T17:18:18.542271657Z" level=error msg="encountered an error cleaning up failed sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.544919 containerd[2004]: time="2024-09-04T17:18:18.542393457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8t28,Uid:88b22ba2-b48a-4946-b09c-791f6c8e7d48,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.546891 kubelet[3460]: E0904 17:18:18.545807 3460 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:18.546891 
kubelet[3460]: E0904 17:18:18.545931 3460 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:18.546891 kubelet[3460]: E0904 17:18:18.545972 3460 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8t28" Sep 4 17:18:18.548022 kubelet[3460]: E0904 17:18:18.547957 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8t28_calico-system(88b22ba2-b48a-4946-b09c-791f6c8e7d48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8t28_calico-system(88b22ba2-b48a-4946-b09c-791f6c8e7d48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:18.549815 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888-shm.mount: Deactivated successfully. 
Sep 4 17:18:19.388539 kubelet[3460]: I0904 17:18:19.388480 3460 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:19.391193 containerd[2004]: time="2024-09-04T17:18:19.390355629Z" level=info msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" Sep 4 17:18:19.391193 containerd[2004]: time="2024-09-04T17:18:19.390744129Z" level=info msg="Ensure that sandbox dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e in task-service has been cleanup successfully" Sep 4 17:18:19.395556 kubelet[3460]: I0904 17:18:19.394926 3460 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:19.396515 containerd[2004]: time="2024-09-04T17:18:19.396465513Z" level=info msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" Sep 4 17:18:19.396842 containerd[2004]: time="2024-09-04T17:18:19.396759765Z" level=info msg="Ensure that sandbox 8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a in task-service has been cleanup successfully" Sep 4 17:18:19.404777 kubelet[3460]: I0904 17:18:19.404691 3460 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:19.413814 containerd[2004]: time="2024-09-04T17:18:19.413317894Z" level=info msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" Sep 4 17:18:19.413814 containerd[2004]: time="2024-09-04T17:18:19.413623678Z" level=info msg="Ensure that sandbox 852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0 in task-service has been cleanup successfully" Sep 4 17:18:19.416098 kubelet[3460]: I0904 17:18:19.416035 3460 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:19.420966 containerd[2004]: time="2024-09-04T17:18:19.420752878Z" level=info msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" Sep 4 17:18:19.425937 containerd[2004]: time="2024-09-04T17:18:19.424689706Z" level=info msg="Ensure that sandbox a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888 in task-service has been cleanup successfully" Sep 4 17:18:19.541413 containerd[2004]: time="2024-09-04T17:18:19.541336558Z" level=error msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" failed" error="failed to destroy network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:19.542162 kubelet[3460]: E0904 17:18:19.542103 3460 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:19.542971 kubelet[3460]: E0904 17:18:19.542181 3460 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e"} Sep 4 17:18:19.542971 kubelet[3460]: E0904 17:18:19.542486 3460 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3514e90f-33f1-43e8-8b04-bc96331f0618\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:19.542971 kubelet[3460]: E0904 17:18:19.542546 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3514e90f-33f1-43e8-8b04-bc96331f0618\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ncgw4" podUID="3514e90f-33f1-43e8-8b04-bc96331f0618" Sep 4 17:18:19.544973 containerd[2004]: time="2024-09-04T17:18:19.544901122Z" level=error msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" failed" error="failed to destroy network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:19.546053 kubelet[3460]: E0904 17:18:19.545696 3460 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 
17:18:19.546053 kubelet[3460]: E0904 17:18:19.545769 3460 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a"} Sep 4 17:18:19.546053 kubelet[3460]: E0904 17:18:19.545985 3460 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8823db7-aaef-4420-b78a-28c3c358b6ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:19.546529 kubelet[3460]: E0904 17:18:19.546407 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8823db7-aaef-4420-b78a-28c3c358b6ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" podUID="d8823db7-aaef-4420-b78a-28c3c358b6ff" Sep 4 17:18:19.566858 containerd[2004]: time="2024-09-04T17:18:19.566753134Z" level=error msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" failed" error="failed to destroy network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:19.567398 kubelet[3460]: E0904 17:18:19.567363 3460 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:19.567610 kubelet[3460]: E0904 17:18:19.567574 3460 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888"} Sep 4 17:18:19.567915 kubelet[3460]: E0904 17:18:19.567766 3460 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:19.568147 kubelet[3460]: E0904 17:18:19.567891 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88b22ba2-b48a-4946-b09c-791f6c8e7d48\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8t28" podUID="88b22ba2-b48a-4946-b09c-791f6c8e7d48" Sep 4 17:18:19.574227 containerd[2004]: time="2024-09-04T17:18:19.574059454Z" level=error 
msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" failed" error="failed to destroy network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:19.574528 kubelet[3460]: E0904 17:18:19.574411 3460 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:19.574528 kubelet[3460]: E0904 17:18:19.574471 3460 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0"} Sep 4 17:18:19.574661 kubelet[3460]: E0904 17:18:19.574533 3460 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"408c6ec0-ae37-46f2-8a2e-26492274c3b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:19.574661 kubelet[3460]: E0904 17:18:19.574594 3460 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"408c6ec0-ae37-46f2-8a2e-26492274c3b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-cwjsl" podUID="408c6ec0-ae37-46f2-8a2e-26492274c3b6" Sep 4 17:18:22.473419 systemd[1]: Started sshd@9-172.31.22.219:22-139.178.89.65:36608.service - OpenSSH per-connection server daemon (139.178.89.65:36608). Sep 4 17:18:22.671393 sshd[4749]: Accepted publickey for core from 139.178.89.65 port 36608 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:22.677612 sshd[4749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:22.691331 systemd-logind[1987]: New session 10 of user core. Sep 4 17:18:22.698678 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:18:23.027053 sshd[4749]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:23.036801 systemd[1]: sshd@9-172.31.22.219:22-139.178.89.65:36608.service: Deactivated successfully. Sep 4 17:18:23.043976 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:18:23.047406 systemd-logind[1987]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:18:23.051263 systemd-logind[1987]: Removed session 10. Sep 4 17:18:24.590306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933571116.mount: Deactivated successfully. 
Sep 4 17:18:24.644732 containerd[2004]: time="2024-09-04T17:18:24.644653407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:24.646204 containerd[2004]: time="2024-09-04T17:18:24.646147971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:18:24.647285 containerd[2004]: time="2024-09-04T17:18:24.647187327Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:24.651944 containerd[2004]: time="2024-09-04T17:18:24.651817180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:24.654246 containerd[2004]: time="2024-09-04T17:18:24.653200012Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.261515516s" Sep 4 17:18:24.654246 containerd[2004]: time="2024-09-04T17:18:24.653262856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:18:24.680460 containerd[2004]: time="2024-09-04T17:18:24.680389708Z" level=info msg="CreateContainer within sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:18:24.710833 containerd[2004]: time="2024-09-04T17:18:24.710730400Z" level=info msg="CreateContainer 
within sandbox \"be720ec80260831fc0cfa9b42f133792f0252506d2aac7530bd4a205e34d2a22\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742\"" Sep 4 17:18:24.712887 containerd[2004]: time="2024-09-04T17:18:24.711996484Z" level=info msg="StartContainer for \"32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742\"" Sep 4 17:18:24.761113 systemd[1]: Started cri-containerd-32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742.scope - libcontainer container 32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742. Sep 4 17:18:24.820131 containerd[2004]: time="2024-09-04T17:18:24.820041532Z" level=info msg="StartContainer for \"32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742\" returns successfully" Sep 4 17:18:24.962217 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:18:24.962473 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:18:27.280959 kernel: bpftool[4994]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:18:27.678641 systemd-networkd[1926]: vxlan.calico: Link UP Sep 4 17:18:27.678663 systemd-networkd[1926]: vxlan.calico: Gained carrier Sep 4 17:18:27.683572 (udev-worker)[4804]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:27.721009 (udev-worker)[4803]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:27.722423 (udev-worker)[5026]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:28.071157 systemd[1]: Started sshd@10-172.31.22.219:22-139.178.89.65:42106.service - OpenSSH per-connection server daemon (139.178.89.65:42106). 
Sep 4 17:18:28.264612 sshd[5055]: Accepted publickey for core from 139.178.89.65 port 42106 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:28.267372 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:28.277898 systemd-logind[1987]: New session 11 of user core. Sep 4 17:18:28.283045 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:18:28.537204 sshd[5055]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:28.544301 systemd[1]: sshd@10-172.31.22.219:22-139.178.89.65:42106.service: Deactivated successfully. Sep 4 17:18:28.547773 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:18:28.549620 systemd-logind[1987]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:18:28.552490 systemd-logind[1987]: Removed session 11. Sep 4 17:18:28.945537 systemd-networkd[1926]: vxlan.calico: Gained IPv6LL Sep 4 17:18:31.080172 containerd[2004]: time="2024-09-04T17:18:31.080005135Z" level=info msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" Sep 4 17:18:31.220732 kubelet[3460]: I0904 17:18:31.219237 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zm4mp" podStartSLOduration=7.91100149 podStartE2EDuration="20.21917282s" podCreationTimestamp="2024-09-04 17:18:11 +0000 UTC" firstStartedPulling="2024-09-04 17:18:12.345464258 +0000 UTC m=+29.575764436" lastFinishedPulling="2024-09-04 17:18:24.653635588 +0000 UTC m=+41.883935766" observedRunningTime="2024-09-04 17:18:25.484121944 +0000 UTC m=+42.714422158" watchObservedRunningTime="2024-09-04 17:18:31.21917282 +0000 UTC m=+48.449472998" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.221 [INFO][5091] k8s.go 608: Cleaning up netns ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.221 [INFO][5091] 
dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" iface="eth0" netns="/var/run/netns/cni-cfa6c2b6-40bb-342a-23c7-8edffd35b676" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.223 [INFO][5091] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" iface="eth0" netns="/var/run/netns/cni-cfa6c2b6-40bb-342a-23c7-8edffd35b676" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.224 [INFO][5091] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" iface="eth0" netns="/var/run/netns/cni-cfa6c2b6-40bb-342a-23c7-8edffd35b676" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.224 [INFO][5091] k8s.go 615: Releasing IP address(es) ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.224 [INFO][5091] utils.go 188: Calico CNI releasing IP address ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.269 [INFO][5097] ipam_plugin.go 417: Releasing address using handleID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.269 [INFO][5097] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.269 [INFO][5097] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.281 [WARNING][5097] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.281 [INFO][5097] ipam_plugin.go 445: Releasing address using workloadID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.283 [INFO][5097] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:31.290825 containerd[2004]: 2024-09-04 17:18:31.288 [INFO][5091] k8s.go 621: Teardown processing complete. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:31.292335 containerd[2004]: time="2024-09-04T17:18:31.292137957Z" level=info msg="TearDown network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" successfully" Sep 4 17:18:31.292335 containerd[2004]: time="2024-09-04T17:18:31.292189113Z" level=info msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" returns successfully" Sep 4 17:18:31.295147 containerd[2004]: time="2024-09-04T17:18:31.295075233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cwjsl,Uid:408c6ec0-ae37-46f2-8a2e-26492274c3b6,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:31.297583 systemd[1]: run-netns-cni\x2dcfa6c2b6\x2d40bb\x2d342a\x2d23c7\x2d8edffd35b676.mount: Deactivated successfully. Sep 4 17:18:31.546891 systemd-networkd[1926]: cali082e0eb6787: Link UP Sep 4 17:18:31.550336 (udev-worker)[5121]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:18:31.551411 systemd-networkd[1926]: cali082e0eb6787: Gained carrier Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.398 [INFO][5104] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0 coredns-76f75df574- kube-system 408c6ec0-ae37-46f2-8a2e-26492274c3b6 879 0 2024-09-04 17:17:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-219 coredns-76f75df574-cwjsl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali082e0eb6787 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.399 [INFO][5104] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.457 [INFO][5115] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" HandleID="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.481 [INFO][5115] ipam_plugin.go 270: Auto assigning IP ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" HandleID="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" 
Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ec260), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-219", "pod":"coredns-76f75df574-cwjsl", "timestamp":"2024-09-04 17:18:31.457923573 +0000 UTC"}, Hostname:"ip-172-31-22-219", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.481 [INFO][5115] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.481 [INFO][5115] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.481 [INFO][5115] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-219' Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.484 [INFO][5115] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.490 [INFO][5115] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.504 [INFO][5115] ipam.go 489: Trying affinity for 192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.509 [INFO][5115] ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.514 [INFO][5115] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.514 [INFO][5115] ipam.go 1180: Attempting to assign 1 addresses from block 
block=192.168.48.0/26 handle="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.517 [INFO][5115] ipam.go 1685: Creating new handle: k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311 Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.525 [INFO][5115] ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.532 [INFO][5115] ipam.go 1216: Successfully claimed IPs: [192.168.48.1/26] block=192.168.48.0/26 handle="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.533 [INFO][5115] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.1/26] handle="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" host="ip-172-31-22-219" Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.533 [INFO][5115] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:31.579033 containerd[2004]: 2024-09-04 17:18:31.533 [INFO][5115] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.48.1/26] IPv6=[] ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" HandleID="k8s-pod-network.23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.536 [INFO][5104] k8s.go 386: Populated endpoint ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"408c6ec0-ae37-46f2-8a2e-26492274c3b6", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"", Pod:"coredns-76f75df574-cwjsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082e0eb6787", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.536 [INFO][5104] k8s.go 387: Calico CNI using IPs: [192.168.48.1/32] ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.537 [INFO][5104] dataplane_linux.go 68: Setting the host side veth name to cali082e0eb6787 ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.546 [INFO][5104] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.547 [INFO][5104] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"408c6ec0-ae37-46f2-8a2e-26492274c3b6", ResourceVersion:"879", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311", Pod:"coredns-76f75df574-cwjsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082e0eb6787", MAC:"e2:1e:8f:50:c4:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:31.580366 containerd[2004]: 2024-09-04 17:18:31.570 [INFO][5104] k8s.go 500: Wrote updated endpoint to datastore ContainerID="23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311" Namespace="kube-system" 
Pod="coredns-76f75df574-cwjsl" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:31.626898 containerd[2004]: time="2024-09-04T17:18:31.625565986Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:31.626898 containerd[2004]: time="2024-09-04T17:18:31.626403874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:31.626898 containerd[2004]: time="2024-09-04T17:18:31.626434438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:31.626898 containerd[2004]: time="2024-09-04T17:18:31.626605978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:31.677215 systemd[1]: Started cri-containerd-23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311.scope - libcontainer container 23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311. 
Sep 4 17:18:31.763425 containerd[2004]: time="2024-09-04T17:18:31.763222619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cwjsl,Uid:408c6ec0-ae37-46f2-8a2e-26492274c3b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311\"" Sep 4 17:18:31.785489 containerd[2004]: time="2024-09-04T17:18:31.785325911Z" level=info msg="CreateContainer within sandbox \"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:31.808631 containerd[2004]: time="2024-09-04T17:18:31.806682395Z" level=info msg="CreateContainer within sandbox \"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5632f651ddf2c23df4d121513e4bc73653eab408ace45a116bfff81f083c346c\"" Sep 4 17:18:31.809835 containerd[2004]: time="2024-09-04T17:18:31.809678543Z" level=info msg="StartContainer for \"5632f651ddf2c23df4d121513e4bc73653eab408ace45a116bfff81f083c346c\"" Sep 4 17:18:31.855102 systemd[1]: Started cri-containerd-5632f651ddf2c23df4d121513e4bc73653eab408ace45a116bfff81f083c346c.scope - libcontainer container 5632f651ddf2c23df4d121513e4bc73653eab408ace45a116bfff81f083c346c. 
Sep 4 17:18:31.899736 containerd[2004]: time="2024-09-04T17:18:31.899623680Z" level=info msg="StartContainer for \"5632f651ddf2c23df4d121513e4bc73653eab408ace45a116bfff81f083c346c\" returns successfully" Sep 4 17:18:32.073049 containerd[2004]: time="2024-09-04T17:18:32.072272840Z" level=info msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.154 [INFO][5225] k8s.go 608: Cleaning up netns ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.155 [INFO][5225] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" iface="eth0" netns="/var/run/netns/cni-9b2715a8-939d-b0a2-39ae-fd1024515012" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.156 [INFO][5225] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" iface="eth0" netns="/var/run/netns/cni-9b2715a8-939d-b0a2-39ae-fd1024515012" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.156 [INFO][5225] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" iface="eth0" netns="/var/run/netns/cni-9b2715a8-939d-b0a2-39ae-fd1024515012" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.156 [INFO][5225] k8s.go 615: Releasing IP address(es) ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.156 [INFO][5225] utils.go 188: Calico CNI releasing IP address ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.194 [INFO][5231] ipam_plugin.go 417: Releasing address using handleID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.194 [INFO][5231] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.194 [INFO][5231] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.208 [WARNING][5231] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.208 [INFO][5231] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.211 [INFO][5231] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.215954 containerd[2004]: 2024-09-04 17:18:32.213 [INFO][5225] k8s.go 621: Teardown processing complete. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:32.217271 containerd[2004]: time="2024-09-04T17:18:32.217159881Z" level=info msg="TearDown network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" successfully" Sep 4 17:18:32.217271 containerd[2004]: time="2024-09-04T17:18:32.217203489Z" level=info msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" returns successfully" Sep 4 17:18:32.218593 containerd[2004]: time="2024-09-04T17:18:32.218533557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bf876798-6rhxn,Uid:d8823db7-aaef-4420-b78a-28c3c358b6ff,Namespace:calico-system,Attempt:1,}" Sep 4 17:18:32.302317 systemd[1]: run-netns-cni\x2d9b2715a8\x2d939d\x2db0a2\x2d39ae\x2dfd1024515012.mount: Deactivated successfully. Sep 4 17:18:32.448119 (udev-worker)[5124]: Network interface NamePolicy= disabled on kernel command line. 
Sep 4 17:18:32.450833 systemd-networkd[1926]: calib8ee3a86b7c: Link UP Sep 4 17:18:32.453381 systemd-networkd[1926]: calib8ee3a86b7c: Gained carrier Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.298 [INFO][5238] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0 calico-kube-controllers-85bf876798- calico-system d8823db7-aaef-4420-b78a-28c3c358b6ff 893 0 2024-09-04 17:18:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85bf876798 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-219 calico-kube-controllers-85bf876798-6rhxn eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib8ee3a86b7c [] []}} ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.300 [INFO][5238] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.375 [INFO][5248] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" HandleID="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.485915 
containerd[2004]: 2024-09-04 17:18:32.397 [INFO][5248] ipam_plugin.go 270: Auto assigning IP ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" HandleID="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ea20), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-219", "pod":"calico-kube-controllers-85bf876798-6rhxn", "timestamp":"2024-09-04 17:18:32.375819922 +0000 UTC"}, Hostname:"ip-172-31-22-219", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.397 [INFO][5248] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.397 [INFO][5248] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.398 [INFO][5248] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-219' Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.401 [INFO][5248] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.408 [INFO][5248] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.415 [INFO][5248] ipam.go 489: Trying affinity for 192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.418 [INFO][5248] ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.422 [INFO][5248] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.422 [INFO][5248] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.425 [INFO][5248] ipam.go 1685: Creating new handle: k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.431 [INFO][5248] ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.439 [INFO][5248] ipam.go 1216: Successfully claimed IPs: [192.168.48.2/26] block=192.168.48.0/26 
handle="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.440 [INFO][5248] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.2/26] handle="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" host="ip-172-31-22-219" Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.440 [INFO][5248] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.485915 containerd[2004]: 2024-09-04 17:18:32.440 [INFO][5248] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.48.2/26] IPv6=[] ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" HandleID="k8s-pod-network.99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.443 [INFO][5238] k8s.go 386: Populated endpoint ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0", GenerateName:"calico-kube-controllers-85bf876798-", Namespace:"calico-system", SelfLink:"", UID:"d8823db7-aaef-4420-b78a-28c3c358b6ff", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bf876798", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"", Pod:"calico-kube-controllers-85bf876798-6rhxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8ee3a86b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.444 [INFO][5238] k8s.go 387: Calico CNI using IPs: [192.168.48.2/32] ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.444 [INFO][5238] dataplane_linux.go 68: Setting the host side veth name to calib8ee3a86b7c ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.453 [INFO][5238] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.454 [INFO][5238] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0", GenerateName:"calico-kube-controllers-85bf876798-", Namespace:"calico-system", SelfLink:"", UID:"d8823db7-aaef-4420-b78a-28c3c358b6ff", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bf876798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b", Pod:"calico-kube-controllers-85bf876798-6rhxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8ee3a86b7c", MAC:"9a:ec:02:6c:eb:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.494224 containerd[2004]: 2024-09-04 17:18:32.480 [INFO][5238] k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b" Namespace="calico-system" Pod="calico-kube-controllers-85bf876798-6rhxn" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:32.554593 kubelet[3460]: I0904 17:18:32.554519 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cwjsl" podStartSLOduration=37.554455163 podStartE2EDuration="37.554455163s" podCreationTimestamp="2024-09-04 17:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:32.527364671 +0000 UTC m=+49.757664885" watchObservedRunningTime="2024-09-04 17:18:32.554455163 +0000 UTC m=+49.784755353" Sep 4 17:18:32.564449 containerd[2004]: time="2024-09-04T17:18:32.563202755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:32.564449 containerd[2004]: time="2024-09-04T17:18:32.563328227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:32.564449 containerd[2004]: time="2024-09-04T17:18:32.563367167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:32.564449 containerd[2004]: time="2024-09-04T17:18:32.563547215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:32.628267 systemd[1]: Started cri-containerd-99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b.scope - libcontainer container 99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b. 
Sep 4 17:18:32.727438 containerd[2004]: time="2024-09-04T17:18:32.727378464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85bf876798-6rhxn,Uid:d8823db7-aaef-4420-b78a-28c3c358b6ff,Namespace:calico-system,Attempt:1,} returns sandbox id \"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b\"" Sep 4 17:18:32.733142 containerd[2004]: time="2024-09-04T17:18:32.730768656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:18:33.073753 containerd[2004]: time="2024-09-04T17:18:33.073570545Z" level=info msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.166 [INFO][5328] k8s.go 608: Cleaning up netns ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.166 [INFO][5328] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" iface="eth0" netns="/var/run/netns/cni-3306d79a-07e5-cc91-9a82-4454c50395c5" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.167 [INFO][5328] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" iface="eth0" netns="/var/run/netns/cni-3306d79a-07e5-cc91-9a82-4454c50395c5" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.168 [INFO][5328] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" iface="eth0" netns="/var/run/netns/cni-3306d79a-07e5-cc91-9a82-4454c50395c5" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.168 [INFO][5328] k8s.go 615: Releasing IP address(es) ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.168 [INFO][5328] utils.go 188: Calico CNI releasing IP address ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.209 [INFO][5340] ipam_plugin.go 417: Releasing address using handleID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.209 [INFO][5340] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.209 [INFO][5340] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.224 [WARNING][5340] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.224 [INFO][5340] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.227 [INFO][5340] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:33.232677 containerd[2004]: 2024-09-04 17:18:33.229 [INFO][5328] k8s.go 621: Teardown processing complete. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:33.236048 containerd[2004]: time="2024-09-04T17:18:33.235956058Z" level=info msg="TearDown network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" successfully" Sep 4 17:18:33.236048 containerd[2004]: time="2024-09-04T17:18:33.236037706Z" level=info msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" returns successfully" Sep 4 17:18:33.237220 containerd[2004]: time="2024-09-04T17:18:33.237157090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8t28,Uid:88b22ba2-b48a-4946-b09c-791f6c8e7d48,Namespace:calico-system,Attempt:1,}" Sep 4 17:18:33.241700 systemd[1]: run-netns-cni\x2d3306d79a\x2d07e5\x2dcc91\x2d9a82\x2d4454c50395c5.mount: Deactivated successfully. 
Sep 4 17:18:33.299550 systemd-networkd[1926]: cali082e0eb6787: Gained IPv6LL Sep 4 17:18:33.473751 systemd-networkd[1926]: cali9630875abdc: Link UP Sep 4 17:18:33.474507 systemd-networkd[1926]: cali9630875abdc: Gained carrier Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.326 [INFO][5347] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0 csi-node-driver- calico-system 88b22ba2-b48a-4946-b09c-791f6c8e7d48 914 0 2024-09-04 17:18:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-22-219 csi-node-driver-j8t28 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9630875abdc [] []}} ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.327 [INFO][5347] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.379 [INFO][5357] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" HandleID="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.398 [INFO][5357] ipam_plugin.go 270: Auto assigning 
IP ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" HandleID="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034a380), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-219", "pod":"csi-node-driver-j8t28", "timestamp":"2024-09-04 17:18:33.379309919 +0000 UTC"}, Hostname:"ip-172-31-22-219", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.398 [INFO][5357] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.398 [INFO][5357] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.398 [INFO][5357] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-219' Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.401 [INFO][5357] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.407 [INFO][5357] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.417 [INFO][5357] ipam.go 489: Trying affinity for 192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.421 [INFO][5357] ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.425 [INFO][5357] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 
host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.425 [INFO][5357] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.429 [INFO][5357] ipam.go 1685: Creating new handle: k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.434 [INFO][5357] ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.450 [INFO][5357] ipam.go 1216: Successfully claimed IPs: [192.168.48.3/26] block=192.168.48.0/26 handle="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.450 [INFO][5357] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.3/26] handle="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" host="ip-172-31-22-219" Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.450 [INFO][5357] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:33.513445 containerd[2004]: 2024-09-04 17:18:33.450 [INFO][5357] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.48.3/26] IPv6=[] ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" HandleID="k8s-pod-network.ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.462 [INFO][5347] k8s.go 386: Populated endpoint ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88b22ba2-b48a-4946-b09c-791f6c8e7d48", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"", Pod:"csi-node-driver-j8t28", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9630875abdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.463 [INFO][5347] k8s.go 387: Calico CNI using IPs: [192.168.48.3/32] ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.463 [INFO][5347] dataplane_linux.go 68: Setting the host side veth name to cali9630875abdc ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.473 [INFO][5347] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.477 [INFO][5347] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88b22ba2-b48a-4946-b09c-791f6c8e7d48", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 4, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f", Pod:"csi-node-driver-j8t28", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9630875abdc", MAC:"82:89:66:b5:0d:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:33.514620 containerd[2004]: 2024-09-04 17:18:33.509 [INFO][5347] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f" Namespace="calico-system" Pod="csi-node-driver-j8t28" WorkloadEndpoint="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:33.558292 containerd[2004]: time="2024-09-04T17:18:33.558092400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:33.559505 containerd[2004]: time="2024-09-04T17:18:33.558902004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:33.559505 containerd[2004]: time="2024-09-04T17:18:33.558995736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.559505 containerd[2004]: time="2024-09-04T17:18:33.559205496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:33.600772 systemd[1]: Started sshd@11-172.31.22.219:22-139.178.89.65:42110.service - OpenSSH per-connection server daemon (139.178.89.65:42110). Sep 4 17:18:33.624399 systemd[1]: run-containerd-runc-k8s.io-ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f-runc.IaHrO7.mount: Deactivated successfully. Sep 4 17:18:33.638129 systemd[1]: Started cri-containerd-ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f.scope - libcontainer container ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f. Sep 4 17:18:33.696415 containerd[2004]: time="2024-09-04T17:18:33.696266436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8t28,Uid:88b22ba2-b48a-4946-b09c-791f6c8e7d48,Namespace:calico-system,Attempt:1,} returns sandbox id \"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f\"" Sep 4 17:18:33.803719 sshd[5405]: Accepted publickey for core from 139.178.89.65 port 42110 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:33.811116 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:33.831142 systemd-logind[1987]: New session 12 of user core. Sep 4 17:18:33.838116 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:18:34.081413 containerd[2004]: time="2024-09-04T17:18:34.075760330Z" level=info msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" Sep 4 17:18:34.190354 sshd[5405]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:34.201588 systemd[1]: sshd@11-172.31.22.219:22-139.178.89.65:42110.service: Deactivated successfully. 
Sep 4 17:18:34.209909 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:18:34.217958 systemd-logind[1987]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:18:34.244518 systemd[1]: Started sshd@12-172.31.22.219:22-139.178.89.65:42116.service - OpenSSH per-connection server daemon (139.178.89.65:42116). Sep 4 17:18:34.248388 systemd-logind[1987]: Removed session 12. Sep 4 17:18:34.388062 systemd-networkd[1926]: calib8ee3a86b7c: Gained IPv6LL Sep 4 17:18:34.497299 sshd[5456]: Accepted publickey for core from 139.178.89.65 port 42116 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:34.503428 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.335 [INFO][5448] k8s.go 608: Cleaning up netns ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.342 [INFO][5448] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" iface="eth0" netns="/var/run/netns/cni-d5fe6590-79e5-3a4a-848a-8d37ae5b6a71" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.349 [INFO][5448] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" iface="eth0" netns="/var/run/netns/cni-d5fe6590-79e5-3a4a-848a-8d37ae5b6a71" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.353 [INFO][5448] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" iface="eth0" netns="/var/run/netns/cni-d5fe6590-79e5-3a4a-848a-8d37ae5b6a71" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.354 [INFO][5448] k8s.go 615: Releasing IP address(es) ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.354 [INFO][5448] utils.go 188: Calico CNI releasing IP address ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.466 [INFO][5461] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.466 [INFO][5461] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.466 [INFO][5461] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.489 [WARNING][5461] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.489 [INFO][5461] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.492 [INFO][5461] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:34.517833 containerd[2004]: 2024-09-04 17:18:34.509 [INFO][5448] k8s.go 621: Teardown processing complete. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:34.524309 containerd[2004]: time="2024-09-04T17:18:34.516508501Z" level=info msg="TearDown network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" successfully" Sep 4 17:18:34.524309 containerd[2004]: time="2024-09-04T17:18:34.519193309Z" level=info msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" returns successfully" Sep 4 17:18:34.528064 systemd-logind[1987]: New session 13 of user core. Sep 4 17:18:34.532467 systemd[1]: run-netns-cni\x2dd5fe6590\x2d79e5\x2d3a4a\x2d848a\x2d8d37ae5b6a71.mount: Deactivated successfully. Sep 4 17:18:34.543140 systemd[1]: Started session-13.scope - Session 13 of User core. 
Sep 4 17:18:34.549490 containerd[2004]: time="2024-09-04T17:18:34.548310097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ncgw4,Uid:3514e90f-33f1-43e8-8b04-bc96331f0618,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:35.011391 sshd[5456]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:35.024842 systemd[1]: sshd@12-172.31.22.219:22-139.178.89.65:42116.service: Deactivated successfully. Sep 4 17:18:35.034954 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:18:35.039703 systemd-logind[1987]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:18:35.063485 systemd[1]: Started sshd@13-172.31.22.219:22-139.178.89.65:42128.service - OpenSSH per-connection server daemon (139.178.89.65:42128). Sep 4 17:18:35.071262 systemd-logind[1987]: Removed session 13. Sep 4 17:18:35.153059 systemd-networkd[1926]: cali9630875abdc: Gained IPv6LL Sep 4 17:18:35.208268 systemd-networkd[1926]: calid2734644c25: Link UP Sep 4 17:18:35.208770 systemd-networkd[1926]: calid2734644c25: Gained carrier Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.781 [INFO][5470] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0 coredns-76f75df574- kube-system 3514e90f-33f1-43e8-8b04-bc96331f0618 922 0 2024-09-04 17:17:55 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-219 coredns-76f75df574-ncgw4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2734644c25 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-" Sep 4 17:18:35.261944 
containerd[2004]: 2024-09-04 17:18:34.784 [INFO][5470] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.941 [INFO][5487] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" HandleID="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.967 [INFO][5487] ipam_plugin.go 270: Auto assigning IP ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" HandleID="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004dec50), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-219", "pod":"coredns-76f75df574-ncgw4", "timestamp":"2024-09-04 17:18:34.941455047 +0000 UTC"}, Hostname:"ip-172-31-22-219", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.969 [INFO][5487] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.969 [INFO][5487] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.969 [INFO][5487] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-219' Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.978 [INFO][5487] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:34.997 [INFO][5487] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.023 [INFO][5487] ipam.go 489: Trying affinity for 192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.034 [INFO][5487] ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.098 [INFO][5487] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.099 [INFO][5487] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.122 [INFO][5487] ipam.go 1685: Creating new handle: k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59 Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.148 [INFO][5487] ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.176 [INFO][5487] ipam.go 1216: Successfully claimed IPs: [192.168.48.4/26] block=192.168.48.0/26 
handle="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.176 [INFO][5487] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.4/26] handle="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" host="ip-172-31-22-219" Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.176 [INFO][5487] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:35.261944 containerd[2004]: 2024-09-04 17:18:35.177 [INFO][5487] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.48.4/26] IPv6=[] ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" HandleID="k8s-pod-network.0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.189 [INFO][5470] k8s.go 386: Populated endpoint ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3514e90f-33f1-43e8-8b04-bc96331f0618", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"", Pod:"coredns-76f75df574-ncgw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2734644c25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.191 [INFO][5470] k8s.go 387: Calico CNI using IPs: [192.168.48.4/32] ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.192 [INFO][5470] dataplane_linux.go 68: Setting the host side veth name to calid2734644c25 ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.208 [INFO][5470] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" 
WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.211 [INFO][5470] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3514e90f-33f1-43e8-8b04-bc96331f0618", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59", Pod:"coredns-76f75df574-ncgw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2734644c25", MAC:"1e:4a:97:a3:07:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:35.264515 containerd[2004]: 2024-09-04 17:18:35.246 [INFO][5470] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59" Namespace="kube-system" Pod="coredns-76f75df574-ncgw4" WorkloadEndpoint="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:35.286392 sshd[5496]: Accepted publickey for core from 139.178.89.65 port 42128 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:35.291112 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:35.311950 systemd-logind[1987]: New session 14 of user core. Sep 4 17:18:35.318139 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:18:35.355884 containerd[2004]: time="2024-09-04T17:18:35.354594481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:35.355884 containerd[2004]: time="2024-09-04T17:18:35.354935953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:35.355884 containerd[2004]: time="2024-09-04T17:18:35.354975361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:35.355884 containerd[2004]: time="2024-09-04T17:18:35.355136977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:35.418124 systemd[1]: Started cri-containerd-0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59.scope - libcontainer container 0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59. Sep 4 17:18:35.593119 containerd[2004]: time="2024-09-04T17:18:35.592954430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ncgw4,Uid:3514e90f-33f1-43e8-8b04-bc96331f0618,Namespace:kube-system,Attempt:1,} returns sandbox id \"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59\"" Sep 4 17:18:35.613434 containerd[2004]: time="2024-09-04T17:18:35.612575462Z" level=info msg="CreateContainer within sandbox \"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:35.676852 containerd[2004]: time="2024-09-04T17:18:35.673897034Z" level=info msg="CreateContainer within sandbox \"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"60f5e4b391f5e17789ed9c583e9e4f957574acdf372f858c377d7d5bf28916cf\"" Sep 4 17:18:35.687688 containerd[2004]: time="2024-09-04T17:18:35.685419002Z" level=info msg="StartContainer for \"60f5e4b391f5e17789ed9c583e9e4f957574acdf372f858c377d7d5bf28916cf\"" Sep 4 17:18:35.709038 sshd[5496]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:35.718816 systemd[1]: sshd@13-172.31.22.219:22-139.178.89.65:42128.service: Deactivated successfully. Sep 4 17:18:35.724348 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:18:35.731275 systemd-logind[1987]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:18:35.736079 systemd-logind[1987]: Removed session 14. 
Sep 4 17:18:35.792393 systemd[1]: Started cri-containerd-60f5e4b391f5e17789ed9c583e9e4f957574acdf372f858c377d7d5bf28916cf.scope - libcontainer container 60f5e4b391f5e17789ed9c583e9e4f957574acdf372f858c377d7d5bf28916cf. Sep 4 17:18:35.864521 containerd[2004]: time="2024-09-04T17:18:35.864335835Z" level=info msg="StartContainer for \"60f5e4b391f5e17789ed9c583e9e4f957574acdf372f858c377d7d5bf28916cf\" returns successfully" Sep 4 17:18:36.110826 containerd[2004]: time="2024-09-04T17:18:36.110746044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.112449 containerd[2004]: time="2024-09-04T17:18:36.112366584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:18:36.113210 containerd[2004]: time="2024-09-04T17:18:36.113103108Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.119486 containerd[2004]: time="2024-09-04T17:18:36.119325336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:36.122554 containerd[2004]: time="2024-09-04T17:18:36.122372076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.388832872s" Sep 4 17:18:36.122554 containerd[2004]: time="2024-09-04T17:18:36.122432160Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:18:36.125022 containerd[2004]: time="2024-09-04T17:18:36.124961425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:18:36.137183 containerd[2004]: time="2024-09-04T17:18:36.137120077Z" level=info msg="CreateContainer within sandbox \"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:18:36.154371 containerd[2004]: time="2024-09-04T17:18:36.154124953Z" level=info msg="CreateContainer within sandbox \"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f0f2226f737b81b8fbfd579e74dedc9fa1a794612b7df4967240790880cb6175\"" Sep 4 17:18:36.156800 containerd[2004]: time="2024-09-04T17:18:36.156460393Z" level=info msg="StartContainer for \"f0f2226f737b81b8fbfd579e74dedc9fa1a794612b7df4967240790880cb6175\"" Sep 4 17:18:36.211119 systemd[1]: Started cri-containerd-f0f2226f737b81b8fbfd579e74dedc9fa1a794612b7df4967240790880cb6175.scope - libcontainer container f0f2226f737b81b8fbfd579e74dedc9fa1a794612b7df4967240790880cb6175. 
Sep 4 17:18:36.286872 containerd[2004]: time="2024-09-04T17:18:36.286738165Z" level=info msg="StartContainer for \"f0f2226f737b81b8fbfd579e74dedc9fa1a794612b7df4967240790880cb6175\" returns successfully" Sep 4 17:18:36.643591 kubelet[3460]: I0904 17:18:36.643501 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85bf876798-6rhxn" podStartSLOduration=27.249560446 podStartE2EDuration="30.643387623s" podCreationTimestamp="2024-09-04 17:18:06 +0000 UTC" firstStartedPulling="2024-09-04 17:18:32.729872604 +0000 UTC m=+49.960172794" lastFinishedPulling="2024-09-04 17:18:36.123699709 +0000 UTC m=+53.353999971" observedRunningTime="2024-09-04 17:18:36.637320351 +0000 UTC m=+53.867620541" watchObservedRunningTime="2024-09-04 17:18:36.643387623 +0000 UTC m=+53.873687813" Sep 4 17:18:37.074583 systemd-networkd[1926]: calid2734644c25: Gained IPv6LL Sep 4 17:18:37.685260 containerd[2004]: time="2024-09-04T17:18:37.684846808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.689134 containerd[2004]: time="2024-09-04T17:18:37.689019340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:18:37.691854 containerd[2004]: time="2024-09-04T17:18:37.690998080Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.696172 containerd[2004]: time="2024-09-04T17:18:37.696110128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:37.699107 containerd[2004]: time="2024-09-04T17:18:37.699046180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" 
with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.574020603s" Sep 4 17:18:37.699322 containerd[2004]: time="2024-09-04T17:18:37.699290164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:18:37.709773 containerd[2004]: time="2024-09-04T17:18:37.709559980Z" level=info msg="CreateContainer within sandbox \"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:18:37.763889 containerd[2004]: time="2024-09-04T17:18:37.763827797Z" level=info msg="CreateContainer within sandbox \"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f31703d66bcb9682d6607b29861fa0ebd4a32a883ba674cf8d97c49ee82cb008\"" Sep 4 17:18:37.765633 containerd[2004]: time="2024-09-04T17:18:37.765060821Z" level=info msg="StartContainer for \"f31703d66bcb9682d6607b29861fa0ebd4a32a883ba674cf8d97c49ee82cb008\"" Sep 4 17:18:37.866278 systemd[1]: Started cri-containerd-f31703d66bcb9682d6607b29861fa0ebd4a32a883ba674cf8d97c49ee82cb008.scope - libcontainer container f31703d66bcb9682d6607b29861fa0ebd4a32a883ba674cf8d97c49ee82cb008. 
Sep 4 17:18:37.906248 kubelet[3460]: I0904 17:18:37.905600 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ncgw4" podStartSLOduration=42.905515913 podStartE2EDuration="42.905515913s" podCreationTimestamp="2024-09-04 17:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:36.680951607 +0000 UTC m=+53.911251809" watchObservedRunningTime="2024-09-04 17:18:37.905515913 +0000 UTC m=+55.135816115" Sep 4 17:18:37.983878 containerd[2004]: time="2024-09-04T17:18:37.983814078Z" level=info msg="StartContainer for \"f31703d66bcb9682d6607b29861fa0ebd4a32a883ba674cf8d97c49ee82cb008\" returns successfully" Sep 4 17:18:37.987895 containerd[2004]: time="2024-09-04T17:18:37.987816378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:18:39.482992 containerd[2004]: time="2024-09-04T17:18:39.482763269Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.485066 containerd[2004]: time="2024-09-04T17:18:39.484098077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:18:39.486777 containerd[2004]: time="2024-09-04T17:18:39.485906405Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.493582 containerd[2004]: time="2024-09-04T17:18:39.493522097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:39.496615 containerd[2004]: time="2024-09-04T17:18:39.496555805Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.508380219s" Sep 4 17:18:39.496858 containerd[2004]: time="2024-09-04T17:18:39.496824329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:18:39.501285 containerd[2004]: time="2024-09-04T17:18:39.501230489Z" level=info msg="CreateContainer within sandbox \"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:18:39.542822 containerd[2004]: time="2024-09-04T17:18:39.539302757Z" level=info msg="CreateContainer within sandbox \"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0\"" Sep 4 17:18:39.543284 containerd[2004]: time="2024-09-04T17:18:39.543240773Z" level=info msg="StartContainer for \"66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0\"" Sep 4 17:18:39.552308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2045287373.mount: Deactivated successfully. Sep 4 17:18:39.621423 systemd[1]: run-containerd-runc-k8s.io-66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0-runc.sskSA8.mount: Deactivated successfully. Sep 4 17:18:39.633486 systemd[1]: Started cri-containerd-66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0.scope - libcontainer container 66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0. 
Sep 4 17:18:39.674316 ntpd[1980]: Listen normally on 8 vxlan.calico 192.168.48.0:123 Sep 4 17:18:39.674458 ntpd[1980]: Listen normally on 9 vxlan.calico [fe80::64f0:28ff:fe3a:af8c%4]:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 8 vxlan.calico 192.168.48.0:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 9 vxlan.calico [fe80::64f0:28ff:fe3a:af8c%4]:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 10 cali082e0eb6787 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 11 calib8ee3a86b7c [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 12 cali9630875abdc [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:18:39.676241 ntpd[1980]: 4 Sep 17:18:39 ntpd[1980]: Listen normally on 13 calid2734644c25 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:18:39.674567 ntpd[1980]: Listen normally on 10 cali082e0eb6787 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:18:39.674644 ntpd[1980]: Listen normally on 11 calib8ee3a86b7c [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:18:39.674718 ntpd[1980]: Listen normally on 12 cali9630875abdc [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:18:39.674832 ntpd[1980]: Listen normally on 13 calid2734644c25 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:18:39.758928 containerd[2004]: time="2024-09-04T17:18:39.758636227Z" level=info msg="StartContainer for \"66af49ade190b7d18e0faf27eeabe2853bf2b4e42a77a0e16d8841e7575bd7a0\" returns successfully" Sep 4 17:18:40.250205 kubelet[3460]: I0904 17:18:40.249827 3460 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:18:40.250205 kubelet[3460]: I0904 17:18:40.249873 3460 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: 
/var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:18:40.752154 systemd[1]: Started sshd@14-172.31.22.219:22-139.178.89.65:57950.service - OpenSSH per-connection server daemon (139.178.89.65:57950). Sep 4 17:18:40.943131 sshd[5760]: Accepted publickey for core from 139.178.89.65 port 57950 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:40.949036 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:40.964628 systemd-logind[1987]: New session 15 of user core. Sep 4 17:18:40.971217 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:18:41.291733 sshd[5760]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:41.301541 systemd[1]: sshd@14-172.31.22.219:22-139.178.89.65:57950.service: Deactivated successfully. Sep 4 17:18:41.309350 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:18:41.313633 systemd-logind[1987]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:18:41.318142 systemd-logind[1987]: Removed session 15. 
Sep 4 17:18:42.981261 containerd[2004]: time="2024-09-04T17:18:42.981156443Z" level=info msg="StopPodSandbox for \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\"" Sep 4 17:18:42.982072 containerd[2004]: time="2024-09-04T17:18:42.981329639Z" level=info msg="TearDown network for sandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" successfully" Sep 4 17:18:42.982072 containerd[2004]: time="2024-09-04T17:18:42.981360875Z" level=info msg="StopPodSandbox for \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" returns successfully" Sep 4 17:18:42.982899 containerd[2004]: time="2024-09-04T17:18:42.982489787Z" level=info msg="RemovePodSandbox for \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\"" Sep 4 17:18:42.982899 containerd[2004]: time="2024-09-04T17:18:42.982542899Z" level=info msg="Forcibly stopping sandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\"" Sep 4 17:18:42.982899 containerd[2004]: time="2024-09-04T17:18:42.982661555Z" level=info msg="TearDown network for sandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" successfully" Sep 4 17:18:42.987626 containerd[2004]: time="2024-09-04T17:18:42.987539339Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:18:42.988337 containerd[2004]: time="2024-09-04T17:18:42.987672239Z" level=info msg="RemovePodSandbox \"829841a450640eb5e7ae1c2a429488c221247ba032cd0b0f26688e3f5f37ae46\" returns successfully" Sep 4 17:18:42.989084 containerd[2004]: time="2024-09-04T17:18:42.988864403Z" level=info msg="StopPodSandbox for \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\"" Sep 4 17:18:42.989084 containerd[2004]: time="2024-09-04T17:18:42.989022071Z" level=info msg="TearDown network for sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" successfully" Sep 4 17:18:42.989084 containerd[2004]: time="2024-09-04T17:18:42.989049071Z" level=info msg="StopPodSandbox for \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" returns successfully" Sep 4 17:18:42.991930 containerd[2004]: time="2024-09-04T17:18:42.990208163Z" level=info msg="RemovePodSandbox for \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\"" Sep 4 17:18:42.991930 containerd[2004]: time="2024-09-04T17:18:42.990269339Z" level=info msg="Forcibly stopping sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\"" Sep 4 17:18:42.991930 containerd[2004]: time="2024-09-04T17:18:42.990404531Z" level=info msg="TearDown network for sandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" successfully" Sep 4 17:18:42.996190 containerd[2004]: time="2024-09-04T17:18:42.996111707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:18:42.996581 containerd[2004]: time="2024-09-04T17:18:42.996534083Z" level=info msg="RemovePodSandbox \"6b61e944f2f2a52069a0d4f1d754bc00678cefe09e122e00aa5dd89a87fdecda\" returns successfully" Sep 4 17:18:42.997992 containerd[2004]: time="2024-09-04T17:18:42.997761635Z" level=info msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.136 [WARNING][5784] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0", GenerateName:"calico-kube-controllers-85bf876798-", Namespace:"calico-system", SelfLink:"", UID:"d8823db7-aaef-4420-b78a-28c3c358b6ff", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bf876798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b", Pod:"calico-kube-controllers-85bf876798-6rhxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8ee3a86b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.137 [INFO][5784] k8s.go 608: Cleaning up netns ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.137 [INFO][5784] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" iface="eth0" netns="" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.137 [INFO][5784] k8s.go 615: Releasing IP address(es) ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.137 [INFO][5784] utils.go 188: Calico CNI releasing IP address ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.236 [INFO][5792] ipam_plugin.go 417: Releasing address using handleID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.236 [INFO][5792] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.237 [INFO][5792] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.253 [WARNING][5792] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.255 [INFO][5792] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.259 [INFO][5792] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:43.266396 containerd[2004]: 2024-09-04 17:18:43.261 [INFO][5784] k8s.go 621: Teardown processing complete. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.269366 containerd[2004]: time="2024-09-04T17:18:43.268676168Z" level=info msg="TearDown network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" successfully" Sep 4 17:18:43.269366 containerd[2004]: time="2024-09-04T17:18:43.268726832Z" level=info msg="StopPodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" returns successfully" Sep 4 17:18:43.270002 containerd[2004]: time="2024-09-04T17:18:43.269929592Z" level=info msg="RemovePodSandbox for \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" Sep 4 17:18:43.270002 containerd[2004]: time="2024-09-04T17:18:43.269982176Z" level=info msg="Forcibly stopping sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\"" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.370 [WARNING][5810] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0", GenerateName:"calico-kube-controllers-85bf876798-", Namespace:"calico-system", SelfLink:"", UID:"d8823db7-aaef-4420-b78a-28c3c358b6ff", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85bf876798", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"99ad8240a3d28f4e6da652d84538e74f0240d7a1cedebb9b91939883b4e7eb3b", Pod:"calico-kube-controllers-85bf876798-6rhxn", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8ee3a86b7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.371 [INFO][5810] k8s.go 608: Cleaning up netns ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.371 [INFO][5810] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" iface="eth0" netns="" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.371 [INFO][5810] k8s.go 615: Releasing IP address(es) ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.371 [INFO][5810] utils.go 188: Calico CNI releasing IP address ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.421 [INFO][5816] ipam_plugin.go 417: Releasing address using handleID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.422 [INFO][5816] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.422 [INFO][5816] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.437 [WARNING][5816] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.438 [INFO][5816] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" HandleID="k8s-pod-network.8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Workload="ip--172--31--22--219-k8s-calico--kube--controllers--85bf876798--6rhxn-eth0" Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.440 [INFO][5816] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:43.446189 containerd[2004]: 2024-09-04 17:18:43.443 [INFO][5810] k8s.go 621: Teardown processing complete. ContainerID="8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a" Sep 4 17:18:43.447054 containerd[2004]: time="2024-09-04T17:18:43.446230437Z" level=info msg="TearDown network for sandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" successfully" Sep 4 17:18:43.453600 containerd[2004]: time="2024-09-04T17:18:43.452303181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:18:43.453600 containerd[2004]: time="2024-09-04T17:18:43.452403189Z" level=info msg="RemovePodSandbox \"8e5acf5ed728435d7f5e78bbd39b7335fa3cbc83ea67b3399c4e231ee703164a\" returns successfully" Sep 4 17:18:43.453600 containerd[2004]: time="2024-09-04T17:18:43.453117729Z" level=info msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.532 [WARNING][5835] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88b22ba2-b48a-4946-b09c-791f6c8e7d48", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f", Pod:"csi-node-driver-j8t28", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali9630875abdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.532 [INFO][5835] k8s.go 608: Cleaning up netns ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.532 [INFO][5835] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" iface="eth0" netns="" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.532 [INFO][5835] k8s.go 615: Releasing IP address(es) ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.532 [INFO][5835] utils.go 188: Calico CNI releasing IP address ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.593 [INFO][5841] ipam_plugin.go 417: Releasing address using handleID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.593 [INFO][5841] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.593 [INFO][5841] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.609 [WARNING][5841] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.609 [INFO][5841] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.611 [INFO][5841] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:43.617199 containerd[2004]: 2024-09-04 17:18:43.614 [INFO][5835] k8s.go 621: Teardown processing complete. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.618986 containerd[2004]: time="2024-09-04T17:18:43.617387074Z" level=info msg="TearDown network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" successfully" Sep 4 17:18:43.618986 containerd[2004]: time="2024-09-04T17:18:43.617429878Z" level=info msg="StopPodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" returns successfully" Sep 4 17:18:43.619778 containerd[2004]: time="2024-09-04T17:18:43.619729378Z" level=info msg="RemovePodSandbox for \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" Sep 4 17:18:43.619778 containerd[2004]: time="2024-09-04T17:18:43.619814518Z" level=info msg="Forcibly stopping sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\"" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.684 [WARNING][5859] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88b22ba2-b48a-4946-b09c-791f6c8e7d48", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"ce8f41f689749a5a4e8afe59d1d11f493bd3da95d0f49d86db07b15856748b6f", Pod:"csi-node-driver-j8t28", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9630875abdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.685 [INFO][5859] k8s.go 608: Cleaning up netns ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.685 [INFO][5859] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" iface="eth0" netns="" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.685 [INFO][5859] k8s.go 615: Releasing IP address(es) ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.685 [INFO][5859] utils.go 188: Calico CNI releasing IP address ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.726 [INFO][5865] ipam_plugin.go 417: Releasing address using handleID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.727 [INFO][5865] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.727 [INFO][5865] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.739 [WARNING][5865] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.739 [INFO][5865] ipam_plugin.go 445: Releasing address using workloadID ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" HandleID="k8s-pod-network.a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Workload="ip--172--31--22--219-k8s-csi--node--driver--j8t28-eth0" Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.741 [INFO][5865] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:43.746221 containerd[2004]: 2024-09-04 17:18:43.744 [INFO][5859] k8s.go 621: Teardown processing complete. ContainerID="a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888" Sep 4 17:18:43.747650 containerd[2004]: time="2024-09-04T17:18:43.746258422Z" level=info msg="TearDown network for sandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" successfully" Sep 4 17:18:43.751505 containerd[2004]: time="2024-09-04T17:18:43.751428622Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:18:43.752718 containerd[2004]: time="2024-09-04T17:18:43.751527490Z" level=info msg="RemovePodSandbox \"a057a10ed6d69fe1ee36b23660e3a630c5ce76159e92db765423b0056fe08888\" returns successfully" Sep 4 17:18:43.753843 containerd[2004]: time="2024-09-04T17:18:43.753377710Z" level=info msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.820 [WARNING][5884] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"408c6ec0-ae37-46f2-8a2e-26492274c3b6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311", Pod:"coredns-76f75df574-cwjsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082e0eb6787", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.821 [INFO][5884] k8s.go 608: Cleaning up netns ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.821 [INFO][5884] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" iface="eth0" netns="" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.821 [INFO][5884] k8s.go 615: Releasing IP address(es) ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.821 [INFO][5884] utils.go 188: Calico CNI releasing IP address ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.857 [INFO][5890] ipam_plugin.go 417: Releasing address using handleID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.857 [INFO][5890] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.857 [INFO][5890] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.871 [WARNING][5890] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.871 [INFO][5890] ipam_plugin.go 445: Releasing address using workloadID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.874 [INFO][5890] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:43.879315 containerd[2004]: 2024-09-04 17:18:43.876 [INFO][5884] k8s.go 621: Teardown processing complete. 
ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:43.879315 containerd[2004]: time="2024-09-04T17:18:43.879272207Z" level=info msg="TearDown network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" successfully" Sep 4 17:18:43.881918 containerd[2004]: time="2024-09-04T17:18:43.879328079Z" level=info msg="StopPodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" returns successfully" Sep 4 17:18:43.881918 containerd[2004]: time="2024-09-04T17:18:43.880307279Z" level=info msg="RemovePodSandbox for \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" Sep 4 17:18:43.881918 containerd[2004]: time="2024-09-04T17:18:43.880354343Z" level=info msg="Forcibly stopping sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\"" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.944 [WARNING][5908] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"408c6ec0-ae37-46f2-8a2e-26492274c3b6", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"23f4fc8a4ec22d5e18446a0780934322b683cfa4f492968555b9631676638311", Pod:"coredns-76f75df574-cwjsl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali082e0eb6787", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.945 [INFO][5908] k8s.go 608: Cleaning up 
netns ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.945 [INFO][5908] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" iface="eth0" netns="" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.945 [INFO][5908] k8s.go 615: Releasing IP address(es) ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.945 [INFO][5908] utils.go 188: Calico CNI releasing IP address ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.984 [INFO][5914] ipam_plugin.go 417: Releasing address using handleID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.984 [INFO][5914] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.984 [INFO][5914] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.999 [WARNING][5914] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:43.999 [INFO][5914] ipam_plugin.go 445: Releasing address using workloadID ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" HandleID="k8s-pod-network.852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--cwjsl-eth0" Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:44.002 [INFO][5914] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:44.007569 containerd[2004]: 2024-09-04 17:18:44.005 [INFO][5908] k8s.go 621: Teardown processing complete. ContainerID="852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0" Sep 4 17:18:44.009165 containerd[2004]: time="2024-09-04T17:18:44.007649924Z" level=info msg="TearDown network for sandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" successfully" Sep 4 17:18:44.012343 containerd[2004]: time="2024-09-04T17:18:44.012256628Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:18:44.012650 containerd[2004]: time="2024-09-04T17:18:44.012359240Z" level=info msg="RemovePodSandbox \"852edfe29e32fdb0e560172127efa6835156d2d972de77602f7df0d889a123a0\" returns successfully" Sep 4 17:18:44.013573 containerd[2004]: time="2024-09-04T17:18:44.013244000Z" level=info msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.080 [WARNING][5933] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3514e90f-33f1-43e8-8b04-bc96331f0618", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59", Pod:"coredns-76f75df574-ncgw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2734644c25", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.080 [INFO][5933] k8s.go 608: Cleaning up netns ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.081 [INFO][5933] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" iface="eth0" netns="" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.081 [INFO][5933] k8s.go 615: Releasing IP address(es) ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.081 [INFO][5933] utils.go 188: Calico CNI releasing IP address ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.126 [INFO][5939] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.128 [INFO][5939] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.128 [INFO][5939] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.140 [WARNING][5939] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.141 [INFO][5939] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.143 [INFO][5939] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:44.149920 containerd[2004]: 2024-09-04 17:18:44.146 [INFO][5933] k8s.go 621: Teardown processing complete. 
ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.149920 containerd[2004]: time="2024-09-04T17:18:44.149356748Z" level=info msg="TearDown network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" successfully" Sep 4 17:18:44.149920 containerd[2004]: time="2024-09-04T17:18:44.149395124Z" level=info msg="StopPodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" returns successfully" Sep 4 17:18:44.152243 containerd[2004]: time="2024-09-04T17:18:44.151168496Z" level=info msg="RemovePodSandbox for \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" Sep 4 17:18:44.152243 containerd[2004]: time="2024-09-04T17:18:44.151302128Z" level=info msg="Forcibly stopping sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\"" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.220 [WARNING][5957] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"3514e90f-33f1-43e8-8b04-bc96331f0618", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"0e7b1643387edaf068e478619ea827d1728403454178401452e2e67fa0259b59", Pod:"coredns-76f75df574-ncgw4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2734644c25", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.220 [INFO][5957] k8s.go 608: Cleaning up 
netns ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.220 [INFO][5957] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" iface="eth0" netns="" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.221 [INFO][5957] k8s.go 615: Releasing IP address(es) ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.221 [INFO][5957] utils.go 188: Calico CNI releasing IP address ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.265 [INFO][5963] ipam_plugin.go 417: Releasing address using handleID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.265 [INFO][5963] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.265 [INFO][5963] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.278 [WARNING][5963] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.278 [INFO][5963] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" HandleID="k8s-pod-network.dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Workload="ip--172--31--22--219-k8s-coredns--76f75df574--ncgw4-eth0" Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.282 [INFO][5963] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:44.287376 containerd[2004]: 2024-09-04 17:18:44.284 [INFO][5957] k8s.go 621: Teardown processing complete. ContainerID="dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e" Sep 4 17:18:44.288630 containerd[2004]: time="2024-09-04T17:18:44.287425101Z" level=info msg="TearDown network for sandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" successfully" Sep 4 17:18:44.293538 containerd[2004]: time="2024-09-04T17:18:44.293437533Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:18:44.294640 containerd[2004]: time="2024-09-04T17:18:44.294228381Z" level=info msg="RemovePodSandbox \"dfbc4f470daaf5a26737bf190ffcd2c34a934c7cc4c1a059535d73d4a022560e\" returns successfully" Sep 4 17:18:46.335331 systemd[1]: Started sshd@15-172.31.22.219:22-139.178.89.65:57956.service - OpenSSH per-connection server daemon (139.178.89.65:57956). 
Sep 4 17:18:46.523686 sshd[5972]: Accepted publickey for core from 139.178.89.65 port 57956 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:46.526982 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:46.535912 systemd-logind[1987]: New session 16 of user core. Sep 4 17:18:46.543070 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:18:46.799081 sshd[5972]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:46.804072 systemd-logind[1987]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:18:46.805077 systemd[1]: sshd@15-172.31.22.219:22-139.178.89.65:57956.service: Deactivated successfully. Sep 4 17:18:46.808482 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:18:46.813360 systemd-logind[1987]: Removed session 16. Sep 4 17:18:51.222136 kubelet[3460]: I0904 17:18:51.221144 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-j8t28" podStartSLOduration=41.42361657 podStartE2EDuration="47.221068731s" podCreationTimestamp="2024-09-04 17:18:04 +0000 UTC" firstStartedPulling="2024-09-04 17:18:33.699731076 +0000 UTC m=+50.930031266" lastFinishedPulling="2024-09-04 17:18:39.497183237 +0000 UTC m=+56.727483427" observedRunningTime="2024-09-04 17:18:40.687398011 +0000 UTC m=+57.917698237" watchObservedRunningTime="2024-09-04 17:18:51.221068731 +0000 UTC m=+68.451368921" Sep 4 17:18:51.841362 systemd[1]: Started sshd@16-172.31.22.219:22-139.178.89.65:47268.service - OpenSSH per-connection server daemon (139.178.89.65:47268). Sep 4 17:18:52.020770 sshd[6054]: Accepted publickey for core from 139.178.89.65 port 47268 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:52.023844 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:52.033383 systemd-logind[1987]: New session 17 of user core. 
Sep 4 17:18:52.041101 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:18:52.294265 sshd[6054]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:52.302010 systemd[1]: sshd@16-172.31.22.219:22-139.178.89.65:47268.service: Deactivated successfully. Sep 4 17:18:52.306367 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:18:52.308488 systemd-logind[1987]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:18:52.311684 systemd-logind[1987]: Removed session 17. Sep 4 17:18:56.745991 kubelet[3460]: I0904 17:18:56.745926 3460 topology_manager.go:215] "Topology Admit Handler" podUID="a69398e9-5252-47b9-8661-5941ec56b7ad" podNamespace="calico-apiserver" podName="calico-apiserver-7775869cb7-kjlt4" Sep 4 17:18:56.766261 systemd[1]: Created slice kubepods-besteffort-poda69398e9_5252_47b9_8661_5941ec56b7ad.slice - libcontainer container kubepods-besteffort-poda69398e9_5252_47b9_8661_5941ec56b7ad.slice. Sep 4 17:18:56.815723 kubelet[3460]: I0904 17:18:56.815209 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a69398e9-5252-47b9-8661-5941ec56b7ad-calico-apiserver-certs\") pod \"calico-apiserver-7775869cb7-kjlt4\" (UID: \"a69398e9-5252-47b9-8661-5941ec56b7ad\") " pod="calico-apiserver/calico-apiserver-7775869cb7-kjlt4" Sep 4 17:18:56.815723 kubelet[3460]: I0904 17:18:56.815292 3460 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmz6k\" (UniqueName: \"kubernetes.io/projected/a69398e9-5252-47b9-8661-5941ec56b7ad-kube-api-access-nmz6k\") pod \"calico-apiserver-7775869cb7-kjlt4\" (UID: \"a69398e9-5252-47b9-8661-5941ec56b7ad\") " pod="calico-apiserver/calico-apiserver-7775869cb7-kjlt4" Sep 4 17:18:56.917312 kubelet[3460]: E0904 17:18:56.916900 3460 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret 
"calico-apiserver-certs" not found Sep 4 17:18:56.917312 kubelet[3460]: E0904 17:18:56.917007 3460 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a69398e9-5252-47b9-8661-5941ec56b7ad-calico-apiserver-certs podName:a69398e9-5252-47b9-8661-5941ec56b7ad nodeName:}" failed. No retries permitted until 2024-09-04 17:18:57.416980588 +0000 UTC m=+74.647280766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a69398e9-5252-47b9-8661-5941ec56b7ad-calico-apiserver-certs") pod "calico-apiserver-7775869cb7-kjlt4" (UID: "a69398e9-5252-47b9-8661-5941ec56b7ad") : secret "calico-apiserver-certs" not found Sep 4 17:18:57.339331 systemd[1]: Started sshd@17-172.31.22.219:22-139.178.89.65:47274.service - OpenSSH per-connection server daemon (139.178.89.65:47274). Sep 4 17:18:57.527223 sshd[6076]: Accepted publickey for core from 139.178.89.65 port 47274 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:57.530257 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:57.540448 systemd-logind[1987]: New session 18 of user core. Sep 4 17:18:57.550158 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:18:57.673950 containerd[2004]: time="2024-09-04T17:18:57.673554432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7775869cb7-kjlt4,Uid:a69398e9-5252-47b9-8661-5941ec56b7ad,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:18:57.925370 sshd[6076]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:57.936466 systemd[1]: sshd@17-172.31.22.219:22-139.178.89.65:47274.service: Deactivated successfully. Sep 4 17:18:57.948854 systemd[1]: session-18.scope: Deactivated successfully. Sep 4 17:18:57.956004 systemd-logind[1987]: Session 18 logged out. Waiting for processes to exit. 
Sep 4 17:18:57.987888 systemd[1]: Started sshd@18-172.31.22.219:22-139.178.89.65:37020.service - OpenSSH per-connection server daemon (139.178.89.65:37020). Sep 4 17:18:57.990535 systemd-logind[1987]: Removed session 18. Sep 4 17:18:58.098227 systemd-networkd[1926]: cali71a8a08051a: Link UP Sep 4 17:18:58.103117 systemd-networkd[1926]: cali71a8a08051a: Gained carrier Sep 4 17:18:58.105329 (udev-worker)[6111]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:57.841 [INFO][6087] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0 calico-apiserver-7775869cb7- calico-apiserver a69398e9-5252-47b9-8661-5941ec56b7ad 1114 0 2024-09-04 17:18:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7775869cb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-219 calico-apiserver-7775869cb7-kjlt4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali71a8a08051a [] []}} ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:57.841 [INFO][6087] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:57.950 [INFO][6098] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" HandleID="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Workload="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.009 [INFO][6098] ipam_plugin.go 270: Auto assigning IP ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" HandleID="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Workload="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-219", "pod":"calico-apiserver-7775869cb7-kjlt4", "timestamp":"2024-09-04 17:18:57.950115865 +0000 UTC"}, Hostname:"ip-172-31-22-219", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.009 [INFO][6098] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.009 [INFO][6098] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.011 [INFO][6098] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-219' Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.015 [INFO][6098] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.029 [INFO][6098] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.043 [INFO][6098] ipam.go 489: Trying affinity for 192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.048 [INFO][6098] ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.054 [INFO][6098] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.054 [INFO][6098] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.059 [INFO][6098] ipam.go 1685: Creating new handle: k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1 Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.067 [INFO][6098] ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.081 [INFO][6098] ipam.go 1216: Successfully claimed IPs: [192.168.48.5/26] block=192.168.48.0/26 
handle="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.081 [INFO][6098] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.5/26] handle="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" host="ip-172-31-22-219" Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.081 [INFO][6098] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:58.134533 containerd[2004]: 2024-09-04 17:18:58.081 [INFO][6098] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.48.5/26] IPv6=[] ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" HandleID="k8s-pod-network.70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Workload="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.086 [INFO][6087] k8s.go 386: Populated endpoint ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0", GenerateName:"calico-apiserver-7775869cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a69398e9-5252-47b9-8661-5941ec56b7ad", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7775869cb7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"", Pod:"calico-apiserver-7775869cb7-kjlt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71a8a08051a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.086 [INFO][6087] k8s.go 387: Calico CNI using IPs: [192.168.48.5/32] ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.086 [INFO][6087] dataplane_linux.go 68: Setting the host side veth name to cali71a8a08051a ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.101 [INFO][6087] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.104 [INFO][6087] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0", GenerateName:"calico-apiserver-7775869cb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"a69398e9-5252-47b9-8661-5941ec56b7ad", ResourceVersion:"1114", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7775869cb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-219", ContainerID:"70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1", Pod:"calico-apiserver-7775869cb7-kjlt4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.48.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali71a8a08051a", MAC:"a6:56:12:0b:c3:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:58.144906 containerd[2004]: 2024-09-04 17:18:58.126 [INFO][6087] k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1" Namespace="calico-apiserver" Pod="calico-apiserver-7775869cb7-kjlt4" WorkloadEndpoint="ip--172--31--22--219-k8s-calico--apiserver--7775869cb7--kjlt4-eth0" Sep 4 17:18:58.217895 sshd[6107]: Accepted publickey for core from 139.178.89.65 port 37020 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:58.219307 sshd[6107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:58.234004 systemd-logind[1987]: New session 19 of user core. Sep 4 17:18:58.244332 containerd[2004]: time="2024-09-04T17:18:58.244221826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:58.246272 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:18:58.247340 containerd[2004]: time="2024-09-04T17:18:58.244313602Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:58.247340 containerd[2004]: time="2024-09-04T17:18:58.244339546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.247340 containerd[2004]: time="2024-09-04T17:18:58.244484818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:58.319120 systemd[1]: Started cri-containerd-70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1.scope - libcontainer container 70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1. 
Sep 4 17:18:58.389748 containerd[2004]: time="2024-09-04T17:18:58.389545331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7775869cb7-kjlt4,Uid:a69398e9-5252-47b9-8661-5941ec56b7ad,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1\"" Sep 4 17:18:58.395864 containerd[2004]: time="2024-09-04T17:18:58.394978295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:18:58.744018 sshd[6107]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:58.753636 systemd[1]: sshd@18-172.31.22.219:22-139.178.89.65:37020.service: Deactivated successfully. Sep 4 17:18:58.758232 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:18:58.761469 systemd-logind[1987]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:18:58.788409 systemd[1]: Started sshd@19-172.31.22.219:22-139.178.89.65:37028.service - OpenSSH per-connection server daemon (139.178.89.65:37028). Sep 4 17:18:58.793408 systemd-logind[1987]: Removed session 19. Sep 4 17:18:58.976987 sshd[6175]: Accepted publickey for core from 139.178.89.65 port 37028 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:58.979765 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:58.990343 systemd-logind[1987]: New session 20 of user core. Sep 4 17:18:58.996102 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 4 17:18:59.409607 systemd-networkd[1926]: cali71a8a08051a: Gained IPv6LL Sep 4 17:19:01.674736 ntpd[1980]: Listen normally on 14 cali71a8a08051a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:19:01.679902 ntpd[1980]: 4 Sep 17:19:01 ntpd[1980]: Listen normally on 14 cali71a8a08051a [fe80::ecee:eeff:feee:eeee%11]:123 Sep 4 17:19:02.675599 sshd[6175]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:02.692268 systemd-logind[1987]: Session 20 logged out. 
Waiting for processes to exit. Sep 4 17:19:02.693321 systemd[1]: sshd@19-172.31.22.219:22-139.178.89.65:37028.service: Deactivated successfully. Sep 4 17:19:02.705894 systemd[1]: session-20.scope: Deactivated successfully. Sep 4 17:19:02.711291 systemd[1]: session-20.scope: Consumed 1.044s CPU time. Sep 4 17:19:02.759439 systemd[1]: Started sshd@20-172.31.22.219:22-139.178.89.65:37036.service - OpenSSH per-connection server daemon (139.178.89.65:37036). Sep 4 17:19:02.761568 systemd-logind[1987]: Removed session 20. Sep 4 17:19:02.983386 sshd[6202]: Accepted publickey for core from 139.178.89.65 port 37036 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:02.990896 sshd[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:03.012424 systemd-logind[1987]: New session 21 of user core. Sep 4 17:19:03.020696 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 4 17:19:03.590941 containerd[2004]: time="2024-09-04T17:19:03.590694713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:03.595049 containerd[2004]: time="2024-09-04T17:19:03.594289133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Sep 4 17:19:03.598750 containerd[2004]: time="2024-09-04T17:19:03.597371801Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:03.604405 containerd[2004]: time="2024-09-04T17:19:03.604217189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:19:03.606270 containerd[2004]: time="2024-09-04T17:19:03.606102905Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 5.211058922s" Sep 4 17:19:03.606498 containerd[2004]: time="2024-09-04T17:19:03.606466997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Sep 4 17:19:03.616108 containerd[2004]: time="2024-09-04T17:19:03.616050245Z" level=info msg="CreateContainer within sandbox \"70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 4 17:19:03.698654 containerd[2004]: time="2024-09-04T17:19:03.698247137Z" level=info msg="CreateContainer within sandbox \"70fceb1a2cb4f775868b378fdad78762b43a999194db060e8f164bb291f6f0e1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10\"" Sep 4 17:19:03.703819 containerd[2004]: time="2024-09-04T17:19:03.701316845Z" level=info msg="StartContainer for \"162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10\"" Sep 4 17:19:03.716473 sshd[6202]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:03.738330 systemd[1]: sshd@20-172.31.22.219:22-139.178.89.65:37036.service: Deactivated successfully. Sep 4 17:19:03.753287 systemd[1]: session-21.scope: Deactivated successfully. Sep 4 17:19:03.759968 systemd-logind[1987]: Session 21 logged out. Waiting for processes to exit. Sep 4 17:19:03.815318 systemd[1]: Started sshd@21-172.31.22.219:22-139.178.89.65:37046.service - OpenSSH per-connection server daemon (139.178.89.65:37046). 
Sep 4 17:19:03.820855 systemd-logind[1987]: Removed session 21. Sep 4 17:19:03.884242 systemd[1]: run-containerd-runc-k8s.io-162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10-runc.iO0skl.mount: Deactivated successfully. Sep 4 17:19:03.896131 systemd[1]: Started cri-containerd-162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10.scope - libcontainer container 162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10. Sep 4 17:19:03.972018 containerd[2004]: time="2024-09-04T17:19:03.971661223Z" level=info msg="StartContainer for \"162e65e89bd66e139895d51b8e1fbd27ae8a5a18fe66e84c632a0045a7689a10\" returns successfully" Sep 4 17:19:04.034172 sshd[6231]: Accepted publickey for core from 139.178.89.65 port 37046 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:04.038056 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:04.048207 systemd-logind[1987]: New session 22 of user core. Sep 4 17:19:04.058179 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 4 17:19:04.341422 sshd[6231]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:04.350177 systemd[1]: sshd@21-172.31.22.219:22-139.178.89.65:37046.service: Deactivated successfully. Sep 4 17:19:04.357513 systemd[1]: session-22.scope: Deactivated successfully. Sep 4 17:19:04.361736 systemd-logind[1987]: Session 22 logged out. Waiting for processes to exit. Sep 4 17:19:04.363756 systemd-logind[1987]: Removed session 22. 
Sep 4 17:19:04.825567 kubelet[3460]: I0904 17:19:04.825488 3460 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7775869cb7-kjlt4" podStartSLOduration=3.611727397 podStartE2EDuration="8.825392911s" podCreationTimestamp="2024-09-04 17:18:56 +0000 UTC" firstStartedPulling="2024-09-04 17:18:58.393579107 +0000 UTC m=+75.623879297" lastFinishedPulling="2024-09-04 17:19:03.607244621 +0000 UTC m=+80.837544811" observedRunningTime="2024-09-04 17:19:04.824875519 +0000 UTC m=+82.055175793" watchObservedRunningTime="2024-09-04 17:19:04.825392911 +0000 UTC m=+82.055693113" Sep 4 17:19:09.389748 systemd[1]: Started sshd@22-172.31.22.219:22-139.178.89.65:46356.service - OpenSSH per-connection server daemon (139.178.89.65:46356). Sep 4 17:19:09.569444 sshd[6287]: Accepted publickey for core from 139.178.89.65 port 46356 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:09.572242 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:09.581909 systemd-logind[1987]: New session 23 of user core. Sep 4 17:19:09.590618 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 4 17:19:09.872168 sshd[6287]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:09.877978 systemd[1]: sshd@22-172.31.22.219:22-139.178.89.65:46356.service: Deactivated successfully. Sep 4 17:19:09.878376 systemd-logind[1987]: Session 23 logged out. Waiting for processes to exit. Sep 4 17:19:09.883270 systemd[1]: session-23.scope: Deactivated successfully. Sep 4 17:19:09.887981 systemd-logind[1987]: Removed session 23. Sep 4 17:19:14.912017 systemd[1]: Started sshd@23-172.31.22.219:22-139.178.89.65:46370.service - OpenSSH per-connection server daemon (139.178.89.65:46370). 
Sep 4 17:19:15.089689 sshd[6308]: Accepted publickey for core from 139.178.89.65 port 46370 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:15.092490 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:15.101633 systemd-logind[1987]: New session 24 of user core. Sep 4 17:19:15.108228 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 4 17:19:15.371703 sshd[6308]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:15.379136 systemd-logind[1987]: Session 24 logged out. Waiting for processes to exit. Sep 4 17:19:15.379314 systemd[1]: sshd@23-172.31.22.219:22-139.178.89.65:46370.service: Deactivated successfully. Sep 4 17:19:15.383874 systemd[1]: session-24.scope: Deactivated successfully. Sep 4 17:19:15.386770 systemd-logind[1987]: Removed session 24. Sep 4 17:19:20.411278 systemd[1]: Started sshd@24-172.31.22.219:22-139.178.89.65:60202.service - OpenSSH per-connection server daemon (139.178.89.65:60202). Sep 4 17:19:20.597965 sshd[6343]: Accepted publickey for core from 139.178.89.65 port 60202 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:20.601425 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:20.611061 systemd-logind[1987]: New session 25 of user core. Sep 4 17:19:20.621069 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 4 17:19:20.871458 sshd[6343]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:20.878320 systemd[1]: sshd@24-172.31.22.219:22-139.178.89.65:60202.service: Deactivated successfully. Sep 4 17:19:20.883540 systemd[1]: session-25.scope: Deactivated successfully. Sep 4 17:19:20.886491 systemd-logind[1987]: Session 25 logged out. Waiting for processes to exit. Sep 4 17:19:20.889161 systemd-logind[1987]: Removed session 25. 
Sep 4 17:19:25.917374 systemd[1]: Started sshd@25-172.31.22.219:22-139.178.89.65:60208.service - OpenSSH per-connection server daemon (139.178.89.65:60208). Sep 4 17:19:26.119538 sshd[6384]: Accepted publickey for core from 139.178.89.65 port 60208 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:26.123537 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:26.133512 systemd-logind[1987]: New session 26 of user core. Sep 4 17:19:26.138091 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 4 17:19:26.388398 sshd[6384]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:26.395101 systemd[1]: sshd@25-172.31.22.219:22-139.178.89.65:60208.service: Deactivated successfully. Sep 4 17:19:26.399307 systemd[1]: session-26.scope: Deactivated successfully. Sep 4 17:19:26.401987 systemd-logind[1987]: Session 26 logged out. Waiting for processes to exit. Sep 4 17:19:26.404483 systemd-logind[1987]: Removed session 26. Sep 4 17:19:31.428532 systemd[1]: Started sshd@26-172.31.22.219:22-139.178.89.65:41450.service - OpenSSH per-connection server daemon (139.178.89.65:41450). Sep 4 17:19:31.629381 sshd[6403]: Accepted publickey for core from 139.178.89.65 port 41450 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:31.636026 sshd[6403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:31.645908 systemd-logind[1987]: New session 27 of user core. Sep 4 17:19:31.650044 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 4 17:19:31.892585 sshd[6403]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:31.898189 systemd-logind[1987]: Session 27 logged out. Waiting for processes to exit. Sep 4 17:19:31.899472 systemd[1]: sshd@26-172.31.22.219:22-139.178.89.65:41450.service: Deactivated successfully. Sep 4 17:19:31.904491 systemd[1]: session-27.scope: Deactivated successfully. 
Sep 4 17:19:31.908672 systemd-logind[1987]: Removed session 27. Sep 4 17:19:36.933314 systemd[1]: Started sshd@27-172.31.22.219:22-139.178.89.65:41452.service - OpenSSH per-connection server daemon (139.178.89.65:41452). Sep 4 17:19:37.109658 sshd[6416]: Accepted publickey for core from 139.178.89.65 port 41452 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:19:37.112745 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:19:37.121959 systemd-logind[1987]: New session 28 of user core. Sep 4 17:19:37.136078 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 4 17:19:37.376013 sshd[6416]: pam_unix(sshd:session): session closed for user core Sep 4 17:19:37.382776 systemd-logind[1987]: Session 28 logged out. Waiting for processes to exit. Sep 4 17:19:37.384381 systemd[1]: sshd@27-172.31.22.219:22-139.178.89.65:41452.service: Deactivated successfully. Sep 4 17:19:37.387932 systemd[1]: session-28.scope: Deactivated successfully. Sep 4 17:19:37.390392 systemd-logind[1987]: Removed session 28. Sep 4 17:19:51.118271 systemd[1]: run-containerd-runc-k8s.io-32b72fb2f6486cfdfd7b6663b8af291adb84b71cd0fa3a1b869b4bb1884b1742-runc.RGaYRK.mount: Deactivated successfully. Sep 4 17:19:51.480979 systemd[1]: cri-containerd-75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e.scope: Deactivated successfully. Sep 4 17:19:51.483354 systemd[1]: cri-containerd-75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e.scope: Consumed 6.581s CPU time, 21.9M memory peak, 0B memory swap peak. 
Sep 4 17:19:51.532816 containerd[2004]: time="2024-09-04T17:19:51.529585347Z" level=info msg="shim disconnected" id=75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e namespace=k8s.io Sep 4 17:19:51.532816 containerd[2004]: time="2024-09-04T17:19:51.529825251Z" level=warning msg="cleaning up after shim disconnected" id=75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e namespace=k8s.io Sep 4 17:19:51.532816 containerd[2004]: time="2024-09-04T17:19:51.529849899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:51.543254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e-rootfs.mount: Deactivated successfully. Sep 4 17:19:51.595441 systemd[1]: cri-containerd-3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3.scope: Deactivated successfully. Sep 4 17:19:51.596437 systemd[1]: cri-containerd-3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3.scope: Consumed 11.405s CPU time. Sep 4 17:19:51.635611 containerd[2004]: time="2024-09-04T17:19:51.635269288Z" level=info msg="shim disconnected" id=3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3 namespace=k8s.io Sep 4 17:19:51.635611 containerd[2004]: time="2024-09-04T17:19:51.635345380Z" level=warning msg="cleaning up after shim disconnected" id=3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3 namespace=k8s.io Sep 4 17:19:51.635611 containerd[2004]: time="2024-09-04T17:19:51.635367244Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:51.642278 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3-rootfs.mount: Deactivated successfully. 
Sep 4 17:19:51.948178 kubelet[3460]: I0904 17:19:51.947064 3460 scope.go:117] "RemoveContainer" containerID="75fefe193a818a7e5b4d13c9b7a19ad0e3bf1183320c045cc4fc9ad2ee3f152e" Sep 4 17:19:51.954689 kubelet[3460]: I0904 17:19:51.954300 3460 scope.go:117] "RemoveContainer" containerID="3eaabfee59ee3b89ef1c297a5825dc18a1c3fc81294b03dddbd01f832301a6e3" Sep 4 17:19:51.954889 containerd[2004]: time="2024-09-04T17:19:51.954369209Z" level=info msg="CreateContainer within sandbox \"31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Sep 4 17:19:51.959408 containerd[2004]: time="2024-09-04T17:19:51.959143913Z" level=info msg="CreateContainer within sandbox \"9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 4 17:19:51.989635 containerd[2004]: time="2024-09-04T17:19:51.989572673Z" level=info msg="CreateContainer within sandbox \"9fd2dc5d73c5fd8a652e00418f4e533a000484bab3bc9a6bc3995fe8d6555aca\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"83b26a278e901725b781558561dc349045a381f1bec67854a878ae010877cfbb\"" Sep 4 17:19:51.990996 containerd[2004]: time="2024-09-04T17:19:51.990517577Z" level=info msg="StartContainer for \"83b26a278e901725b781558561dc349045a381f1bec67854a878ae010877cfbb\"" Sep 4 17:19:51.993528 containerd[2004]: time="2024-09-04T17:19:51.993279797Z" level=info msg="CreateContainer within sandbox \"31d5f044a3745fcf8ad480287a0f07428c2b9a913d92b696ae62180f9de00ab2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2d1401dfe323c8ffe6f54f10129f04d3ad848943b955f8e87cfabcfd5559648a\"" Sep 4 17:19:51.995206 containerd[2004]: time="2024-09-04T17:19:51.995076209Z" level=info msg="StartContainer for \"2d1401dfe323c8ffe6f54f10129f04d3ad848943b955f8e87cfabcfd5559648a\"" Sep 4 17:19:52.053177 systemd[1]: Started 
cri-containerd-83b26a278e901725b781558561dc349045a381f1bec67854a878ae010877cfbb.scope - libcontainer container 83b26a278e901725b781558561dc349045a381f1bec67854a878ae010877cfbb. Sep 4 17:19:52.066071 systemd[1]: Started cri-containerd-2d1401dfe323c8ffe6f54f10129f04d3ad848943b955f8e87cfabcfd5559648a.scope - libcontainer container 2d1401dfe323c8ffe6f54f10129f04d3ad848943b955f8e87cfabcfd5559648a. Sep 4 17:19:52.115997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1394450385.mount: Deactivated successfully. Sep 4 17:19:52.145704 containerd[2004]: time="2024-09-04T17:19:52.145496942Z" level=info msg="StartContainer for \"83b26a278e901725b781558561dc349045a381f1bec67854a878ae010877cfbb\" returns successfully" Sep 4 17:19:52.177820 containerd[2004]: time="2024-09-04T17:19:52.177646022Z" level=info msg="StartContainer for \"2d1401dfe323c8ffe6f54f10129f04d3ad848943b955f8e87cfabcfd5559648a\" returns successfully" Sep 4 17:19:55.331257 kubelet[3460]: E0904 17:19:55.330674 3460 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Sep 4 17:19:57.163034 systemd[1]: cri-containerd-b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95.scope: Deactivated successfully. Sep 4 17:19:57.163511 systemd[1]: cri-containerd-b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95.scope: Consumed 3.608s CPU time, 16.2M memory peak, 0B memory swap peak. 
Sep 4 17:19:57.214598 containerd[2004]: time="2024-09-04T17:19:57.208211383Z" level=info msg="shim disconnected" id=b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95 namespace=k8s.io Sep 4 17:19:57.214598 containerd[2004]: time="2024-09-04T17:19:57.209918971Z" level=warning msg="cleaning up after shim disconnected" id=b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95 namespace=k8s.io Sep 4 17:19:57.214598 containerd[2004]: time="2024-09-04T17:19:57.209946583Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:19:57.210859 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95-rootfs.mount: Deactivated successfully. Sep 4 17:19:57.983364 kubelet[3460]: I0904 17:19:57.982739 3460 scope.go:117] "RemoveContainer" containerID="b11ecc58bffbae5bbd715217d97039e4132b355448615ec93b9a953c0806bb95" Sep 4 17:19:57.987334 containerd[2004]: time="2024-09-04T17:19:57.987204623Z" level=info msg="CreateContainer within sandbox \"aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Sep 4 17:19:58.013482 containerd[2004]: time="2024-09-04T17:19:58.008875147Z" level=info msg="CreateContainer within sandbox \"aa00979f0bafa152996b9baa3aed3a7dca56ee5663b44b4b1da12776145f5604\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d\"" Sep 4 17:19:58.013482 containerd[2004]: time="2024-09-04T17:19:58.011388943Z" level=info msg="StartContainer for \"ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d\"" Sep 4 17:19:58.082320 systemd[1]: Started cri-containerd-ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d.scope - libcontainer container ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d. 
Sep 4 17:19:58.147602 containerd[2004]: time="2024-09-04T17:19:58.147535520Z" level=info msg="StartContainer for \"ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d\" returns successfully" Sep 4 17:19:58.211286 systemd[1]: run-containerd-runc-k8s.io-ab050fcf61197be9b7d94b04bbc897b0ae0c201a046870584a3ba0b7be8cac5d-runc.kISJYo.mount: Deactivated successfully. Sep 4 17:20:05.332039 kubelet[3460]: E0904 17:20:05.331946 3460 controller.go:195] "Failed to update lease" err="Put \"https://172.31.22.219:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-219?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"