Sep 4 17:16:25.220087 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Sep 4 17:16:25.222088 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Sep 4 15:58:01 -00 2024 Sep 4 17:16:25.222149 kernel: KASLR disabled due to lack of seed Sep 4 17:16:25.222168 kernel: efi: EFI v2.7 by EDK II Sep 4 17:16:25.222186 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18 Sep 4 17:16:25.222203 kernel: ACPI: Early table checksum verification disabled Sep 4 17:16:25.222221 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Sep 4 17:16:25.222238 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Sep 4 17:16:25.222254 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Sep 4 17:16:25.222271 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Sep 4 17:16:25.222294 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Sep 4 17:16:25.222310 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Sep 4 17:16:25.222326 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Sep 4 17:16:25.222343 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Sep 4 17:16:25.222362 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Sep 4 17:16:25.222383 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Sep 4 17:16:25.222401 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Sep 4 17:16:25.222418 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Sep 4 17:16:25.222435 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Sep 4 17:16:25.222452 kernel: printk: bootconsole [uart0] enabled Sep 4 17:16:25.222469 kernel: NUMA: Failed to initialise from firmware Sep 4 17:16:25.222487 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Sep 4 17:16:25.222504 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Sep 4 17:16:25.222521 kernel: Zone ranges: Sep 4 17:16:25.222538 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Sep 4 17:16:25.222555 kernel: DMA32 empty Sep 4 17:16:25.222575 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Sep 4 17:16:25.222593 kernel: Movable zone start for each node Sep 4 17:16:25.222609 kernel: Early memory node ranges Sep 4 17:16:25.222626 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Sep 4 17:16:25.222644 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Sep 4 17:16:25.222661 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Sep 4 17:16:25.222678 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Sep 4 17:16:25.222695 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Sep 4 17:16:25.222714 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Sep 4 17:16:25.222731 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Sep 4 17:16:25.222748 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Sep 4 17:16:25.222766 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Sep 4 17:16:25.222787 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Sep 4 17:16:25.222805 kernel: psci: probing for conduit method from ACPI. Sep 4 17:16:25.222831 kernel: psci: PSCIv1.0 detected in firmware. Sep 4 17:16:25.222849 kernel: psci: Using standard PSCI v0.2 function IDs Sep 4 17:16:25.222867 kernel: psci: Trusted OS migration not required Sep 4 17:16:25.222890 kernel: psci: SMC Calling Convention v1.1 Sep 4 17:16:25.222909 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 4 17:16:25.222927 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 4 17:16:25.222945 kernel: pcpu-alloc: [0] 0 [0] 1 Sep 4 17:16:25.222963 kernel: Detected PIPT I-cache on CPU0 Sep 4 17:16:25.222980 kernel: CPU features: detected: GIC system register CPU interface Sep 4 17:16:25.222999 kernel: CPU features: detected: Spectre-v2 Sep 4 17:16:25.223017 kernel: CPU features: detected: Spectre-v3a Sep 4 17:16:25.223035 kernel: CPU features: detected: Spectre-BHB Sep 4 17:16:25.223053 kernel: CPU features: detected: ARM erratum 1742098 Sep 4 17:16:25.223071 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Sep 4 17:16:25.223153 kernel: alternatives: applying boot alternatives Sep 4 17:16:25.223179 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:16:25.223213 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 4 17:16:25.223245 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 4 17:16:25.223275 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 4 17:16:25.223296 kernel: Fallback order for Node 0: 0 Sep 4 17:16:25.223327 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Sep 4 17:16:25.223357 kernel: Policy zone: Normal Sep 4 17:16:25.223385 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 4 17:16:25.223408 kernel: software IO TLB: area num 2. Sep 4 17:16:25.223437 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Sep 4 17:16:25.223473 kernel: Memory: 3820280K/4030464K available (10240K kernel code, 2184K rwdata, 8084K rodata, 39296K init, 897K bss, 210184K reserved, 0K cma-reserved) Sep 4 17:16:25.223496 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Sep 4 17:16:25.223527 kernel: trace event string verifier disabled Sep 4 17:16:25.223554 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 4 17:16:25.223578 kernel: rcu: RCU event tracing is enabled. Sep 4 17:16:25.223608 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Sep 4 17:16:25.223638 kernel: Trampoline variant of Tasks RCU enabled. Sep 4 17:16:25.223660 kernel: Tracing variant of Tasks RCU enabled. Sep 4 17:16:25.223691 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
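The kernel command line shown above packs the Flatcar boot configuration into key=value pairs (mount.usr, verity.usrhash, flatcar.oem.id, and so on) plus bare flags such as earlycon. A minimal Python sketch of splitting such a line into options and flags; the cmdline string is copied verbatim from the log, while the parser itself is only illustrative and not Flatcar's own code:

cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw "
    "mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 "
    "console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force "
    "flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 "
    "nvme_core.io_timeout=4294967295 "
    "verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9"
)

options = {}   # key=value parameters; repeated keys (e.g. console=) keep every value
flags = []     # bare parameters without '='
for token in cmdline.split():
    if "=" in token:
        key, value = token.split("=", 1)   # only the first '=' separates key from value
        options.setdefault(key, []).append(value)
    else:
        flags.append(token)

print(options["console"])          # ['tty1', 'ttyS0,115200n8']
print(options["verity.usrhash"])   # the dm-verity root hash passed to the initrd
print(flags)                       # ['earlycon']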
Sep 4 17:16:25.223719 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Sep 4 17:16:25.223741 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 4 17:16:25.223776 kernel: GICv3: 96 SPIs implemented Sep 4 17:16:25.223796 kernel: GICv3: 0 Extended SPIs implemented Sep 4 17:16:25.223814 kernel: Root IRQ handler: gic_handle_irq Sep 4 17:16:25.223848 kernel: GICv3: GICv3 features: 16 PPIs Sep 4 17:16:25.223886 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Sep 4 17:16:25.223928 kernel: ITS [mem 0x10080000-0x1009ffff] Sep 4 17:16:25.223964 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Sep 4 17:16:25.224002 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Sep 4 17:16:25.224046 kernel: GICv3: using LPI property table @0x00000004000e0000 Sep 4 17:16:25.224083 kernel: ITS: Using hypervisor restricted LPI range [128] Sep 4 17:16:25.224112 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Sep 4 17:16:25.229253 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 4 17:16:25.229285 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Sep 4 17:16:25.229303 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Sep 4 17:16:25.229322 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Sep 4 17:16:25.229340 kernel: Console: colour dummy device 80x25 Sep 4 17:16:25.229359 kernel: printk: console [tty1] enabled Sep 4 17:16:25.229377 kernel: ACPI: Core revision 20230628 Sep 4 17:16:25.229396 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Sep 4 17:16:25.229415 kernel: pid_max: default: 32768 minimum: 301 Sep 4 17:16:25.229433 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 4 17:16:25.229451 kernel: landlock: Up and running. Sep 4 17:16:25.229474 kernel: SELinux: Initializing. Sep 4 17:16:25.229492 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.229524 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.229547 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:16:25.229566 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Sep 4 17:16:25.229585 kernel: rcu: Hierarchical SRCU implementation. Sep 4 17:16:25.229604 kernel: rcu: Max phase no-delay instances is 400. Sep 4 17:16:25.229622 kernel: Platform MSI: ITS@0x10080000 domain created Sep 4 17:16:25.229641 kernel: PCI/MSI: ITS@0x10080000 domain created Sep 4 17:16:25.229665 kernel: Remapping and enabling EFI services. Sep 4 17:16:25.229683 kernel: smp: Bringing up secondary CPUs ... Sep 4 17:16:25.229702 kernel: Detected PIPT I-cache on CPU1 Sep 4 17:16:25.229720 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Sep 4 17:16:25.229739 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Sep 4 17:16:25.229757 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Sep 4 17:16:25.229775 kernel: smp: Brought up 1 node, 2 CPUs Sep 4 17:16:25.229793 kernel: SMP: Total of 2 processors activated. 
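The timer lines above are internally consistent: the 83.33 MHz architected counter corresponds to the reported 12 ns sched_clock resolution, and the "calculated using timer frequency" BogoMIPS value follows from lpj=83333 if the kernel was built with HZ=1000 (an assumption; the config is not shown in the log). A quick check in Python:

freq_hz = 83.33e6          # arch_timer frequency reported above
lpj = 83333                # loops-per-jiffy from the calibration message
HZ = 1000                  # assumed CONFIG_HZ; not stated in the log

resolution_ns = 1e9 / freq_hz      # duration of one counter tick
bogomips = lpj * HZ / 500_000      # kernel formula: lpj / (500000 / HZ)

print(f"{resolution_ns:.0f} ns")   # 12 ns, matching "resolution 12ns"
print(f"{bogomips:.2f}")           # 166.67; the kernel's integer arithmetic prints 166.66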
Sep 4 17:16:25.229811 kernel: CPU features: detected: 32-bit EL0 Support Sep 4 17:16:25.229834 kernel: CPU features: detected: 32-bit EL1 Support Sep 4 17:16:25.229853 kernel: CPU features: detected: CRC32 instructions Sep 4 17:16:25.229883 kernel: CPU: All CPU(s) started at EL1 Sep 4 17:16:25.229906 kernel: alternatives: applying system-wide alternatives Sep 4 17:16:25.229925 kernel: devtmpfs: initialized Sep 4 17:16:25.229944 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 4 17:16:25.229963 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Sep 4 17:16:25.229982 kernel: pinctrl core: initialized pinctrl subsystem Sep 4 17:16:25.230001 kernel: SMBIOS 3.0.0 present. Sep 4 17:16:25.230025 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Sep 4 17:16:25.230044 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 4 17:16:25.230063 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 4 17:16:25.230082 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 4 17:16:25.230101 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 4 17:16:25.230139 kernel: audit: initializing netlink subsys (disabled) Sep 4 17:16:25.230161 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1 Sep 4 17:16:25.230188 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 4 17:16:25.230208 kernel: cpuidle: using governor menu Sep 4 17:16:25.230228 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 4 17:16:25.230248 kernel: ASID allocator initialised with 65536 entries Sep 4 17:16:25.230267 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 4 17:16:25.230287 kernel: Serial: AMBA PL011 UART driver Sep 4 17:16:25.230306 kernel: Modules: 17536 pages in range for non-PLT usage Sep 4 17:16:25.230326 kernel: Modules: 509056 pages in range for PLT usage Sep 4 17:16:25.230345 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 4 17:16:25.230369 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 4 17:16:25.230388 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 4 17:16:25.230408 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 4 17:16:25.230427 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 4 17:16:25.230447 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 4 17:16:25.230466 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 4 17:16:25.230486 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 4 17:16:25.230505 kernel: ACPI: Added _OSI(Module Device) Sep 4 17:16:25.230524 kernel: ACPI: Added _OSI(Processor Device) Sep 4 17:16:25.230549 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Sep 4 17:16:25.230571 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 4 17:16:25.230590 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 4 17:16:25.230610 kernel: ACPI: Interpreter enabled Sep 4 17:16:25.230629 kernel: ACPI: Using GIC for interrupt routing Sep 4 17:16:25.230648 kernel: ACPI: MCFG table detected, 1 entries Sep 4 17:16:25.230667 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Sep 4 17:16:25.230979 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 4 17:16:25.233333 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Sep 4 17:16:25.233609 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 4 17:16:25.233822 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Sep 4 17:16:25.234030 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Sep 4 17:16:25.234056 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Sep 4 17:16:25.234076 kernel: acpiphp: Slot [1] registered Sep 4 17:16:25.234096 kernel: acpiphp: Slot [2] registered Sep 4 17:16:25.234143 kernel: acpiphp: Slot [3] registered Sep 4 17:16:25.234168 kernel: acpiphp: Slot [4] registered Sep 4 17:16:25.234196 kernel: acpiphp: Slot [5] registered Sep 4 17:16:25.234216 kernel: acpiphp: Slot [6] registered Sep 4 17:16:25.234235 kernel: acpiphp: Slot [7] registered Sep 4 17:16:25.234254 kernel: acpiphp: Slot [8] registered Sep 4 17:16:25.234272 kernel: acpiphp: Slot [9] registered Sep 4 17:16:25.234291 kernel: acpiphp: Slot [10] registered Sep 4 17:16:25.234310 kernel: acpiphp: Slot [11] registered Sep 4 17:16:25.234329 kernel: acpiphp: Slot [12] registered Sep 4 17:16:25.234348 kernel: acpiphp: Slot [13] registered Sep 4 17:16:25.234371 kernel: acpiphp: Slot [14] registered Sep 4 17:16:25.234391 kernel: acpiphp: Slot [15] registered Sep 4 17:16:25.234410 kernel: acpiphp: Slot [16] registered Sep 4 17:16:25.234428 kernel: acpiphp: Slot [17] registered Sep 4 17:16:25.234447 kernel: acpiphp: Slot [18] registered Sep 4 17:16:25.234466 kernel: acpiphp: Slot [19] registered Sep 4 17:16:25.234485 kernel: acpiphp: Slot [20] registered Sep 4 17:16:25.234504 kernel: acpiphp: Slot [21] registered Sep 4 17:16:25.234523 kernel: acpiphp: Slot [22] registered Sep 4 17:16:25.234542 kernel: acpiphp: Slot [23] registered Sep 4 17:16:25.234566 kernel: acpiphp: Slot [24] registered Sep 4 17:16:25.234585 kernel: acpiphp: Slot [25] registered Sep 4 17:16:25.234604 kernel: acpiphp: Slot [26] registered Sep 4 17:16:25.234623 kernel: acpiphp: Slot [27] registered Sep 4 17:16:25.234641 kernel: acpiphp: Slot [28] registered Sep 4 17:16:25.234660 kernel: acpiphp: Slot [29] registered Sep 4 17:16:25.234679 kernel: acpiphp: Slot [30] registered Sep 4 17:16:25.234698 kernel: acpiphp: Slot [31] registered Sep 4 17:16:25.234717 kernel: PCI host bridge to bus 0000:00 Sep 4 17:16:25.234931 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Sep 4 17:16:25.236257 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 4 17:16:25.236492 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Sep 4 17:16:25.236677 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Sep 4 17:16:25.236918 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Sep 4 17:16:25.237165 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Sep 4 17:16:25.237412 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Sep 4 17:16:25.237659 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Sep 4 17:16:25.237867 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Sep 4 17:16:25.238081 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 17:16:25.241439 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Sep 4 17:16:25.241703 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Sep 4 17:16:25.241916 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Sep 4 17:16:25.242183 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Sep 4 17:16:25.242402 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Sep 4 17:16:25.242613 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Sep 4 17:16:25.242824 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Sep 4 17:16:25.243514 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Sep 4 17:16:25.243746 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Sep 4 17:16:25.243970 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Sep 4 17:16:25.244216 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Sep 4 17:16:25.244416 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 4 17:16:25.244608 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Sep 4 17:16:25.244634 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 4 17:16:25.244654 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 4 17:16:25.244674 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 4 17:16:25.244694 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 4 17:16:25.244713 kernel: iommu: Default domain type: Translated Sep 4 17:16:25.244740 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 4 17:16:25.244759 kernel: efivars: Registered efivars operations Sep 4 17:16:25.244778 kernel: vgaarb: loaded Sep 4 17:16:25.244797 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 4 17:16:25.244816 kernel: VFS: Disk quotas dquot_6.6.0 Sep 4 17:16:25.244836 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 4 17:16:25.244855 kernel: pnp: PnP ACPI init Sep 4 17:16:25.245079 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Sep 4 17:16:25.245111 kernel: pnp: PnP ACPI: found 1 devices Sep 4 17:16:25.246909 kernel: NET: Registered PF_INET protocol family Sep 4 17:16:25.246932 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 4 17:16:25.246953 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 4 17:16:25.246973 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 4 17:16:25.246994 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 4 17:16:25.247014 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 4 17:16:25.247033 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 4 17:16:25.247052 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.247082 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 4 17:16:25.247102 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 4 17:16:25.247175 kernel: PCI: CLS 0 bytes, default 64 Sep 4 17:16:25.247197 kernel: kvm [1]: HYP mode not available Sep 4 17:16:25.247217 kernel: Initialise system trusted keyrings Sep 4 17:16:25.247237 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 4 17:16:25.247256 kernel: Key type asymmetric registered Sep 4 17:16:25.247275 kernel: Asymmetric key parser 'x509' registered Sep 4 17:16:25.247294 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 4 17:16:25.247319 kernel: io scheduler mq-deadline registered Sep 4 17:16:25.247339 kernel: io scheduler kyber registered Sep 4 17:16:25.247358 kernel: io scheduler bfq registered Sep 4 17:16:25.247625 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Sep 4 17:16:25.247654 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 4 17:16:25.247674 kernel: ACPI: button: Power Button [PWRB] Sep 4 17:16:25.247693 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Sep 4 17:16:25.247713 kernel: ACPI: button: Sleep Button [SLPB] Sep 4 17:16:25.247732 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 4 17:16:25.247758 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Sep 4 17:16:25.247971 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Sep 4 17:16:25.247998 kernel: printk: console [ttyS0] disabled Sep 4 17:16:25.248018 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Sep 4 17:16:25.248038 kernel: printk: console [ttyS0] enabled Sep 4 17:16:25.248057 kernel: printk: bootconsole [uart0] disabled Sep 4 17:16:25.248076 kernel: thunder_xcv, ver 1.0 Sep 4 17:16:25.248095 kernel: thunder_bgx, ver 1.0 Sep 4 17:16:25.248129 kernel: nicpf, ver 1.0 Sep 4 17:16:25.248160 kernel: nicvf, ver 1.0 Sep 4 17:16:25.248379 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 4 17:16:25.248576 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:16:24 UTC (1725470184) Sep 4 17:16:25.248602 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 4 17:16:25.248622 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Sep 4 17:16:25.248642 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 4 17:16:25.248661 kernel: watchdog: Hard watchdog permanently disabled Sep 4 17:16:25.248680 kernel: NET: Registered PF_INET6 protocol family Sep 4 17:16:25.248705 kernel: Segment Routing with IPv6 Sep 4 17:16:25.248725 kernel: In-situ OAM (IOAM) with IPv6 Sep 4 17:16:25.248744 kernel: NET: Registered PF_PACKET protocol family Sep 4 17:16:25.248763 kernel: Key type dns_resolver registered Sep 4 17:16:25.248781 kernel: registered taskstats version 1 Sep 4 17:16:25.248801 kernel: Loading compiled-in X.509 certificates Sep 4 17:16:25.248820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 6782952639b29daf968f5d0c3e73fb25e5af1d5e' Sep 4 17:16:25.248839 kernel: Key type .fscrypt registered Sep 4 17:16:25.248858 kernel: Key type fscrypt-provisioning registered Sep 4 17:16:25.248882 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 4 17:16:25.248901 kernel: ima: Allocated hash algorithm: sha1 Sep 4 17:16:25.248919 kernel: ima: No architecture policies found Sep 4 17:16:25.248938 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 4 17:16:25.248958 kernel: clk: Disabling unused clocks Sep 4 17:16:25.248977 kernel: Freeing unused kernel memory: 39296K Sep 4 17:16:25.248996 kernel: Run /init as init process Sep 4 17:16:25.249015 kernel: with arguments: Sep 4 17:16:25.249034 kernel: /init Sep 4 17:16:25.249057 kernel: with environment: Sep 4 17:16:25.249076 kernel: HOME=/ Sep 4 17:16:25.249095 kernel: TERM=linux Sep 4 17:16:25.249128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 4 17:16:25.249157 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:16:25.249181 systemd[1]: Detected virtualization amazon. 
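The rtc-efi line above pairs a human-readable timestamp with an epoch value (1725470184), and the two agree, which is a handy sanity check when correlating this journal with other time sources. A one-line verification of the numbers in the log, in Python:

from datetime import datetime, timezone

epoch = 1725470184  # value printed by rtc-efi above
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat())
# 2024-09-04T17:16:24+00:00, matching "setting system clock to 2024-09-04T17:16:24 UTC"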
Sep 4 17:16:25.249203 systemd[1]: Detected architecture arm64. Sep 4 17:16:25.249223 systemd[1]: Running in initrd. Sep 4 17:16:25.249250 systemd[1]: No hostname configured, using default hostname. Sep 4 17:16:25.249270 systemd[1]: Hostname set to . Sep 4 17:16:25.249292 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:16:25.249312 systemd[1]: Queued start job for default target initrd.target. Sep 4 17:16:25.249333 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:25.249354 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:25.249376 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 4 17:16:25.249397 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:16:25.249423 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 4 17:16:25.249445 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 4 17:16:25.249469 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 4 17:16:25.249490 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 4 17:16:25.249526 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:25.249552 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:25.249579 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:16:25.249601 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:16:25.249621 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:16:25.249642 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:16:25.249663 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:16:25.249684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:16:25.249705 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:16:25.249726 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:16:25.249747 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:25.249773 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:25.249794 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:25.249814 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:16:25.249835 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 4 17:16:25.249856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:16:25.249877 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 4 17:16:25.249899 systemd[1]: Starting systemd-fsck-usr.service... Sep 4 17:16:25.249919 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:16:25.249940 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 4 17:16:25.249966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:25.249987 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 4 17:16:25.250008 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
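The device unit names systemd is "Expecting" above look odd because block-device paths are escaped into unit names: the leading '/' is dropped, remaining '/' separators become '-', and characters such as '-' itself are hex-escaped as \x2d. A simplified Python rendition of that escaping; it reproduces the paths seen in this log but is not a full replacement for systemd-escape, which has extra rules (for example for a leading '.'):

import string

_KEEP = set(string.ascii_letters + string.digits + "_.")

def escape_path(path: str) -> str:
    """Roughly mimic `systemd-escape --path` for the device paths in this log."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch in _KEEP:
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))   # '-' becomes \x2d, etc.
    return "".join(out)

print(escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
print(escape_path("/dev/disk/by-partlabel/USR-A") + ".device")
# dev-disk-by\x2dpartlabel-USR\x2dA.device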
Sep 4 17:16:25.250029 systemd[1]: Finished systemd-fsck-usr.service. Sep 4 17:16:25.250086 systemd-journald[251]: Collecting audit messages is disabled. Sep 4 17:16:25.250160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:16:25.250183 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 4 17:16:25.250202 kernel: Bridge firewalling registered Sep 4 17:16:25.250229 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:25.250250 systemd-journald[251]: Journal started Sep 4 17:16:25.250287 systemd-journald[251]: Runtime Journal (/run/log/journal/ec243b1e21358b643a9452a19e245262) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:16:25.204195 systemd-modules-load[252]: Inserted module 'overlay' Sep 4 17:16:25.255446 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:16:25.241991 systemd-modules-load[252]: Inserted module 'br_netfilter' Sep 4 17:16:25.259269 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:25.265304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:16:25.290520 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:25.297556 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:16:25.301379 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:16:25.303134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:16:25.338493 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:25.356526 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:25.370037 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:25.385449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 4 17:16:25.389022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:16:25.416515 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:16:25.432329 dracut-cmdline[284]: dracut-dracut-053 Sep 4 17:16:25.438032 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=28a986328b36e7de6a755f88bb335afbeb3e3932bc9a20c5f8e57b952c2d23a9 Sep 4 17:16:25.505302 systemd-resolved[287]: Positive Trust Anchors: Sep 4 17:16:25.505339 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:16:25.505404 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:16:25.592157 kernel: SCSI subsystem initialized Sep 4 17:16:25.602139 kernel: Loading iSCSI transport class v2.0-870. Sep 4 17:16:25.612154 kernel: iscsi: registered transport (tcp) Sep 4 17:16:25.634495 kernel: iscsi: registered transport (qla4xxx) Sep 4 17:16:25.634580 kernel: QLogic iSCSI HBA Driver Sep 4 17:16:25.733146 kernel: random: crng init done Sep 4 17:16:25.733464 systemd-resolved[287]: Defaulting to hostname 'linux'. Sep 4 17:16:25.738835 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:16:25.743639 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:25.768186 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 4 17:16:25.786531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 4 17:16:25.818476 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 4 17:16:25.818551 kernel: device-mapper: uevent: version 1.0.3 Sep 4 17:16:25.818590 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 4 17:16:25.885165 kernel: raid6: neonx8 gen() 6719 MB/s Sep 4 17:16:25.902160 kernel: raid6: neonx4 gen() 6524 MB/s Sep 4 17:16:25.919154 kernel: raid6: neonx2 gen() 5433 MB/s Sep 4 17:16:25.936148 kernel: raid6: neonx1 gen() 3953 MB/s Sep 4 17:16:25.953147 kernel: raid6: int64x8 gen() 3824 MB/s Sep 4 17:16:25.970149 kernel: raid6: int64x4 gen() 3720 MB/s Sep 4 17:16:25.987149 kernel: raid6: int64x2 gen() 3602 MB/s Sep 4 17:16:26.004931 kernel: raid6: int64x1 gen() 2750 MB/s Sep 4 17:16:26.004973 kernel: raid6: using algorithm neonx8 gen() 6719 MB/s Sep 4 17:16:26.022915 kernel: raid6: .... xor() 4825 MB/s, rmw enabled Sep 4 17:16:26.022960 kernel: raid6: using neon recovery algorithm Sep 4 17:16:26.031157 kernel: xor: measuring software checksum speed Sep 4 17:16:26.031216 kernel: 8regs : 11029 MB/sec Sep 4 17:16:26.034146 kernel: 32regs : 11923 MB/sec Sep 4 17:16:26.036453 kernel: arm64_neon : 9601 MB/sec Sep 4 17:16:26.036487 kernel: xor: using function: 32regs (11923 MB/sec) Sep 4 17:16:26.119170 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 4 17:16:26.139202 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:16:26.152443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:26.185050 systemd-udevd[469]: Using default interface naming scheme 'v255'. Sep 4 17:16:26.193012 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:26.211360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
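The raid6 and xor blocks above show the kernel benchmarking several candidate implementations (neonx8, int64x*, 8regs, 32regs, arm64_neon) and keeping the fastest: neonx8 for RAID6 generation and 32regs for checksumming on this instance. The kernel does this in C at boot; the Python sketch below only illustrates the same "time each candidate, pick the winner" pattern with two toy XOR routines and is not the kernel's code:

import time

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def xor_int(a: bytes, b: bytes) -> bytes:
    n = len(a)
    return (int.from_bytes(a, "little") ^ int.from_bytes(b, "little")).to_bytes(n, "little")

def throughput(fn, a, b, rounds=20):
    start = time.perf_counter()
    for _ in range(rounds):
        fn(a, b)
    elapsed = time.perf_counter() - start
    return rounds * len(a) / elapsed / 1e6   # MB/s of input processed

a = bytes(1 << 16)                 # 64 KiB of zeros
b = bytes(range(256)) * 256        # 64 KiB pattern buffer
candidates = {"xor_bytes": xor_bytes, "xor_int": xor_int}
scores = {name: throughput(fn, a, b) for name, fn in candidates.items()}
for name, mbps in scores.items():
    print(f"{name:9s}: {mbps:8.1f} MB/s")
print("using function:", max(scores, key=scores.get))   # analogous to "xor: using function: 32regs"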
Sep 4 17:16:26.235793 dracut-pre-trigger[477]: rd.md=0: removing MD RAID activation Sep 4 17:16:26.293044 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:16:26.306519 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:16:26.427516 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:26.444596 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 4 17:16:26.491623 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:26.505584 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:26.516263 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:26.525685 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:16:26.540894 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 4 17:16:26.579985 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:26.619596 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 4 17:16:26.619660 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Sep 4 17:16:26.631565 kernel: ena 0000:00:05.0: ENA device version: 0.10 Sep 4 17:16:26.631879 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Sep 4 17:16:26.641149 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:20:45:58:70:99 Sep 4 17:16:26.648390 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Sep 4 17:16:26.648453 kernel: nvme nvme0: pci function 0000:00:04.0 Sep 4 17:16:26.659142 kernel: nvme nvme0: 2/0/0 default/read/poll queues Sep 4 17:16:26.658673 (udev-worker)[526]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:26.668676 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:16:26.683492 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 4 17:16:26.683541 kernel: GPT:9289727 != 16777215 Sep 4 17:16:26.683567 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 4 17:16:26.683619 kernel: GPT:9289727 != 16777215 Sep 4 17:16:26.683660 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 4 17:16:26.683687 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:26.668934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:26.688200 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:26.692001 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:16:26.697649 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:26.701962 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:26.717633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:26.746180 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:26.776502 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 4 17:16:26.811253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
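The GPT warnings above are expected on the first boot of a cloud image: the backup GPT header written into the image sits at LBA 9289727, while the attached EBS volume's last LBA is 16777215, i.e. the image was built for a ~4.4 GiB disk and is now on an 8 GiB volume. The disk-uuid.service run later in this log rewrites the primary and secondary headers. The arithmetic, assuming the usual 512-byte logical sectors:

SECTOR = 512                 # assumed logical sector size
image_last_lba = 9289727     # where the image's backup GPT header sits
disk_last_lba = 16777215     # actual last LBA of the EBS volume

image_bytes = (image_last_lba + 1) * SECTOR
disk_bytes = (disk_last_lba + 1) * SECTOR
print(f"image sized for {image_bytes / 2**30:.2f} GiB")   # ~4.43 GiB
print(f"volume is       {disk_bytes / 2**30:.2f} GiB")    # 8.00 GiB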
Sep 4 17:16:26.846151 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (524) Sep 4 17:16:26.862146 kernel: BTRFS: device fsid 3e706a0f-a579-4862-bc52-e66e95e66d87 devid 1 transid 42 /dev/nvme0n1p3 scanned by (udev-worker) (516) Sep 4 17:16:26.897749 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Sep 4 17:16:26.947856 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:16:26.976447 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Sep 4 17:16:26.995072 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Sep 4 17:16:26.998184 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Sep 4 17:16:27.017476 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 4 17:16:27.037828 disk-uuid[660]: Primary Header is updated. Sep 4 17:16:27.037828 disk-uuid[660]: Secondary Entries is updated. Sep 4 17:16:27.037828 disk-uuid[660]: Secondary Header is updated. Sep 4 17:16:27.047250 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:27.069181 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:27.074302 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:28.082550 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Sep 4 17:16:28.084187 disk-uuid[661]: The operation has completed successfully. Sep 4 17:16:28.262557 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 4 17:16:28.262779 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 4 17:16:28.318371 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 4 17:16:28.325922 sh[1004]: Success Sep 4 17:16:28.355161 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Sep 4 17:16:28.456057 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 4 17:16:28.461510 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 4 17:16:28.473330 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 4 17:16:28.509896 kernel: BTRFS info (device dm-0): first mount of filesystem 3e706a0f-a579-4862-bc52-e66e95e66d87 Sep 4 17:16:28.509958 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:28.509985 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 4 17:16:28.512800 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 4 17:16:28.512834 kernel: BTRFS info (device dm-0): using free space tree Sep 4 17:16:28.623163 kernel: BTRFS info (device dm-0): enabling ssd optimizations Sep 4 17:16:28.653323 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 4 17:16:28.658916 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 4 17:16:28.668424 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 4 17:16:28.679463 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
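verity-setup.service above brings up /dev/mapper/usr against the verity.usrhash root hash from the kernel command line, with the sha256-ce implementation doing the hashing. That root hash is the top of a hash tree over the /usr image's blocks. The toy Python construction below illustrates the idea only; it is not the real dm-verity on-disk format, which adds a superblock, a salt, and fixed-size hash blocks:

import hashlib

BLOCK = 4096  # dm-verity's default data block size

def hash_tree_root(data: bytes) -> str:
    """Toy Merkle root: hash each block, then repeatedly hash pairs up to a single digest."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)] or [hashlib.sha256(b"").digest()]
    while len(level) > 1:
        level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()

image = bytes(4 * BLOCK)        # stand-in for a tiny read-only image
root = hash_tree_root(image)
print(root)                     # flipping any single bit of the image changes this value
assert hash_tree_root(image[:-1] + b"\x01") != root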
Sep 4 17:16:28.710010 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:28.710087 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:28.710148 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:28.717621 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:28.733714 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 4 17:16:28.739842 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:28.759886 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 4 17:16:28.775426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 4 17:16:28.861189 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:16:28.879393 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:16:28.924435 systemd-networkd[1196]: lo: Link UP Sep 4 17:16:28.924449 systemd-networkd[1196]: lo: Gained carrier Sep 4 17:16:28.927541 systemd-networkd[1196]: Enumeration completed Sep 4 17:16:28.928561 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:28.928568 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:16:28.931674 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:28.938346 systemd-networkd[1196]: eth0: Link UP Sep 4 17:16:28.938354 systemd-networkd[1196]: eth0: Gained carrier Sep 4 17:16:28.938372 systemd-networkd[1196]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:28.966211 systemd[1]: Reached target network.target - Network. Sep 4 17:16:28.966394 systemd-networkd[1196]: eth0: DHCPv4 address 172.31.23.29/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:29.201021 ignition[1127]: Ignition 2.19.0 Sep 4 17:16:29.201171 ignition[1127]: Stage: fetch-offline Sep 4 17:16:29.205300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:16:29.201716 ignition[1127]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.201741 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.202218 ignition[1127]: Ignition finished successfully Sep 4 17:16:29.227873 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
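The DHCPv4 lease systemd-networkd reports above (172.31.23.29/20 with gateway 172.31.16.1) is self-consistent: a /20 on that address spans 172.31.16.0 through 172.31.31.255, which contains both the instance address and the gateway. Checked with Python's ipaddress module:

import ipaddress

iface = ipaddress.ip_interface("172.31.23.29/20")   # address from the DHCP lease
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                     # 172.31.16.0/20
print(gateway in iface.network)          # True
print(iface.network.broadcast_address)   # 172.31.31.255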
Sep 4 17:16:29.250001 ignition[1205]: Ignition 2.19.0 Sep 4 17:16:29.250029 ignition[1205]: Stage: fetch Sep 4 17:16:29.250697 ignition[1205]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.250724 ignition[1205]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.250876 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.265017 ignition[1205]: PUT result: OK Sep 4 17:16:29.269282 ignition[1205]: parsed url from cmdline: "" Sep 4 17:16:29.269308 ignition[1205]: no config URL provided Sep 4 17:16:29.269324 ignition[1205]: reading system config file "/usr/lib/ignition/user.ign" Sep 4 17:16:29.269354 ignition[1205]: no config at "/usr/lib/ignition/user.ign" Sep 4 17:16:29.269386 ignition[1205]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.274064 ignition[1205]: PUT result: OK Sep 4 17:16:29.274476 ignition[1205]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Sep 4 17:16:29.283458 ignition[1205]: GET result: OK Sep 4 17:16:29.283643 ignition[1205]: parsing config with SHA512: eda2fe8850bde735cefcc15af527eda1c38c7f5d4f4892f908f92e5a260f1c0e82b8771b7458a21f787a6a37cf7abb1ad17a83c930a7e7556622743e2e6d4897 Sep 4 17:16:29.292280 unknown[1205]: fetched base config from "system" Sep 4 17:16:29.293001 unknown[1205]: fetched base config from "system" Sep 4 17:16:29.294180 ignition[1205]: fetch: fetch complete Sep 4 17:16:29.293153 unknown[1205]: fetched user config from "aws" Sep 4 17:16:29.294192 ignition[1205]: fetch: fetch passed Sep 4 17:16:29.294285 ignition[1205]: Ignition finished successfully Sep 4 17:16:29.305438 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 4 17:16:29.317415 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 4 17:16:29.348390 ignition[1212]: Ignition 2.19.0 Sep 4 17:16:29.348418 ignition[1212]: Stage: kargs Sep 4 17:16:29.349160 ignition[1212]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.349188 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.349348 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.353208 ignition[1212]: PUT result: OK Sep 4 17:16:29.367868 ignition[1212]: kargs: kargs passed Sep 4 17:16:29.367986 ignition[1212]: Ignition finished successfully Sep 4 17:16:29.375181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 4 17:16:29.392503 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 4 17:16:29.419723 ignition[1219]: Ignition 2.19.0 Sep 4 17:16:29.419751 ignition[1219]: Stage: disks Sep 4 17:16:29.420932 ignition[1219]: no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:29.420959 ignition[1219]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:29.421451 ignition[1219]: PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:29.425466 ignition[1219]: PUT result: OK Sep 4 17:16:29.434902 ignition[1219]: disks: disks passed Sep 4 17:16:29.435061 ignition[1219]: Ignition finished successfully Sep 4 17:16:29.440468 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 4 17:16:29.445617 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 4 17:16:29.451420 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 4 17:16:29.454322 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:16:29.456723 systemd[1]: Reached target sysinit.target - System Initialization. 
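The Ignition fetch stage above talks to the EC2 instance-metadata service using the IMDSv2 flow visible in the log: a PUT to /latest/api/token, then a GET of the user data with that token, after which Ignition logs the SHA512 of the config it parsed. A minimal Python sketch of the same sequence, runnable only from inside an EC2 instance; the endpoint and header names are standard IMDSv2, and the token TTL chosen here is an arbitrary example rather than a value taken from the log:

import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT /latest/api/token to obtain a session token (IMDSv2).
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=5).read().decode()

# Step 2: GET the user data with the token, as Ignition does above.
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=5).read()

# Ignition reports the SHA512 of the config it ends up parsing.
print(hashlib.sha512(user_data).hexdigest())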
Sep 4 17:16:29.459161 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:29.481054 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 4 17:16:29.549662 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 4 17:16:29.560058 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 4 17:16:29.580448 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 4 17:16:29.663178 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 901d46b0-2319-4536-8a6d-46889db73e8c r/w with ordered data mode. Quota mode: none. Sep 4 17:16:29.664467 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 4 17:16:29.665344 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 4 17:16:29.690392 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:16:29.694413 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 4 17:16:29.706149 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 4 17:16:29.706286 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 4 17:16:29.724300 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247) Sep 4 17:16:29.724339 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:29.706340 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:29.735768 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:29.735807 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:29.737762 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 4 17:16:29.744155 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:29.751440 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 4 17:16:29.759915 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:16:30.319390 initrd-setup-root[1272]: cut: /sysroot/etc/passwd: No such file or directory Sep 4 17:16:30.341623 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory Sep 4 17:16:30.350952 initrd-setup-root[1286]: cut: /sysroot/etc/shadow: No such file or directory Sep 4 17:16:30.360755 initrd-setup-root[1293]: cut: /sysroot/etc/gshadow: No such file or directory Sep 4 17:16:30.614765 systemd-networkd[1196]: eth0: Gained IPv6LL Sep 4 17:16:30.784336 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 4 17:16:30.801903 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 4 17:16:30.807422 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 4 17:16:30.834132 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 4 17:16:30.839285 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:30.868942 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 4 17:16:30.888708 ignition[1362]: INFO : Ignition 2.19.0 Sep 4 17:16:30.892472 ignition[1362]: INFO : Stage: mount Sep 4 17:16:30.894901 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:30.898075 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:30.898075 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:30.905706 ignition[1362]: INFO : PUT result: OK Sep 4 17:16:30.913129 ignition[1362]: INFO : mount: mount passed Sep 4 17:16:30.915071 ignition[1362]: INFO : Ignition finished successfully Sep 4 17:16:30.920200 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 4 17:16:30.929325 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 4 17:16:30.963452 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 4 17:16:30.984158 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1373) Sep 4 17:16:30.989284 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e85e5091-8620-4def-b250-7009f4048f6e Sep 4 17:16:30.989332 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Sep 4 17:16:30.989359 kernel: BTRFS info (device nvme0n1p6): using free space tree Sep 4 17:16:30.994140 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Sep 4 17:16:30.997712 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 4 17:16:31.032729 ignition[1390]: INFO : Ignition 2.19.0 Sep 4 17:16:31.032729 ignition[1390]: INFO : Stage: files Sep 4 17:16:31.037874 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:31.037874 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:31.037874 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:31.048053 ignition[1390]: INFO : PUT result: OK Sep 4 17:16:31.056669 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping Sep 4 17:16:31.060419 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 4 17:16:31.060419 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 4 17:16:31.074382 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 4 17:16:31.077917 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 4 17:16:31.081585 unknown[1390]: wrote ssh authorized keys file for user: core Sep 4 17:16:31.083912 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 4 17:16:31.094873 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:16:31.094873 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Sep 4 17:16:31.094873 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:31.094873 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Sep 4 17:16:31.158864 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 4 17:16:31.237164 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:31.261450 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Sep 4 17:16:31.750256 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 4 17:16:32.126207 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Sep 4 17:16:32.126207 ignition[1390]: INFO : files: op(c): [started] processing unit "containerd.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(c): [finished] processing unit "containerd.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 4 17:16:32.136233 ignition[1390]: INFO : files: files passed Sep 4 17:16:32.136233 ignition[1390]: INFO : Ignition finished successfully Sep 4 17:16:32.151162 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 4 17:16:32.173679 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 4 17:16:32.200320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 4 17:16:32.220456 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 4 17:16:32.224252 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 4 17:16:32.236271 initrd-setup-root-after-ignition[1418]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.236271 initrd-setup-root-after-ignition[1418]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.245908 initrd-setup-root-after-ignition[1422]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 4 17:16:32.252806 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:32.259357 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 4 17:16:32.274492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 4 17:16:32.335514 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 4 17:16:32.335714 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 4 17:16:32.339960 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 4 17:16:32.343174 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 4 17:16:32.343869 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 4 17:16:32.371522 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 4 17:16:32.397707 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 4 17:16:32.415600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 4 17:16:32.441872 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:32.445339 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:32.454346 systemd[1]: Stopped target timers.target - Timer Units. Sep 4 17:16:32.456787 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 4 17:16:32.457136 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Sep 4 17:16:32.466723 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 4 17:16:32.469490 systemd[1]: Stopped target basic.target - Basic System. Sep 4 17:16:32.475923 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 4 17:16:32.478809 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 4 17:16:32.486462 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 4 17:16:32.489435 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 4 17:16:32.496381 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 4 17:16:32.499495 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 4 17:16:32.502231 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 4 17:16:32.512055 systemd[1]: Stopped target swap.target - Swaps. Sep 4 17:16:32.514222 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 4 17:16:32.514475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 4 17:16:32.523493 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:32.526489 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:32.534474 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 4 17:16:32.538234 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:32.543791 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 4 17:16:32.544197 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 4 17:16:32.550340 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 4 17:16:32.550569 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 4 17:16:32.553383 systemd[1]: ignition-files.service: Deactivated successfully. Sep 4 17:16:32.554265 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 4 17:16:32.574291 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 4 17:16:32.577819 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 4 17:16:32.578091 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:32.600696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 4 17:16:32.606149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 4 17:16:32.608718 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:32.613350 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 4 17:16:32.614192 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 4 17:16:32.639848 ignition[1442]: INFO : Ignition 2.19.0 Sep 4 17:16:32.642424 ignition[1442]: INFO : Stage: umount Sep 4 17:16:32.642424 ignition[1442]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 4 17:16:32.642424 ignition[1442]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Sep 4 17:16:32.642195 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 4 17:16:32.657625 ignition[1442]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Sep 4 17:16:32.657625 ignition[1442]: INFO : PUT result: OK Sep 4 17:16:32.642421 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
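Each Ignition stage above logs a "PUT http://169.254.169.254/latest/api/token: attempt #1" followed by "PUT result: OK"; that is the IMDSv2 session-token handshake against the EC2 instance metadata service: a PUT mints a short-lived token, and subsequent metadata GETs carry it in a header. A minimal sketch of the same flow using only the Python standard library; the 300-second TTL and the instance-id path are arbitrary choices for illustration.

# Sketch of the IMDSv2 handshake that precedes the metadata reads in the log.
import urllib.request

IMDS = "http://169.254.169.254"

# Step 1: PUT to mint a session token (the "PUT .../latest/api/token" lines).
req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "300"},  # TTL chosen arbitrarily
)
token = urllib.request.urlopen(req, timeout=5).read().decode()

# Step 2: GET metadata with the token attached.
req = urllib.request.Request(
    f"{IMDS}/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
)
print(urllib.request.urlopen(req, timeout=5).read().decode())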
Sep 4 17:16:32.669778 ignition[1442]: INFO : umount: umount passed Sep 4 17:16:32.669778 ignition[1442]: INFO : Ignition finished successfully Sep 4 17:16:32.677368 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 4 17:16:32.678584 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 4 17:16:32.680461 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 4 17:16:32.692083 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 4 17:16:32.695247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 4 17:16:32.700391 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 4 17:16:32.700554 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 4 17:16:32.704096 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 4 17:16:32.704227 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 4 17:16:32.714203 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 4 17:16:32.714303 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 4 17:16:32.716647 systemd[1]: Stopped target network.target - Network. Sep 4 17:16:32.718634 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 4 17:16:32.718724 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 4 17:16:32.721558 systemd[1]: Stopped target paths.target - Path Units. Sep 4 17:16:32.723655 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 4 17:16:32.742221 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:32.744668 systemd[1]: Stopped target slices.target - Slice Units. Sep 4 17:16:32.746476 systemd[1]: Stopped target sockets.target - Socket Units. Sep 4 17:16:32.748387 systemd[1]: iscsid.socket: Deactivated successfully. Sep 4 17:16:32.748466 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 4 17:16:32.750472 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 4 17:16:32.750548 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 4 17:16:32.752578 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 4 17:16:32.752667 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 4 17:16:32.754678 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 4 17:16:32.754755 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 4 17:16:32.756866 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 4 17:16:32.756941 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 4 17:16:32.759927 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 4 17:16:32.760670 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 4 17:16:32.800210 systemd-networkd[1196]: eth0: DHCPv6 lease lost Sep 4 17:16:32.804911 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 4 17:16:32.806185 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 4 17:16:32.813884 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 4 17:16:32.814253 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 4 17:16:32.819670 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 4 17:16:32.819759 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:32.843445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Sep 4 17:16:32.845857 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 4 17:16:32.845964 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 4 17:16:32.854426 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 4 17:16:32.854520 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:16:32.856640 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 4 17:16:32.856719 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:32.859385 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 4 17:16:32.859469 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:32.863834 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:32.904380 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 4 17:16:32.905626 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 4 17:16:32.914679 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 4 17:16:32.914915 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 4 17:16:32.920470 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 4 17:16:32.920596 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:32.923229 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 4 17:16:32.923709 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:32.938175 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 4 17:16:32.938282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 4 17:16:32.940985 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 4 17:16:32.941074 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 4 17:16:32.946011 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 4 17:16:32.946103 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 4 17:16:32.968862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 4 17:16:32.972366 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 4 17:16:32.972475 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:32.978128 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 4 17:16:32.978241 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:32.989858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 4 17:16:32.990084 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 4 17:16:33.010843 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 4 17:16:33.028515 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 4 17:16:33.066356 systemd[1]: Switching root. Sep 4 17:16:33.102198 systemd-journald[251]: Journal stopped Sep 4 17:16:36.676673 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Sep 4 17:16:36.676819 kernel: SELinux: policy capability network_peer_controls=1 Sep 4 17:16:36.676863 kernel: SELinux: policy capability open_perms=1 Sep 4 17:16:36.676895 kernel: SELinux: policy capability extended_socket_class=1 Sep 4 17:16:36.676926 kernel: SELinux: policy capability always_check_network=0 Sep 4 17:16:36.676958 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 4 17:16:36.676988 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 4 17:16:36.677019 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 4 17:16:36.678264 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 4 17:16:36.678324 kernel: audit: type=1403 audit(1725470195.017:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 4 17:16:36.678365 systemd[1]: Successfully loaded SELinux policy in 56.723ms. Sep 4 17:16:36.678415 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.854ms. Sep 4 17:16:36.678451 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 4 17:16:36.678484 systemd[1]: Detected virtualization amazon. Sep 4 17:16:36.678516 systemd[1]: Detected architecture arm64. Sep 4 17:16:36.678547 systemd[1]: Detected first boot. Sep 4 17:16:36.678581 systemd[1]: Initializing machine ID from VM UUID. Sep 4 17:16:36.678613 zram_generator::config[1501]: No configuration found. Sep 4 17:16:36.678650 systemd[1]: Populated /etc with preset unit settings. Sep 4 17:16:36.678681 systemd[1]: Queued start job for default target multi-user.target. Sep 4 17:16:36.678714 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Sep 4 17:16:36.678748 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 4 17:16:36.678780 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 4 17:16:36.678812 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 4 17:16:36.678841 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 4 17:16:36.678875 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 4 17:16:36.678908 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 4 17:16:36.678939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 4 17:16:36.678971 systemd[1]: Created slice user.slice - User and Session Slice. Sep 4 17:16:36.679004 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 4 17:16:36.679038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 4 17:16:36.679070 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 4 17:16:36.679100 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 4 17:16:36.682221 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 4 17:16:36.682271 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 4 17:16:36.682304 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Sep 4 17:16:36.682337 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 4 17:16:36.682378 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 4 17:16:36.682408 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 4 17:16:36.682443 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 4 17:16:36.682475 systemd[1]: Reached target slices.target - Slice Units. Sep 4 17:16:36.682511 systemd[1]: Reached target swap.target - Swaps. Sep 4 17:16:36.682541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 4 17:16:36.682573 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 4 17:16:36.682603 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 4 17:16:36.682636 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 4 17:16:36.682666 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 4 17:16:36.682699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 4 17:16:36.682732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 4 17:16:36.682764 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 4 17:16:36.682795 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 4 17:16:36.682832 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 4 17:16:36.682862 systemd[1]: Mounting media.mount - External Media Directory... Sep 4 17:16:36.682894 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 4 17:16:36.682927 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 4 17:16:36.682957 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 4 17:16:36.682987 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 4 17:16:36.683017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:16:36.683048 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 4 17:16:36.683083 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 4 17:16:36.683129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:36.683168 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:16:36.683211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:36.683243 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 4 17:16:36.683276 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:36.683307 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 4 17:16:36.683338 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 4 17:16:36.683376 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 4 17:16:36.683410 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 4 17:16:36.683441 kernel: fuse: init (API version 7.39) Sep 4 17:16:36.683472 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Sep 4 17:16:36.683501 kernel: ACPI: bus type drm_connector registered Sep 4 17:16:36.683528 kernel: loop: module loaded Sep 4 17:16:36.683556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 4 17:16:36.683612 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 4 17:16:36.683653 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 4 17:16:36.683686 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 4 17:16:36.683721 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 4 17:16:36.683754 systemd[1]: Mounted media.mount - External Media Directory. Sep 4 17:16:36.683832 systemd-journald[1604]: Collecting audit messages is disabled. Sep 4 17:16:36.683885 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 4 17:16:36.683916 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 4 17:16:36.683945 systemd-journald[1604]: Journal started Sep 4 17:16:36.683998 systemd-journald[1604]: Runtime Journal (/run/log/journal/ec243b1e21358b643a9452a19e245262) is 8.0M, max 75.3M, 67.3M free. Sep 4 17:16:36.687203 systemd[1]: Started systemd-journald.service - Journal Service. Sep 4 17:16:36.696733 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 4 17:16:36.705521 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 4 17:16:36.711236 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 4 17:16:36.717397 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 4 17:16:36.717780 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 4 17:16:36.723334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:36.723682 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:16:36.728968 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:16:36.729349 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:16:36.734804 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:36.735318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:36.740980 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 4 17:16:36.741366 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 4 17:16:36.746976 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:36.747413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:36.755543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 4 17:16:36.761269 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 4 17:16:36.768234 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 4 17:16:36.797312 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 4 17:16:36.811454 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 4 17:16:36.822310 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 4 17:16:36.831598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
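systemd-journald reports its runtime journal above as 8.0M used against a 75.3M cap under /run/log/journal. The same figures can be pulled back out at any time with journalctl; a brief sketch, assuming only that journalctl is on PATH (it is on Flatcar).

# Sketch: inspect journal disk usage and replay journald's own startup messages.
import subprocess

# Sums the size of all active and archived journal files.
subprocess.run(["journalctl", "--disk-usage"], check=True)

# The "Runtime Journal ... is 8.0M, max 75.3M" line itself, from this boot:
subprocess.run(["journalctl", "-b", "-u", "systemd-journald", "--no-pager"], check=True)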
Sep 4 17:16:36.850387 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 4 17:16:36.859406 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 4 17:16:36.864424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:16:36.872689 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 4 17:16:36.881407 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:16:36.889326 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 4 17:16:36.899745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 4 17:16:36.903625 systemd-journald[1604]: Time spent on flushing to /var/log/journal/ec243b1e21358b643a9452a19e245262 is 85.887ms for 893 entries. Sep 4 17:16:36.903625 systemd-journald[1604]: System Journal (/var/log/journal/ec243b1e21358b643a9452a19e245262) is 8.0M, max 195.6M, 187.6M free. Sep 4 17:16:37.041356 systemd-journald[1604]: Received client request to flush runtime journal. Sep 4 17:16:36.928222 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 4 17:16:36.941694 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 4 17:16:36.947374 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 4 17:16:36.957985 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 4 17:16:36.965581 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 4 17:16:36.979450 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 4 17:16:37.020824 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 4 17:16:37.033233 udevadm[1663]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 4 17:16:37.040258 systemd-tmpfiles[1653]: ACLs are not supported, ignoring. Sep 4 17:16:37.040283 systemd-tmpfiles[1653]: ACLs are not supported, ignoring. Sep 4 17:16:37.052736 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 4 17:16:37.059191 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 4 17:16:37.078550 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 4 17:16:37.139535 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 4 17:16:37.155591 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 4 17:16:37.188718 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Sep 4 17:16:37.188760 systemd-tmpfiles[1677]: ACLs are not supported, ignoring. Sep 4 17:16:37.198934 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 4 17:16:38.048676 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 4 17:16:38.062506 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 4 17:16:38.121689 systemd-udevd[1683]: Using default interface naming scheme 'v255'. Sep 4 17:16:38.174236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Sep 4 17:16:38.187485 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 4 17:16:38.227686 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 4 17:16:38.358307 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 4 17:16:38.364249 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 4 17:16:38.377148 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1691) Sep 4 17:16:38.381459 (udev-worker)[1703]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:38.407168 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1691) Sep 4 17:16:38.521178 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (1706) Sep 4 17:16:38.528469 systemd-networkd[1692]: lo: Link UP Sep 4 17:16:38.528489 systemd-networkd[1692]: lo: Gained carrier Sep 4 17:16:38.531897 systemd-networkd[1692]: Enumeration completed Sep 4 17:16:38.532158 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 4 17:16:38.537508 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:38.537528 systemd-networkd[1692]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 4 17:16:38.540174 systemd-networkd[1692]: eth0: Link UP Sep 4 17:16:38.540527 systemd-networkd[1692]: eth0: Gained carrier Sep 4 17:16:38.540561 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:38.552308 systemd-networkd[1692]: eth0: DHCPv4 address 172.31.23.29/20, gateway 172.31.16.1 acquired from 172.31.16.1 Sep 4 17:16:38.552649 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 4 17:16:38.560377 systemd-networkd[1692]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 4 17:16:38.814576 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 4 17:16:38.840261 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Sep 4 17:16:38.846359 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 4 17:16:38.857453 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 4 17:16:38.895057 lvm[1813]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:16:38.928698 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 4 17:16:38.929916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 4 17:16:38.943644 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 4 17:16:38.959617 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 4 17:16:38.962962 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 4 17:16:39.005331 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 4 17:16:39.011215 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
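The DHCPv4 lease above puts eth0 at 172.31.23.29/20 with a gateway of 172.31.16.1, acquired from that same address; both sit inside the same /20, which is easy to verify with the standard-library ipaddress module. A small worked check of the figures from the log:

# Verify that the DHCP-supplied gateway is on-link for eth0's /20 from the boot log.
import ipaddress

iface = ipaddress.ip_interface("172.31.23.29/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)               # 172.31.16.0/20
print(gateway in iface.network)    # True: the gateway is inside eth0's subnet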
Sep 4 17:16:39.014414 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 4 17:16:39.014469 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 4 17:16:39.021484 systemd[1]: Reached target machines.target - Containers. Sep 4 17:16:39.025774 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 4 17:16:39.038459 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 4 17:16:39.046426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 4 17:16:39.051079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:39.059517 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 4 17:16:39.069447 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 4 17:16:39.086858 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 4 17:16:39.091265 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 4 17:16:39.113625 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 4 17:16:39.144529 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 4 17:16:39.147234 kernel: loop0: detected capacity change from 0 to 65520 Sep 4 17:16:39.145961 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 4 17:16:39.228336 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 4 17:16:39.260146 kernel: loop1: detected capacity change from 0 to 52536 Sep 4 17:16:39.346184 kernel: loop2: detected capacity change from 0 to 114288 Sep 4 17:16:39.431143 kernel: loop3: detected capacity change from 0 to 193208 Sep 4 17:16:39.474244 kernel: loop4: detected capacity change from 0 to 65520 Sep 4 17:16:39.486151 kernel: loop5: detected capacity change from 0 to 52536 Sep 4 17:16:39.496183 kernel: loop6: detected capacity change from 0 to 114288 Sep 4 17:16:39.514178 kernel: loop7: detected capacity change from 0 to 193208 Sep 4 17:16:39.533136 (sd-merge)[1842]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Sep 4 17:16:39.535645 (sd-merge)[1842]: Merged extensions into '/usr'. Sep 4 17:16:39.543330 systemd[1]: Reloading requested from client PID 1828 ('systemd-sysext') (unit systemd-sysext.service)... Sep 4 17:16:39.543361 systemd[1]: Reloading... Sep 4 17:16:39.663152 zram_generator::config[1868]: No configuration found. Sep 4 17:16:39.766263 systemd-networkd[1692]: eth0: Gained IPv6LL Sep 4 17:16:39.945292 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:16:40.089181 systemd[1]: Reloading finished in 544 ms. Sep 4 17:16:40.113853 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 4 17:16:40.122242 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 4 17:16:40.136447 systemd[1]: Starting ensure-sysext.service... 
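The sd-merge lines show systemd-sysext overlaying four extension images, including the kubernetes-v1.28.7 sysext Ignition downloaded and symlinked into /etc/extensions earlier, onto /usr. Once the host is up, the merge can be inspected with the systemd-sysext tool; a minimal sketch, assuming only that it is run on the booted system.

# Sketch: list the system extensions currently merged into /usr and /opt.
import subprocess

# "systemd-sysext status" prints each hierarchy and the extension images overlaid on it.
subprocess.run(["systemd-sysext", "status"], check=True)

# Extension images are picked up from directories such as /etc/extensions and
# /var/lib/extensions; the kubernetes sysext above was linked into /etc/extensions.
subprocess.run(["ls", "-l", "/etc/extensions"], check=True)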
Sep 4 17:16:40.148002 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 4 17:16:40.179391 systemd[1]: Reloading requested from client PID 1927 ('systemctl') (unit ensure-sysext.service)... Sep 4 17:16:40.179593 systemd[1]: Reloading... Sep 4 17:16:40.208987 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 4 17:16:40.210376 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 4 17:16:40.212804 systemd-tmpfiles[1928]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 4 17:16:40.214033 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Sep 4 17:16:40.214565 systemd-tmpfiles[1928]: ACLs are not supported, ignoring. Sep 4 17:16:40.224270 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:16:40.224289 systemd-tmpfiles[1928]: Skipping /boot Sep 4 17:16:40.246556 ldconfig[1824]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 4 17:16:40.249612 systemd-tmpfiles[1928]: Detected autofs mount point /boot during canonicalization of boot. Sep 4 17:16:40.249781 systemd-tmpfiles[1928]: Skipping /boot Sep 4 17:16:40.347201 zram_generator::config[1959]: No configuration found. Sep 4 17:16:40.584380 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:16:40.724826 systemd[1]: Reloading finished in 544 ms. Sep 4 17:16:40.749664 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 4 17:16:40.766379 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 4 17:16:40.784453 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:40.799404 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 4 17:16:40.809413 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 4 17:16:40.827452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 4 17:16:40.840417 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 4 17:16:40.869640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:16:40.874616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:40.897095 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:40.916963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:40.922495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:40.928718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:40.929081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:40.935491 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 4 17:16:40.965078 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:40.966881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Sep 4 17:16:40.973106 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:40.976521 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:40.993568 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 4 17:16:41.007368 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:16:41.015667 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:41.020307 augenrules[2051]: No rules Sep 4 17:16:41.033752 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:41.043443 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:41.050655 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:41.069226 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 4 17:16:41.078105 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:41.082303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 4 17:16:41.086401 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:41.086747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:16:41.109041 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:41.115803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:41.120551 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:41.120903 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:41.142807 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 4 17:16:41.154347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 4 17:16:41.170459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 4 17:16:41.181108 systemd-resolved[2021]: Positive Trust Anchors: Sep 4 17:16:41.181735 systemd-resolved[2021]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 4 17:16:41.181890 systemd-resolved[2021]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 4 17:16:41.188701 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 4 17:16:41.193603 systemd-resolved[2021]: Defaulting to hostname 'linux'. Sep 4 17:16:41.201733 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 4 17:16:41.204314 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 4 17:16:41.204688 systemd[1]: Reached target time-set.target - System Time Set. 
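The "Positive Trust Anchors" line is systemd-resolved loading the root-zone DNSSEC trust anchor (key tag 20326, the current root KSK), while the negative anchors list private and special-use zones it will never attempt to validate. A hedged sketch of how resolved's runtime view can be inspected; resolvectl is the stock client, and the exact output format depends on the systemd version.

# Sketch: inspect systemd-resolved's runtime state (per-link DNS servers, DNSSEC mode).
import subprocess

subprocess.run(["resolvectl", "status"], check=True)

# A one-off lookup routed through resolved's stub resolver:
subprocess.run(["resolvectl", "query", "example.com"], check=True)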
Sep 4 17:16:41.214315 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 4 17:16:41.220061 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 4 17:16:41.225901 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 4 17:16:41.230286 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 4 17:16:41.230639 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 4 17:16:41.234419 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 4 17:16:41.234774 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 4 17:16:41.239167 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 4 17:16:41.239526 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 4 17:16:41.243558 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 4 17:16:41.246653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 4 17:16:41.258950 systemd[1]: Finished ensure-sysext.service. Sep 4 17:16:41.268489 systemd[1]: Reached target network.target - Network. Sep 4 17:16:41.271318 systemd[1]: Reached target network-online.target - Network is Online. Sep 4 17:16:41.274004 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 4 17:16:41.276787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 4 17:16:41.276943 systemd[1]: Reached target sysinit.target - System Initialization. Sep 4 17:16:41.279572 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 4 17:16:41.282432 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:16:41.285634 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:16:41.288350 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:16:41.291130 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:16:41.294068 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:16:41.294140 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:16:41.296215 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:16:41.299802 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:16:41.305523 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:16:41.309822 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:16:41.313969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 4 17:16:41.314848 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:16:41.317618 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:16:41.320221 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:16:41.322879 systemd[1]: System is tainted: cgroupsv1 Sep 4 17:16:41.322956 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
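"System is tainted: cgroupsv1" means this boot runs the legacy cgroup v1 hierarchy, consistent with the 10-use-cgroupfs.conf drop-in written for containerd earlier. Which hierarchy is active can be read straight from /proc/mounts; a small sketch using only the standard library:

# Sketch: report whether the unified cgroup v2 hierarchy is mounted at /sys/fs/cgroup.
def cgroup_mode(mounts_path="/proc/mounts"):
    with open(mounts_path) as mounts:
        for line in mounts:
            _device, mountpoint, fstype = line.split()[:3]
            if mountpoint == "/sys/fs/cgroup":
                # cgroup2 here means unified (v2); tmpfs means per-controller v1 mounts below it.
                return "v2 (unified)" if fstype == "cgroup2" else "v1 (legacy/hybrid)"
    return "unknown"

print(cgroup_mode())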
Sep 4 17:16:41.323003 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:16:41.334457 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:16:41.343469 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:16:41.354435 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:16:41.361360 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 4 17:16:41.366735 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:16:41.371745 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:16:41.385973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:16:41.405778 jq[2094]: false Sep 4 17:16:41.398455 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:16:41.423375 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:16:41.436437 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 4 17:16:41.449162 extend-filesystems[2095]: Found loop4 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found loop5 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found loop6 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found loop7 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p1 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p2 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p3 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found usr Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p4 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p6 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p7 Sep 4 17:16:41.459732 extend-filesystems[2095]: Found nvme0n1p9 Sep 4 17:16:41.459732 extend-filesystems[2095]: Checking size of /dev/nvme0n1p9 Sep 4 17:16:41.483478 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:16:41.503420 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:16:41.519291 dbus-daemon[2093]: [system] SELinux support is enabled Sep 4 17:16:41.531425 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:16:41.540737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:16:41.546003 dbus-daemon[2093]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1692 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:41.585521 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:16:41.590275 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:16:41.601477 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:16:41.610793 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:16:41.618902 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:16:41.660413 extend-filesystems[2095]: Resized partition /dev/nvme0n1p9 Sep 4 17:16:41.667504 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 4 17:16:41.668037 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 4 17:16:41.684993 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: ---------------------------------------------------- Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: corporation. Support and training for ntp-4 are Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: available at https://www.nwtime.org/support Sep 4 17:16:41.694672 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: ---------------------------------------------------- Sep 4 17:16:41.687233 ntpd[2100]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:18:26 UTC 2024 (1): Starting Sep 4 17:16:41.686628 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:16:41.731672 extend-filesystems[2133]: resize2fs 1.47.1 (20-May-2024) Sep 4 17:16:41.751026 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: proto: precision = 0.108 usec (-23) Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: basedate set to 2024-08-23 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen normally on 3 eth0 172.31.23.29:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listen normally on 5 eth0 [fe80::420:45ff:fe58:7099%2]:123 Sep 4 17:16:41.751090 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: Listening on routing socket on fd #22 for interface updates Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.715 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.719 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.724 INFO Fetch successful Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.724 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.732 INFO Fetch successful Sep 4 17:16:41.751605 coreos-metadata[2092]: Sep 04 17:16:41.732 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:16:41.687288 ntpd[2100]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:16:41.726400 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 4 17:16:41.752480 jq[2125]: true Sep 4 17:16:41.687310 ntpd[2100]: ---------------------------------------------------- Sep 4 17:16:41.726899 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:16:41.687332 ntpd[2100]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:16:41.765649 coreos-metadata[2092]: Sep 04 17:16:41.754 INFO Fetch successful Sep 4 17:16:41.765649 coreos-metadata[2092]: Sep 04 17:16:41.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:16:41.765649 coreos-metadata[2092]: Sep 04 17:16:41.756 INFO Fetch successful Sep 4 17:16:41.765649 coreos-metadata[2092]: Sep 04 17:16:41.756 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:16:41.765879 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:41.765879 ntpd[2100]: 4 Sep 17:16:41 ntpd[2100]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:41.687353 ntpd[2100]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:16:41.687373 ntpd[2100]: corporation. Support and training for ntp-4 are Sep 4 17:16:41.778906 coreos-metadata[2092]: Sep 04 17:16:41.772 INFO Fetch failed with 404: resource not found Sep 4 17:16:41.778906 coreos-metadata[2092]: Sep 04 17:16:41.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:16:41.687395 ntpd[2100]: available at https://www.nwtime.org/support Sep 4 17:16:41.687415 ntpd[2100]: ---------------------------------------------------- Sep 4 17:16:41.697308 ntpd[2100]: proto: precision = 0.108 usec (-23) Sep 4 17:16:41.697767 ntpd[2100]: basedate set to 2024-08-23 Sep 4 17:16:41.697793 ntpd[2100]: gps base set to 2024-08-25 (week 2329) Sep 4 17:16:41.706543 ntpd[2100]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:16:41.706616 ntpd[2100]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:16:41.706870 ntpd[2100]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:16:41.706933 ntpd[2100]: Listen normally on 3 eth0 172.31.23.29:123 Sep 4 17:16:41.707003 ntpd[2100]: Listen normally on 4 lo [::1]:123 Sep 4 17:16:41.707074 ntpd[2100]: Listen normally on 5 eth0 [fe80::420:45ff:fe58:7099%2]:123 Sep 4 17:16:41.709226 ntpd[2100]: Listening on routing socket on fd #22 for interface updates Sep 4 17:16:41.761478 ntpd[2100]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:41.761533 ntpd[2100]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.781 INFO Fetch successful Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.781 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.795 INFO Fetch successful Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.795 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.800 INFO Fetch successful Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.802 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.809 INFO Fetch successful Sep 4 17:16:41.829459 coreos-metadata[2092]: Sep 04 17:16:41.811 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:16:41.829459 
coreos-metadata[2092]: Sep 04 17:16:41.815 INFO Fetch successful Sep 4 17:16:41.826755 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:16:41.783672 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 4 17:16:41.826975 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:16:41.827035 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:16:41.835799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:16:41.835837 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:16:41.857663 (ntainerd)[2154]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:16:41.876357 tar[2137]: linux-arm64/helm Sep 4 17:16:41.885165 jq[2148]: true Sep 4 17:16:41.911399 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 4 17:16:41.919339 update_engine[2123]: I0904 17:16:41.919195 2123 main.cc:92] Flatcar Update Engine starting Sep 4 17:16:41.932424 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:16:41.934001 update_engine[2123]: I0904 17:16:41.933516 2123 update_check_scheduler.cc:74] Next update check in 4m2s Sep 4 17:16:41.941679 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:16:41.949277 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 4 17:16:42.006231 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:16:42.053164 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:16:42.065998 systemd-logind[2121]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:16:42.068699 systemd-logind[2121]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:16:42.070671 extend-filesystems[2133]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:16:42.070671 extend-filesystems[2133]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:16:42.070671 extend-filesystems[2133]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:16:42.107517 extend-filesystems[2095]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:16:42.078075 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Sep 4 17:16:42.087926 systemd-logind[2121]: New seat seat0. Sep 4 17:16:42.117034 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:16:42.117617 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:16:42.149044 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:16:42.159741 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:16:42.178710 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:16:42.259465 bash[2211]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:42.265581 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:16:42.307864 systemd[1]: Starting sshkeys.service... 
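The extend-filesystems step grew the ext4 filesystem on nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 5.7 GiB, so the root filesystem now fills the partition. The conversion is simple block arithmetic:

# Worked conversion of the resize2fs figures from the log (4 KiB blocks -> GiB).
BLOCK_SIZE = 4096
before_blocks, after_blocks = 553_472, 1_489_915

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before: {gib(before_blocks):.2f} GiB")   # ~2.11 GiB
print(f"after:  {gib(after_blocks):.2f} GiB")    # ~5.68 GiB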
Sep 4 17:16:42.400173 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:16:42.418080 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 4 17:16:42.443528 amazon-ssm-agent[2189]: Initializing new seelog logger Sep 4 17:16:42.452902 amazon-ssm-agent[2189]: New Seelog Logger Creation Complete Sep 4 17:16:42.452902 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.452902 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 processing appconfig overrides Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 processing appconfig overrides Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 processing appconfig overrides Sep 4 17:16:42.462989 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO Proxy environment variables: Sep 4 17:16:42.479445 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.479445 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Sep 4 17:16:42.479445 amazon-ssm-agent[2189]: 2024/09/04 17:16:42 processing appconfig overrides Sep 4 17:16:42.500146 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (2225) Sep 4 17:16:42.562235 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO https_proxy: Sep 4 17:16:42.612579 containerd[2154]: time="2024-09-04T17:16:42.612446125Z" level=info msg="starting containerd" revision=8ccfc03e4e2b73c22899202ae09d0caf906d3863 version=v1.7.20 Sep 4 17:16:42.664437 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO http_proxy: Sep 4 17:16:42.685219 containerd[2154]: time="2024-09-04T17:16:42.684858986Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690367730Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690437630Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690472310Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690778262Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690813218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690930326Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.690996410Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.691399910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.691439126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.691474970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692153 containerd[2154]: time="2024-09-04T17:16:42.691499546Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692706 containerd[2154]: time="2024-09-04T17:16:42.691671218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.692706 containerd[2154]: time="2024-09-04T17:16:42.692073734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:16:42.696954 containerd[2154]: time="2024-09-04T17:16:42.696361946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:16:42.696954 containerd[2154]: time="2024-09-04T17:16:42.696416186Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:16:42.696954 containerd[2154]: time="2024-09-04T17:16:42.696624410Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 4 17:16:42.696954 containerd[2154]: time="2024-09-04T17:16:42.696720266Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:16:42.721020 containerd[2154]: time="2024-09-04T17:16:42.720942206Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:16:42.729146 containerd[2154]: time="2024-09-04T17:16:42.726258266Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:16:42.729146 containerd[2154]: time="2024-09-04T17:16:42.726324494Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:16:42.729146 containerd[2154]: time="2024-09-04T17:16:42.726365966Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Sep 4 17:16:42.729146 containerd[2154]: time="2024-09-04T17:16:42.726406238Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 4 17:16:42.729146 containerd[2154]: time="2024-09-04T17:16:42.726669650Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:16:42.731827 containerd[2154]: time="2024-09-04T17:16:42.731770958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737583074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737670026Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737701910Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737734910Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737766062Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737795198Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737830190Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737862254Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737893358Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737928662Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737957258Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.737998130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.738029450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.738680 containerd[2154]: time="2024-09-04T17:16:42.738059474Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.739508 containerd[2154]: time="2024-09-04T17:16:42.738092690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745181642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745264082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745300790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745352810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745385558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745444790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745477802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745511138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745542338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745588778Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745638434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745667678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745694546Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:16:42.748206 containerd[2154]: time="2024-09-04T17:16:42.745810142Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745846682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745877726Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745906502Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745934750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745966370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.745991102Z" level=info msg="NRI interface is disabled by configuration." 
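Several snapshotters are skipped earlier in this plugin-load sequence because /var/lib/containerd sits on ext4 (btrfs and zfs need their own filesystems, devmapper is not configured), so containerd falls back to overlayfs. A small sketch, assuming Python 3 and a Linux /proc/self/mounts, that reproduces the same check by reporting which filesystem backs a containerd path:

```python
# Report which mounted filesystem backs a path, mirroring containerd's
# "path ... (ext4) must be a btrfs filesystem" skip message above.
import os

def fs_type(path):
    path = os.path.realpath(path)
    best = ("/", "unknown")
    with open("/proc/self/mounts") as mounts:
        for line in mounts:
            fields = line.split()
            if len(fields) < 3:
                continue
            mount_point, fstype = fields[1], fields[2]
            prefix = mount_point if mount_point.endswith("/") else mount_point + "/"
            # Longest mount point that is a prefix of the path wins.
            if (path == mount_point or (path + "/").startswith(prefix)) \
                    and len(mount_point) >= len(best[0]):
                best = (mount_point, fstype)
    return best

if __name__ == "__main__":
    target = "/var/lib/containerd"
    mount, fstype = fs_type(target)
    print(f"{target} is backed by {mount} ({fstype})")
    print("btrfs snapshotter usable:", fstype == "btrfs")
```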
Sep 4 17:16:42.748866 containerd[2154]: time="2024-09-04T17:16:42.746016986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 4 17:16:42.749211 containerd[2154]: time="2024-09-04T17:16:42.746628950Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 4 17:16:42.749211 containerd[2154]: time="2024-09-04T17:16:42.746745206Z" level=info msg="Connect containerd service" Sep 4 17:16:42.749211 containerd[2154]: time="2024-09-04T17:16:42.746836454Z" level=info msg="using legacy CRI server" Sep 4 17:16:42.749211 containerd[2154]: time="2024-09-04T17:16:42.746856890Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 4 17:16:42.749211 containerd[2154]: time="2024-09-04T17:16:42.747020666Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 4 17:16:42.760523 containerd[2154]: time="2024-09-04T17:16:42.754856642Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni 
config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:16:42.762820 containerd[2154]: time="2024-09-04T17:16:42.762519122Z" level=info msg="Start subscribing containerd event" Sep 4 17:16:42.763596 coreos-metadata[2231]: Sep 04 17:16:42.763 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:16:42.767849 containerd[2154]: time="2024-09-04T17:16:42.764207378Z" level=info msg="Start recovering state" Sep 4 17:16:42.767849 containerd[2154]: time="2024-09-04T17:16:42.767289866Z" level=info msg="Start event monitor" Sep 4 17:16:42.767849 containerd[2154]: time="2024-09-04T17:16:42.767543726Z" level=info msg="Start snapshots syncer" Sep 4 17:16:42.770264 coreos-metadata[2231]: Sep 04 17:16:42.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:16:42.770416 coreos-metadata[2231]: Sep 04 17:16:42.770 INFO Fetch successful Sep 4 17:16:42.771227 containerd[2154]: time="2024-09-04T17:16:42.770680430Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 4 17:16:42.771227 containerd[2154]: time="2024-09-04T17:16:42.770857094Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 4 17:16:42.771654 containerd[2154]: time="2024-09-04T17:16:42.771483326Z" level=info msg="Start cni network conf syncer for default" Sep 4 17:16:42.771654 containerd[2154]: time="2024-09-04T17:16:42.771526562Z" level=info msg="Start streaming server" Sep 4 17:16:42.773406 systemd[1]: Started containerd.service - containerd container runtime. Sep 4 17:16:42.774103 containerd[2154]: time="2024-09-04T17:16:42.773254670Z" level=info msg="containerd successfully booted in 0.162869s" Sep 4 17:16:42.784071 coreos-metadata[2231]: Sep 04 17:16:42.779 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:16:42.784071 coreos-metadata[2231]: Sep 04 17:16:42.782 INFO Fetch successful Sep 4 17:16:42.784255 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO no_proxy: Sep 4 17:16:42.787032 unknown[2231]: wrote ssh authorized keys file for user: core Sep 4 17:16:42.885989 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO Checking if agent identity type OnPrem can be assumed Sep 4 17:16:42.906513 locksmithd[2173]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:16:42.929306 update-ssh-keys[2285]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:16:42.948564 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 4 17:16:42.970506 systemd[1]: Finished sshkeys.service. Sep 4 17:16:42.987158 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO Checking if agent identity type EC2 can be assumed Sep 4 17:16:43.012487 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:16:43.012775 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:16:43.018809 dbus-daemon[2093]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2171 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:16:43.030627 systemd[1]: Starting polkit.service - Authorization Manager... 
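The "failed to load cni during init" error above is expected at this point in the boot: /etc/cni/net.d is still empty, so the CRI plugin has no pod network until a CNI add-on installs its configuration. Purely as an illustration of the file shape containerd is looking for (the network name, file name, and subnet below are made-up placeholders; a real cluster gets this file from its CNI provider), such a conflist could be generated like this:

```python
# Illustrative only: write a minimal CNI .conflist of the kind containerd's CRI
# plugin scans /etc/cni/net.d for. In practice the network add-on installs this.
import json
import pathlib

conflist = {
    "cniVersion": "0.4.0",
    "name": "demo-net",                                   # placeholder network name
    "plugins": [
        {
            "type": "bridge",                             # requires the standard CNI bridge plugin binary
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]], # placeholder pod subnet
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

target = pathlib.Path("/etc/cni/net.d/10-demo.conflist")  # placeholder file name
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(json.dumps(conflist, indent=2) + "\n")
print("wrote", target)
```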
Sep 4 17:16:43.085715 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO Agent will take identity from EC2 Sep 4 17:16:43.135675 polkitd[2318]: Started polkitd version 121 Sep 4 17:16:43.173237 polkitd[2318]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:16:43.173511 polkitd[2318]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:16:43.190157 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:43.192054 polkitd[2318]: Finished loading, compiling and executing 2 rules Sep 4 17:16:43.210766 dbus-daemon[2093]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:16:43.211020 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:16:43.218046 polkitd[2318]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:16:43.279518 systemd-hostnamed[2171]: Hostname set to (transient) Sep 4 17:16:43.279686 systemd-resolved[2021]: System hostname changed to 'ip-172-31-23-29'. Sep 4 17:16:43.293520 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:43.388891 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Sep 4 17:16:43.488351 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Sep 4 17:16:43.590804 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Sep 4 17:16:43.689968 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] Starting Core Agent Sep 4 17:16:43.700093 sshd_keygen[2155]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 4 17:16:43.791578 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration Sep 4 17:16:43.821910 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 4 17:16:43.845271 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 4 17:16:43.891560 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [Registrar] Starting registrar module Sep 4 17:16:43.896744 systemd[1]: issuegen.service: Deactivated successfully. Sep 4 17:16:43.901374 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 4 17:16:43.930679 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 4 17:16:43.987437 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 4 17:16:43.996560 amazon-ssm-agent[2189]: 2024-09-04 17:16:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Sep 4 17:16:43.999739 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 4 17:16:44.015619 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 4 17:16:44.020595 systemd[1]: Reached target getty.target - Login Prompts. Sep 4 17:16:44.197221 tar[2137]: linux-arm64/LICENSE Sep 4 17:16:44.197221 tar[2137]: linux-arm64/README.md Sep 4 17:16:44.227558 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 4 17:16:44.294436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:16:44.300813 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 4 17:16:44.307685 systemd[1]: Startup finished in 11.355s (kernel) + 9.345s (userspace) = 20.700s. 
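The transient hostname set above follows the usual EC2 private-DNS pattern for this instance's address (172.31.23.29 becomes ip-172-31-23-29). As a side note, and only as a sketch (the authoritative value is whatever IMDS returns for the hostname keys, and the region below is a placeholder since the log does not show it), the mapping is just a string transform:

```python
# Sketch of the EC2 private-DNS naming convention seen above; the real hostname
# should always come from the instance metadata, not be recomputed like this.
import ipaddress

def ec2_style_hostname(private_ip: str, region: str = "us-west-2") -> str:
    ip = ipaddress.IPv4Address(private_ip)        # validates the address
    label = "ip-" + str(ip).replace(".", "-")
    # In most regions the FQDN is <label>.<region>.compute.internal
    # (us-east-1 historically uses ec2.internal instead).
    return f"{label}.{region}.compute.internal"

print(ec2_style_hostname("172.31.23.29"))
```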
Sep 4 17:16:44.318540 (kubelet)[2388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:16:45.088100 kubelet[2388]: E0904 17:16:45.087997 2388 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:16:45.093532 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:16:45.094573 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:16:46.086628 amazon-ssm-agent[2189]: 2024-09-04 17:16:46 INFO [EC2Identity] EC2 registration was successful. Sep 4 17:16:46.122686 amazon-ssm-agent[2189]: 2024-09-04 17:16:46 INFO [CredentialRefresher] credentialRefresher has started Sep 4 17:16:46.122889 amazon-ssm-agent[2189]: 2024-09-04 17:16:46 INFO [CredentialRefresher] Starting credentials refresher loop Sep 4 17:16:46.122889 amazon-ssm-agent[2189]: 2024-09-04 17:16:46 INFO EC2RoleProvider Successfully connected with instance profile role credentials Sep 4 17:16:46.187559 amazon-ssm-agent[2189]: 2024-09-04 17:16:46 INFO [CredentialRefresher] Next credential rotation will be in 32.48329717763333 minutes Sep 4 17:16:47.150843 amazon-ssm-agent[2189]: 2024-09-04 17:16:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Sep 4 17:16:47.250865 amazon-ssm-agent[2189]: 2024-09-04 17:16:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2402) started Sep 4 17:16:47.352078 amazon-ssm-agent[2189]: 2024-09-04 17:16:47 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Sep 4 17:16:48.189275 systemd-resolved[2021]: Clock change detected. Flushing caches. Sep 4 17:16:48.633171 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:16:48.643665 systemd[1]: Started sshd@0-172.31.23.29:22-139.178.89.65:54964.service - OpenSSH per-connection server daemon (139.178.89.65:54964). Sep 4 17:16:48.817781 sshd[2411]: Accepted publickey for core from 139.178.89.65 port 54964 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:48.819589 sshd[2411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:48.834151 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 4 17:16:48.842720 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 4 17:16:48.847395 systemd-logind[2121]: New session 1 of user core. Sep 4 17:16:48.886644 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 4 17:16:48.903212 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 4 17:16:48.910377 (systemd)[2417]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:16:49.120766 systemd[2417]: Queued start job for default target default.target. Sep 4 17:16:49.122558 systemd[2417]: Created slice app.slice - User Application Slice. Sep 4 17:16:49.123022 systemd[2417]: Reached target paths.target - Paths. Sep 4 17:16:49.123060 systemd[2417]: Reached target timers.target - Timers. 
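The kubelet exits above (and again on every scheduled restart later in this log) because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-based node that file is written during `kubeadm init` or `kubeadm join`, so the failures are expected until the node is bootstrapped. Purely as an illustration of the file's shape (the values below are generic placeholders, not this cluster's real settings), a minimal KubeletConfiguration could be rendered like this:

```python
# Illustration of the KubeletConfiguration file the kubelet reports missing above.
# On a real node this is generated by kubeadm; the values here are placeholders.
import pathlib
import textwrap

config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs          # must match the container runtime's cgroup driver
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
    authorization:
      mode: Webhook
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
print("wrote", path)
```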
Sep 4 17:16:49.129439 systemd[2417]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 4 17:16:49.155519 systemd[2417]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 4 17:16:49.155638 systemd[2417]: Reached target sockets.target - Sockets. Sep 4 17:16:49.155671 systemd[2417]: Reached target basic.target - Basic System. Sep 4 17:16:49.155767 systemd[2417]: Reached target default.target - Main User Target. Sep 4 17:16:49.155829 systemd[2417]: Startup finished in 233ms. Sep 4 17:16:49.157363 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 4 17:16:49.166082 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 4 17:16:49.313462 systemd[1]: Started sshd@1-172.31.23.29:22-139.178.89.65:54972.service - OpenSSH per-connection server daemon (139.178.89.65:54972). Sep 4 17:16:49.493809 sshd[2430]: Accepted publickey for core from 139.178.89.65 port 54972 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:49.496670 sshd[2430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:49.504697 systemd-logind[2121]: New session 2 of user core. Sep 4 17:16:49.518109 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 4 17:16:49.648581 sshd[2430]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:49.653463 systemd[1]: sshd@1-172.31.23.29:22-139.178.89.65:54972.service: Deactivated successfully. Sep 4 17:16:49.660872 systemd-logind[2121]: Session 2 logged out. Waiting for processes to exit. Sep 4 17:16:49.662226 systemd[1]: session-2.scope: Deactivated successfully. Sep 4 17:16:49.664150 systemd-logind[2121]: Removed session 2. Sep 4 17:16:49.683701 systemd[1]: Started sshd@2-172.31.23.29:22-139.178.89.65:54976.service - OpenSSH per-connection server daemon (139.178.89.65:54976). Sep 4 17:16:49.856139 sshd[2438]: Accepted publickey for core from 139.178.89.65 port 54976 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:49.859805 sshd[2438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:49.867725 systemd-logind[2121]: New session 3 of user core. Sep 4 17:16:49.879851 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 4 17:16:50.005145 sshd[2438]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:50.011146 systemd-logind[2121]: Session 3 logged out. Waiting for processes to exit. Sep 4 17:16:50.015003 systemd[1]: sshd@2-172.31.23.29:22-139.178.89.65:54976.service: Deactivated successfully. Sep 4 17:16:50.020148 systemd[1]: session-3.scope: Deactivated successfully. Sep 4 17:16:50.021987 systemd-logind[2121]: Removed session 3. Sep 4 17:16:50.034741 systemd[1]: Started sshd@3-172.31.23.29:22-139.178.89.65:54982.service - OpenSSH per-connection server daemon (139.178.89.65:54982). Sep 4 17:16:50.206339 sshd[2446]: Accepted publickey for core from 139.178.89.65 port 54982 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:50.209051 sshd[2446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:50.217978 systemd-logind[2121]: New session 4 of user core. Sep 4 17:16:50.224849 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 4 17:16:50.351587 sshd[2446]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:50.358793 systemd[1]: sshd@3-172.31.23.29:22-139.178.89.65:54982.service: Deactivated successfully. Sep 4 17:16:50.360531 systemd-logind[2121]: Session 4 logged out. 
Waiting for processes to exit. Sep 4 17:16:50.364918 systemd[1]: session-4.scope: Deactivated successfully. Sep 4 17:16:50.367092 systemd-logind[2121]: Removed session 4. Sep 4 17:16:50.381716 systemd[1]: Started sshd@4-172.31.23.29:22-139.178.89.65:54984.service - OpenSSH per-connection server daemon (139.178.89.65:54984). Sep 4 17:16:50.558984 sshd[2454]: Accepted publickey for core from 139.178.89.65 port 54984 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:50.562144 sshd[2454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:50.569567 systemd-logind[2121]: New session 5 of user core. Sep 4 17:16:50.579948 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 4 17:16:50.704901 sudo[2458]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 4 17:16:50.705578 sudo[2458]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:50.720807 sudo[2458]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:50.744652 sshd[2454]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:50.749812 systemd[1]: sshd@4-172.31.23.29:22-139.178.89.65:54984.service: Deactivated successfully. Sep 4 17:16:50.757084 systemd[1]: session-5.scope: Deactivated successfully. Sep 4 17:16:50.759207 systemd-logind[2121]: Session 5 logged out. Waiting for processes to exit. Sep 4 17:16:50.761382 systemd-logind[2121]: Removed session 5. Sep 4 17:16:50.773760 systemd[1]: Started sshd@5-172.31.23.29:22-139.178.89.65:54986.service - OpenSSH per-connection server daemon (139.178.89.65:54986). Sep 4 17:16:50.954497 sshd[2463]: Accepted publickey for core from 139.178.89.65 port 54986 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:50.957135 sshd[2463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:50.965360 systemd-logind[2121]: New session 6 of user core. Sep 4 17:16:50.972728 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 4 17:16:51.080351 sudo[2468]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 4 17:16:51.081506 sudo[2468]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:51.088156 sudo[2468]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:51.098074 sudo[2467]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 4 17:16:51.098738 sudo[2467]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:51.123686 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:51.127724 auditctl[2471]: No rules Sep 4 17:16:51.128570 systemd[1]: audit-rules.service: Deactivated successfully. Sep 4 17:16:51.129078 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:51.144894 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 4 17:16:51.185815 augenrules[2490]: No rules Sep 4 17:16:51.189053 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 4 17:16:51.193574 sudo[2467]: pam_unix(sudo:session): session closed for user root Sep 4 17:16:51.218537 sshd[2463]: pam_unix(sshd:session): session closed for user core Sep 4 17:16:51.224616 systemd[1]: sshd@5-172.31.23.29:22-139.178.89.65:54986.service: Deactivated successfully. 
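The sudo entries above follow a fixed "user : PWD=... ; USER=... ; COMMAND=..." layout, which makes them straightforward to audit mechanically. A small sketch, assuming journal text formatted as shown in this log, that pulls out who ran what:

```python
# Extract (invoking user, target user, command) from sudo journal lines of the
# form seen above: "sudo[2458]: core : PWD=/home/core ; USER=root ; COMMAND=..."
import re

SUDO_RE = re.compile(
    r"sudo\[\d+\]:\s+(?P<caller>\S+)\s*:\s*"
    r"PWD=(?P<pwd>\S+)\s*;\s*USER=(?P<user>\S+)\s*;\s*COMMAND=(?P<cmd>.+)$"
)

sample = [
    "Sep 4 17:16:50.704901 sudo[2458]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1",
    "Sep 4 17:16:51.098074 sudo[2467]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules",
]

for line in sample:
    match = SUDO_RE.search(line)
    if match:
        print(f"{match['caller']} ran {match['cmd']!r} as {match['user']}")
```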
Sep 4 17:16:51.228965 systemd-logind[2121]: Session 6 logged out. Waiting for processes to exit. Sep 4 17:16:51.233226 systemd[1]: session-6.scope: Deactivated successfully. Sep 4 17:16:51.234630 systemd-logind[2121]: Removed session 6. Sep 4 17:16:51.245772 systemd[1]: Started sshd@6-172.31.23.29:22-139.178.89.65:54992.service - OpenSSH per-connection server daemon (139.178.89.65:54992). Sep 4 17:16:51.421173 sshd[2499]: Accepted publickey for core from 139.178.89.65 port 54992 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:16:51.423759 sshd[2499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:16:51.432587 systemd-logind[2121]: New session 7 of user core. Sep 4 17:16:51.439858 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 4 17:16:51.543663 sudo[2503]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 4 17:16:51.544883 sudo[2503]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 4 17:16:51.753127 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 4 17:16:51.766931 (dockerd)[2512]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 4 17:16:52.189609 dockerd[2512]: time="2024-09-04T17:16:52.189513919Z" level=info msg="Starting up" Sep 4 17:16:53.772639 dockerd[2512]: time="2024-09-04T17:16:53.772561823Z" level=info msg="Loading containers: start." Sep 4 17:16:53.962281 kernel: Initializing XFRM netlink socket Sep 4 17:16:53.995711 (udev-worker)[2577]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:16:54.077417 systemd-networkd[1692]: docker0: Link UP Sep 4 17:16:54.103706 dockerd[2512]: time="2024-09-04T17:16:54.103656225Z" level=info msg="Loading containers: done." Sep 4 17:16:54.128992 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2738694878-merged.mount: Deactivated successfully. Sep 4 17:16:54.131317 dockerd[2512]: time="2024-09-04T17:16:54.130222749Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 4 17:16:54.131475 dockerd[2512]: time="2024-09-04T17:16:54.131433357Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 4 17:16:54.131692 dockerd[2512]: time="2024-09-04T17:16:54.131643273Z" level=info msg="Daemon has completed initialization" Sep 4 17:16:54.198441 dockerd[2512]: time="2024-09-04T17:16:54.197571753Z" level=info msg="API listen on /run/docker.sock" Sep 4 17:16:54.197866 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 4 17:16:54.642798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 4 17:16:54.652276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:16:55.608545 containerd[2154]: time="2024-09-04T17:16:55.608124072Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\"" Sep 4 17:16:55.858567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
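Once dockerd reports "API listen on /run/docker.sock" above, the daemon can be exercised over that UNIX socket with plain HTTP. A minimal sketch, assuming Python 3 and the standard Engine API /version endpoint; for the daemon in this log it should report the 26.1.0 version shown above:

```python
# Query the Docker Engine API over the UNIX socket the daemon just announced.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that speaks HTTP over a UNIX domain socket."""

    def __init__(self, socket_path, timeout=10):
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
body = json.loads(conn.getresponse().read())
print("docker", body.get("Version"), "api", body.get("ApiVersion"))
```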
Sep 4 17:16:55.872844 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:16:55.967956 kubelet[2667]: E0904 17:16:55.967887 2667 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:16:55.977201 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:16:55.977841 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:16:56.345722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141236759.mount: Deactivated successfully. Sep 4 17:16:58.062297 containerd[2154]: time="2024-09-04T17:16:58.062127649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:58.064451 containerd[2154]: time="2024-09-04T17:16:58.064378513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.13: active requests=0, bytes read=31599022" Sep 4 17:16:58.066548 containerd[2154]: time="2024-09-04T17:16:58.066456841Z" level=info msg="ImageCreate event name:\"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:58.072322 containerd[2154]: time="2024-09-04T17:16:58.072222277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:58.074756 containerd[2154]: time="2024-09-04T17:16:58.074473897Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.13\" with image id \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7d2c9256ad576a0b3745b749efe7f4fa8b276ec7ef448fc0f45794ca78eb8625\", size \"31595822\" in 2.466291481s" Sep 4 17:16:58.074756 containerd[2154]: time="2024-09-04T17:16:58.074533225Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.13\" returns image reference \"sha256:a339bb1c702d4062f524851aa528a3feed19ee9f717d14911cc30771e13491ea\"" Sep 4 17:16:58.111511 containerd[2154]: time="2024-09-04T17:16:58.111395965Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\"" Sep 4 17:16:59.835171 containerd[2154]: time="2024-09-04T17:16:59.834376553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.836482 containerd[2154]: time="2024-09-04T17:16:59.836413949Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.13: active requests=0, bytes read=29019496" Sep 4 17:16:59.837985 containerd[2154]: time="2024-09-04T17:16:59.837892829Z" level=info msg="ImageCreate event name:\"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.843737 containerd[2154]: time="2024-09-04T17:16:59.843650285Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:16:59.846333 containerd[2154]: time="2024-09-04T17:16:59.846082889Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.13\" with image id \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e7b44c1741fe1802d159ffdbd0d1f78d48a4185d7fb1cdf8a112fbb50696f7e1\", size \"30506763\" in 1.734528584s" Sep 4 17:16:59.846333 containerd[2154]: time="2024-09-04T17:16:59.846139229Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.13\" returns image reference \"sha256:1e81172b17d2d45f9e0ff1ac37a042d34a1be80722b8c8bcab67d9250065fa6d\"" Sep 4 17:16:59.884617 containerd[2154]: time="2024-09-04T17:16:59.884541150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\"" Sep 4 17:17:00.947278 containerd[2154]: time="2024-09-04T17:17:00.945292303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:00.949037 containerd[2154]: time="2024-09-04T17:17:00.948988615Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.13: active requests=0, bytes read=15533681" Sep 4 17:17:00.950877 containerd[2154]: time="2024-09-04T17:17:00.950831755Z" level=info msg="ImageCreate event name:\"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:00.956434 containerd[2154]: time="2024-09-04T17:17:00.956367559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:00.958865 containerd[2154]: time="2024-09-04T17:17:00.958803187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.13\" with image id \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:efeb791718f4b9c62bd683f5b403da520f3651cb36ad9f800e0f98b595beafa4\", size \"17020966\" in 1.074100481s" Sep 4 17:17:00.959010 containerd[2154]: time="2024-09-04T17:17:00.958861123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.13\" returns image reference \"sha256:42bbd5a6799fefc25b4b3269d8ad07628893c29d7b26d8fab57f6785b976ec7a\"" Sep 4 17:17:00.995147 containerd[2154]: time="2024-09-04T17:17:00.995072743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\"" Sep 4 17:17:02.230023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794486797.mount: Deactivated successfully. 
Sep 4 17:17:02.755128 containerd[2154]: time="2024-09-04T17:17:02.755047412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.757179 containerd[2154]: time="2024-09-04T17:17:02.757120004Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.13: active requests=0, bytes read=24977930" Sep 4 17:17:02.759112 containerd[2154]: time="2024-09-04T17:17:02.759010592Z" level=info msg="ImageCreate event name:\"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.763045 containerd[2154]: time="2024-09-04T17:17:02.762934436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:02.764691 containerd[2154]: time="2024-09-04T17:17:02.764336684Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.13\" with image id \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\", repo tag \"registry.k8s.io/kube-proxy:v1.28.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:537633f399f87ce85d44fc8471ece97a83632198f99b3f7e08770beca95e9fa1\", size \"24976949\" in 1.769198325s" Sep 4 17:17:02.764691 containerd[2154]: time="2024-09-04T17:17:02.764398004Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.13\" returns image reference \"sha256:28cc84306a40b12ede33c1df2d3219e0061b4d0e5309eb874034dd77e9154393\"" Sep 4 17:17:02.802450 containerd[2154]: time="2024-09-04T17:17:02.802339220Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Sep 4 17:17:03.348162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716777848.mount: Deactivated successfully. 
Sep 4 17:17:03.528283 containerd[2154]: time="2024-09-04T17:17:03.527508944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:03.529273 containerd[2154]: time="2024-09-04T17:17:03.529195436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Sep 4 17:17:03.532423 containerd[2154]: time="2024-09-04T17:17:03.532353200Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:03.539366 containerd[2154]: time="2024-09-04T17:17:03.537315404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:03.539366 containerd[2154]: time="2024-09-04T17:17:03.538916000Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 736.520392ms" Sep 4 17:17:03.539366 containerd[2154]: time="2024-09-04T17:17:03.538962164Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:17:03.576823 containerd[2154]: time="2024-09-04T17:17:03.576768632Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:17:04.181585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1739516518.mount: Deactivated successfully. Sep 4 17:17:06.142829 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:17:06.149632 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:08.152598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:08.168866 (kubelet)[2827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:17:08.279752 kubelet[2827]: E0904 17:17:08.279614 2827 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:17:08.285183 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:17:08.285607 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 4 17:17:09.630365 containerd[2154]: time="2024-09-04T17:17:09.630275126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.632568 containerd[2154]: time="2024-09-04T17:17:09.632513210Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:17:09.633863 containerd[2154]: time="2024-09-04T17:17:09.633779354Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.640041 containerd[2154]: time="2024-09-04T17:17:09.639979358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:09.643803 containerd[2154]: time="2024-09-04T17:17:09.643585742Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 6.066432426s" Sep 4 17:17:09.643803 containerd[2154]: time="2024-09-04T17:17:09.643660910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:17:09.683690 containerd[2154]: time="2024-09-04T17:17:09.683636666Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Sep 4 17:17:10.228607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3472333123.mount: Deactivated successfully. 
Sep 4 17:17:10.887333 containerd[2154]: time="2024-09-04T17:17:10.886601476Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:10.888279 containerd[2154]: time="2024-09-04T17:17:10.888204928Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Sep 4 17:17:10.890123 containerd[2154]: time="2024-09-04T17:17:10.890052412Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:10.894512 containerd[2154]: time="2024-09-04T17:17:10.894423184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:10.896321 containerd[2154]: time="2024-09-04T17:17:10.896096020Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.212397938s" Sep 4 17:17:10.896321 containerd[2154]: time="2024-09-04T17:17:10.896149432Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Sep 4 17:17:12.798006 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:17:16.980832 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:16.995690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:17.046378 systemd[1]: Reloading requested from client PID 2925 ('systemctl') (unit session-7.scope)... Sep 4 17:17:17.046403 systemd[1]: Reloading... Sep 4 17:17:17.195271 zram_generator::config[2963]: No configuration found. Sep 4 17:17:17.463157 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:17.623480 systemd[1]: Reloading finished in 576 ms. Sep 4 17:17:17.708203 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 4 17:17:17.708724 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 4 17:17:17.709700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:17.721431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:18.674601 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:18.681822 (kubelet)[3033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:18.762394 kubelet[3033]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:18.762394 kubelet[3033]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Sep 4 17:17:18.762394 kubelet[3033]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:18.762965 kubelet[3033]: I0904 17:17:18.762483 3033 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:19.635805 kubelet[3033]: I0904 17:17:19.635767 3033 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:17:19.637259 kubelet[3033]: I0904 17:17:19.635986 3033 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:19.637259 kubelet[3033]: I0904 17:17:19.636335 3033 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:17:19.664908 kubelet[3033]: I0904 17:17:19.664867 3033 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:19.667533 kubelet[3033]: E0904 17:17:19.667499 3033 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.687078 kubelet[3033]: W0904 17:17:19.687041 3033 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:17:19.688570 kubelet[3033]: I0904 17:17:19.688541 3033 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:17:19.689365 kubelet[3033]: I0904 17:17:19.689334 3033 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:19.689769 kubelet[3033]: I0904 17:17:19.689743 3033 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:19.690008 kubelet[3033]: I0904 17:17:19.689987 3033 topology_manager.go:138] 
"Creating topology manager with none policy" Sep 4 17:17:19.690110 kubelet[3033]: I0904 17:17:19.690092 3033 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:19.690455 kubelet[3033]: I0904 17:17:19.690435 3033 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:19.693724 kubelet[3033]: I0904 17:17:19.693695 3033 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:17:19.693880 kubelet[3033]: I0904 17:17:19.693861 3033 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:19.694037 kubelet[3033]: I0904 17:17:19.694018 3033 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:17:19.694161 kubelet[3033]: I0904 17:17:19.694141 3033 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:19.696624 kubelet[3033]: W0904 17:17:19.696556 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.23.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-29&limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.696879 kubelet[3033]: E0904 17:17:19.696827 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-29&limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.697206 kubelet[3033]: W0904 17:17:19.697162 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.23.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.697464 kubelet[3033]: E0904 17:17:19.697415 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.698663 kubelet[3033]: I0904 17:17:19.698045 3033 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:19.700710 kubelet[3033]: W0904 17:17:19.700673 3033 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:17:19.702186 kubelet[3033]: I0904 17:17:19.702150 3033 server.go:1232] "Started kubelet" Sep 4 17:17:19.705087 kubelet[3033]: I0904 17:17:19.704335 3033 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:19.705716 kubelet[3033]: I0904 17:17:19.705672 3033 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:17:19.709089 kubelet[3033]: I0904 17:17:19.708524 3033 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:17:19.709089 kubelet[3033]: I0904 17:17:19.708914 3033 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:19.710300 kubelet[3033]: E0904 17:17:19.710228 3033 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:17:19.710603 kubelet[3033]: E0904 17:17:19.710580 3033 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:19.711078 kubelet[3033]: I0904 17:17:19.710927 3033 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:19.711531 kubelet[3033]: E0904 17:17:19.711279 3033 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-23-29.17f21a0a83f3da08", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-23-29", UID:"ip-172-31-23-29", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-23-29"}, FirstTimestamp:time.Date(2024, time.September, 4, 17, 17, 19, 702112776, time.Local), LastTimestamp:time.Date(2024, time.September, 4, 17, 17, 19, 702112776, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-23-29"}': 'Post "https://172.31.23.29:6443/api/v1/namespaces/default/events": dial tcp 172.31.23.29:6443: connect: connection refused'(may retry after sleeping) Sep 4 17:17:19.720506 kubelet[3033]: E0904 17:17:19.720267 3033 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-23-29\" not found" Sep 4 17:17:19.720506 kubelet[3033]: I0904 17:17:19.720374 3033 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:19.721321 kubelet[3033]: I0904 17:17:19.720959 3033 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:17:19.721321 kubelet[3033]: I0904 17:17:19.721062 3033 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:19.722883 kubelet[3033]: W0904 17:17:19.722812 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.23.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.723064 kubelet[3033]: E0904 17:17:19.723044 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.723379 kubelet[3033]: E0904 17:17:19.723355 3033 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": dial tcp 172.31.23.29:6443: connect: connection refused" interval="200ms" Sep 4 17:17:19.743275 kubelet[3033]: I0904 17:17:19.742141 3033 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:19.744978 kubelet[3033]: I0904 17:17:19.744269 3033 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 4 17:17:19.744978 kubelet[3033]: I0904 17:17:19.744320 3033 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:19.744978 kubelet[3033]: I0904 17:17:19.744369 3033 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:17:19.744978 kubelet[3033]: E0904 17:17:19.744459 3033 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:19.759645 kubelet[3033]: W0904 17:17:19.759221 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.23.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.762376 kubelet[3033]: E0904 17:17:19.762341 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:19.826828 kubelet[3033]: I0904 17:17:19.826794 3033 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:19.828105 kubelet[3033]: I0904 17:17:19.828064 3033 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:19.829103 kubelet[3033]: I0904 17:17:19.828703 3033 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:19.829103 kubelet[3033]: I0904 17:17:19.828744 3033 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:19.829518 kubelet[3033]: E0904 17:17:19.828179 3033 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.29:6443/api/v1/nodes\": dial tcp 172.31.23.29:6443: connect: connection refused" node="ip-172-31-23-29" Sep 4 17:17:19.833674 kubelet[3033]: I0904 17:17:19.832602 3033 policy_none.go:49] "None policy: Start" Sep 4 17:17:19.834973 kubelet[3033]: I0904 17:17:19.834916 3033 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:17:19.834973 kubelet[3033]: I0904 17:17:19.834969 3033 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:19.845392 kubelet[3033]: E0904 17:17:19.845069 3033 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:17:19.852160 kubelet[3033]: I0904 17:17:19.852106 3033 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:17:19.854298 kubelet[3033]: I0904 17:17:19.852724 3033 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:19.858622 kubelet[3033]: E0904 17:17:19.858487 3033 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-29\" not found" Sep 4 17:17:19.924743 kubelet[3033]: E0904 17:17:19.924691 3033 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": dial tcp 172.31.23.29:6443: connect: connection refused" interval="400ms" Sep 4 17:17:20.032361 kubelet[3033]: I0904 17:17:20.032282 3033 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:20.032785 kubelet[3033]: E0904 17:17:20.032752 3033 kubelet_node_status.go:92] "Unable to 
register node with API server" err="Post \"https://172.31.23.29:6443/api/v1/nodes\": dial tcp 172.31.23.29:6443: connect: connection refused" node="ip-172-31-23-29" Sep 4 17:17:20.046360 kubelet[3033]: I0904 17:17:20.046007 3033 topology_manager.go:215] "Topology Admit Handler" podUID="44c77efcc47763029a2786699c6f19c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-29" Sep 4 17:17:20.048336 kubelet[3033]: I0904 17:17:20.048186 3033 topology_manager.go:215] "Topology Admit Handler" podUID="53f30815a2e4b6fe0bcf3d5845e22795" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.051516 kubelet[3033]: I0904 17:17:20.051479 3033 topology_manager.go:215] "Topology Admit Handler" podUID="51fd2660440bd6d237dbe757c4f89474" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-29" Sep 4 17:17:20.124098 kubelet[3033]: I0904 17:17:20.123986 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.124098 kubelet[3033]: I0904 17:17:20.124057 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.124098 kubelet[3033]: I0904 17:17:20.124111 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.124717 kubelet[3033]: I0904 17:17:20.124157 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fd2660440bd6d237dbe757c4f89474-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-29\" (UID: \"51fd2660440bd6d237dbe757c4f89474\") " pod="kube-system/kube-scheduler-ip-172-31-23-29" Sep 4 17:17:20.124717 kubelet[3033]: I0904 17:17:20.124210 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-ca-certs\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:20.124717 kubelet[3033]: I0904 17:17:20.124298 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:20.124717 kubelet[3033]: I0904 17:17:20.124345 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:20.124717 kubelet[3033]: I0904 17:17:20.124396 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.124947 kubelet[3033]: I0904 17:17:20.124442 3033 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:20.325408 kubelet[3033]: E0904 17:17:20.325275 3033 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": dial tcp 172.31.23.29:6443: connect: connection refused" interval="800ms" Sep 4 17:17:20.361060 containerd[2154]: time="2024-09-04T17:17:20.360696659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-29,Uid:53f30815a2e4b6fe0bcf3d5845e22795,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:20.361060 containerd[2154]: time="2024-09-04T17:17:20.360705311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-29,Uid:44c77efcc47763029a2786699c6f19c7,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:20.366585 containerd[2154]: time="2024-09-04T17:17:20.366534587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-29,Uid:51fd2660440bd6d237dbe757c4f89474,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:20.436538 kubelet[3033]: I0904 17:17:20.436054 3033 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:20.436538 kubelet[3033]: E0904 17:17:20.436504 3033 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.29:6443/api/v1/nodes\": dial tcp 172.31.23.29:6443: connect: connection refused" node="ip-172-31-23-29" Sep 4 17:17:20.690222 kubelet[3033]: W0904 17:17:20.690083 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.23.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:20.690400 kubelet[3033]: E0904 17:17:20.690261 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:20.855112 kubelet[3033]: W0904 17:17:20.855045 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.23.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:20.855112 kubelet[3033]: E0904 
17:17:20.855114 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:20.915153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852101511.mount: Deactivated successfully. Sep 4 17:17:20.927257 containerd[2154]: time="2024-09-04T17:17:20.927175202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:20.928985 containerd[2154]: time="2024-09-04T17:17:20.928931342Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:20.930938 containerd[2154]: time="2024-09-04T17:17:20.930882182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:20.932294 containerd[2154]: time="2024-09-04T17:17:20.932214842Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:17:20.934285 containerd[2154]: time="2024-09-04T17:17:20.934044638Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:20.936811 containerd[2154]: time="2024-09-04T17:17:20.936369986Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:20.936811 containerd[2154]: time="2024-09-04T17:17:20.936670886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:17:20.942439 containerd[2154]: time="2024-09-04T17:17:20.942282266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:17:20.945866 containerd[2154]: time="2024-09-04T17:17:20.945809738Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 578.915943ms" Sep 4 17:17:20.950887 containerd[2154]: time="2024-09-04T17:17:20.950779334Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 589.966767ms" Sep 4 17:17:20.953050 containerd[2154]: time="2024-09-04T17:17:20.952751486Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 591.919683ms" Sep 4 17:17:20.980904 kubelet[3033]: W0904 17:17:20.980828 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.23.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-29&limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:20.981032 kubelet[3033]: E0904 17:17:20.980920 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-29&limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:21.126299 kubelet[3033]: E0904 17:17:21.126231 3033 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": dial tcp 172.31.23.29:6443: connect: connection refused" interval="1.6s" Sep 4 17:17:21.156218 kubelet[3033]: W0904 17:17:21.156091 3033 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.23.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:21.156218 kubelet[3033]: E0904 17:17:21.156196 3033 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.29:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.29:6443: connect: connection refused Sep 4 17:17:21.177983 containerd[2154]: time="2024-09-04T17:17:21.177831407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:21.178177 containerd[2154]: time="2024-09-04T17:17:21.177936071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:21.178177 containerd[2154]: time="2024-09-04T17:17:21.177998783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.178396 containerd[2154]: time="2024-09-04T17:17:21.178183739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.185917 containerd[2154]: time="2024-09-04T17:17:21.184096271Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:21.186072 containerd[2154]: time="2024-09-04T17:17:21.185876963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:21.186072 containerd[2154]: time="2024-09-04T17:17:21.186041171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.186933 containerd[2154]: time="2024-09-04T17:17:21.186833135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.194082 containerd[2154]: time="2024-09-04T17:17:21.193026083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:21.194082 containerd[2154]: time="2024-09-04T17:17:21.193121411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:21.194082 containerd[2154]: time="2024-09-04T17:17:21.193281971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.194082 containerd[2154]: time="2024-09-04T17:17:21.193462643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:21.240339 kubelet[3033]: I0904 17:17:21.239565 3033 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:21.240339 kubelet[3033]: E0904 17:17:21.240055 3033 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.23.29:6443/api/v1/nodes\": dial tcp 172.31.23.29:6443: connect: connection refused" node="ip-172-31-23-29" Sep 4 17:17:21.346781 containerd[2154]: time="2024-09-04T17:17:21.346603488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-29,Uid:44c77efcc47763029a2786699c6f19c7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac93360e294f9e6c1b9c19c04a6e841adeb4b4dbcb03c4a7e16df2118a845693\"" Sep 4 17:17:21.362575 containerd[2154]: time="2024-09-04T17:17:21.362519976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-29,Uid:51fd2660440bd6d237dbe757c4f89474,Namespace:kube-system,Attempt:0,} returns sandbox id \"76e350680e7353fe6c19bd8f67354a3fa498a77d75cfbc8463d72e13c5550a4c\"" Sep 4 17:17:21.368452 containerd[2154]: time="2024-09-04T17:17:21.367679256Z" level=info msg="CreateContainer within sandbox \"ac93360e294f9e6c1b9c19c04a6e841adeb4b4dbcb03c4a7e16df2118a845693\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:17:21.370050 containerd[2154]: time="2024-09-04T17:17:21.369566340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-29,Uid:53f30815a2e4b6fe0bcf3d5845e22795,Namespace:kube-system,Attempt:0,} returns sandbox id \"9352e3ba7f1e8c1adf63dd22188b7c9d0e576079d9fe17111ef30efbcd728120\"" Sep 4 17:17:21.372184 containerd[2154]: time="2024-09-04T17:17:21.371346600Z" level=info msg="CreateContainer within sandbox \"76e350680e7353fe6c19bd8f67354a3fa498a77d75cfbc8463d72e13c5550a4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:17:21.381287 containerd[2154]: time="2024-09-04T17:17:21.379757748Z" level=info msg="CreateContainer within sandbox \"9352e3ba7f1e8c1adf63dd22188b7c9d0e576079d9fe17111ef30efbcd728120\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:17:21.403545 containerd[2154]: time="2024-09-04T17:17:21.403466784Z" level=info msg="CreateContainer within sandbox \"ac93360e294f9e6c1b9c19c04a6e841adeb4b4dbcb03c4a7e16df2118a845693\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b5639c1dd345a5fcff6da024733e18dc130a33e6c445765aaf93eb358bddd795\"" Sep 4 17:17:21.405962 containerd[2154]: time="2024-09-04T17:17:21.405904668Z" level=info msg="StartContainer for 
\"b5639c1dd345a5fcff6da024733e18dc130a33e6c445765aaf93eb358bddd795\"" Sep 4 17:17:21.412715 containerd[2154]: time="2024-09-04T17:17:21.412623265Z" level=info msg="CreateContainer within sandbox \"76e350680e7353fe6c19bd8f67354a3fa498a77d75cfbc8463d72e13c5550a4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8\"" Sep 4 17:17:21.417902 containerd[2154]: time="2024-09-04T17:17:21.417782641Z" level=info msg="StartContainer for \"d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8\"" Sep 4 17:17:21.430224 containerd[2154]: time="2024-09-04T17:17:21.430132609Z" level=info msg="CreateContainer within sandbox \"9352e3ba7f1e8c1adf63dd22188b7c9d0e576079d9fe17111ef30efbcd728120\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556\"" Sep 4 17:17:21.432277 containerd[2154]: time="2024-09-04T17:17:21.431385865Z" level=info msg="StartContainer for \"278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556\"" Sep 4 17:17:21.607873 containerd[2154]: time="2024-09-04T17:17:21.606683761Z" level=info msg="StartContainer for \"b5639c1dd345a5fcff6da024733e18dc130a33e6c445765aaf93eb358bddd795\" returns successfully" Sep 4 17:17:21.642675 containerd[2154]: time="2024-09-04T17:17:21.642612518Z" level=info msg="StartContainer for \"d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8\" returns successfully" Sep 4 17:17:21.689275 containerd[2154]: time="2024-09-04T17:17:21.687987662Z" level=info msg="StartContainer for \"278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556\" returns successfully" Sep 4 17:17:22.844391 kubelet[3033]: I0904 17:17:22.843869 3033 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:26.880135 kubelet[3033]: E0904 17:17:26.880068 3033 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-29\" not found" node="ip-172-31-23-29" Sep 4 17:17:26.942592 kubelet[3033]: I0904 17:17:26.941041 3033 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-23-29" Sep 4 17:17:27.123951 update_engine[2123]: I0904 17:17:27.122998 2123 update_attempter.cc:509] Updating boot flags... Sep 4 17:17:27.264265 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3318) Sep 4 17:17:27.696305 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3309) Sep 4 17:17:27.704276 kubelet[3033]: I0904 17:17:27.700379 3033 apiserver.go:52] "Watching apiserver" Sep 4 17:17:27.723423 kubelet[3033]: I0904 17:17:27.722494 3033 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:28.209287 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 42 scanned by (udev-worker) (3309) Sep 4 17:17:29.839085 systemd[1]: Reloading requested from client PID 3573 ('systemctl') (unit session-7.scope)... Sep 4 17:17:29.839121 systemd[1]: Reloading... Sep 4 17:17:30.026388 zram_generator::config[3614]: No configuration found. Sep 4 17:17:30.276306 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:17:30.462793 systemd[1]: Reloading finished in 622 ms. 
Sep 4 17:17:30.527913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:30.542869 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:17:30.543573 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:30.555047 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:17:31.004717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:17:31.019997 (kubelet)[3681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:17:31.136495 kubelet[3681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:31.136495 kubelet[3681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:17:31.136495 kubelet[3681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:17:31.136495 kubelet[3681]: I0904 17:17:31.135569 3681 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:17:31.146684 kubelet[3681]: I0904 17:17:31.146370 3681 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Sep 4 17:17:31.146684 kubelet[3681]: I0904 17:17:31.146432 3681 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:17:31.147370 kubelet[3681]: I0904 17:17:31.147171 3681 server.go:895] "Client rotation is on, will bootstrap in background" Sep 4 17:17:31.153102 kubelet[3681]: I0904 17:17:31.153013 3681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 4 17:17:31.155765 kubelet[3681]: I0904 17:17:31.155276 3681 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:17:31.165382 kubelet[3681]: W0904 17:17:31.165349 3681 machine.go:65] Cannot read vendor id correctly, set empty. Sep 4 17:17:31.168402 kubelet[3681]: I0904 17:17:31.168360 3681 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:17:31.169644 kubelet[3681]: I0904 17:17:31.169610 3681 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:17:31.170355 kubelet[3681]: I0904 17:17:31.170318 3681 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:17:31.170642 kubelet[3681]: I0904 17:17:31.170615 3681 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:17:31.171310 kubelet[3681]: I0904 17:17:31.170741 3681 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:17:31.171310 kubelet[3681]: I0904 17:17:31.170826 3681 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:31.171310 kubelet[3681]: I0904 17:17:31.171043 3681 kubelet.go:393] "Attempting to sync node with API server" Sep 4 17:17:31.171310 kubelet[3681]: I0904 17:17:31.171080 3681 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:17:31.172012 kubelet[3681]: I0904 17:17:31.171973 3681 kubelet.go:309] "Adding apiserver pod source" Sep 4 17:17:31.172172 kubelet[3681]: I0904 17:17:31.172153 3681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:17:31.188779 kubelet[3681]: I0904 17:17:31.185932 3681 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.20" apiVersion="v1" Sep 4 17:17:31.188779 kubelet[3681]: I0904 17:17:31.186893 3681 server.go:1232] "Started kubelet" Sep 4 17:17:31.191884 kubelet[3681]: I0904 17:17:31.191849 3681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:17:31.206669 kubelet[3681]: I0904 17:17:31.192261 3681 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:17:31.211064 kubelet[3681]: I0904 17:17:31.210916 3681 server.go:462] "Adding debug handlers to kubelet server" Sep 4 17:17:31.217166 kubelet[3681]: I0904 17:17:31.215880 3681 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:17:31.217166 kubelet[3681]: I0904 17:17:31.192325 3681 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Sep 4 17:17:31.217166 kubelet[3681]: 
I0904 17:17:31.216933 3681 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:17:31.217166 kubelet[3681]: I0904 17:17:31.217006 3681 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:17:31.217484 kubelet[3681]: I0904 17:17:31.217302 3681 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:17:31.236989 kubelet[3681]: E0904 17:17:31.236949 3681 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Sep 4 17:17:31.239397 kubelet[3681]: E0904 17:17:31.239356 3681 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:17:31.277418 kubelet[3681]: I0904 17:17:31.277282 3681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:17:31.281405 kubelet[3681]: I0904 17:17:31.281371 3681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:17:31.281565 kubelet[3681]: I0904 17:17:31.281547 3681 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:17:31.281676 kubelet[3681]: I0904 17:17:31.281658 3681 kubelet.go:2303] "Starting kubelet main sync loop" Sep 4 17:17:31.281842 kubelet[3681]: E0904 17:17:31.281823 3681 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:17:31.326682 kubelet[3681]: E0904 17:17:31.326637 3681 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Sep 4 17:17:31.333587 kubelet[3681]: I0904 17:17:31.333443 3681 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-23-29" Sep 4 17:17:31.357828 kubelet[3681]: I0904 17:17:31.357546 3681 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-23-29" Sep 4 17:17:31.358340 kubelet[3681]: I0904 17:17:31.358023 3681 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-23-29" Sep 4 17:17:31.385272 kubelet[3681]: E0904 17:17:31.382745 3681 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560009 3681 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560044 3681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560077 3681 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560377 3681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560417 3681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:17:31.560752 kubelet[3681]: I0904 17:17:31.560434 3681 policy_none.go:49] "None policy: Start" Sep 4 17:17:31.563390 kubelet[3681]: I0904 17:17:31.563336 3681 memory_manager.go:169] "Starting memorymanager" policy="None" Sep 4 17:17:31.563390 kubelet[3681]: I0904 17:17:31.563391 3681 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:17:31.564436 kubelet[3681]: I0904 17:17:31.563662 3681 state_mem.go:75] "Updated machine memory state" Sep 4 17:17:31.566267 kubelet[3681]: I0904 
17:17:31.565923 3681 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:17:31.568966 kubelet[3681]: I0904 17:17:31.568917 3681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:17:31.584215 kubelet[3681]: I0904 17:17:31.584157 3681 topology_manager.go:215] "Topology Admit Handler" podUID="44c77efcc47763029a2786699c6f19c7" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-29" Sep 4 17:17:31.584793 kubelet[3681]: I0904 17:17:31.584656 3681 topology_manager.go:215] "Topology Admit Handler" podUID="53f30815a2e4b6fe0bcf3d5845e22795" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.586704 kubelet[3681]: I0904 17:17:31.584947 3681 topology_manager.go:215] "Topology Admit Handler" podUID="51fd2660440bd6d237dbe757c4f89474" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-29" Sep 4 17:17:31.619453 kubelet[3681]: I0904 17:17:31.619420 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.620080 kubelet[3681]: I0904 17:17:31.619697 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.620080 kubelet[3681]: I0904 17:17:31.619756 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-ca-certs\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:31.620080 kubelet[3681]: I0904 17:17:31.619800 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.620080 kubelet[3681]: I0904 17:17:31.619846 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.620080 kubelet[3681]: I0904 17:17:31.619897 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53f30815a2e4b6fe0bcf3d5845e22795-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-29\" (UID: \"53f30815a2e4b6fe0bcf3d5845e22795\") " pod="kube-system/kube-controller-manager-ip-172-31-23-29" Sep 4 17:17:31.620528 kubelet[3681]: I0904 17:17:31.619939 3681 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51fd2660440bd6d237dbe757c4f89474-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-29\" (UID: \"51fd2660440bd6d237dbe757c4f89474\") " pod="kube-system/kube-scheduler-ip-172-31-23-29" Sep 4 17:17:31.620528 kubelet[3681]: I0904 17:17:31.619989 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:31.622328 kubelet[3681]: I0904 17:17:31.620033 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44c77efcc47763029a2786699c6f19c7-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-29\" (UID: \"44c77efcc47763029a2786699c6f19c7\") " pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:32.177660 kubelet[3681]: I0904 17:17:32.177339 3681 apiserver.go:52] "Watching apiserver" Sep 4 17:17:32.217624 kubelet[3681]: I0904 17:17:32.217368 3681 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:17:32.239106 kubelet[3681]: I0904 17:17:32.239003 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-29" podStartSLOduration=1.23875657 podCreationTimestamp="2024-09-04 17:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:32.226123714 +0000 UTC m=+1.197111235" watchObservedRunningTime="2024-09-04 17:17:32.23875657 +0000 UTC m=+1.209744067" Sep 4 17:17:32.242643 kubelet[3681]: I0904 17:17:32.242400 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-29" podStartSLOduration=1.241201474 podCreationTimestamp="2024-09-04 17:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:32.240970822 +0000 UTC m=+1.211958319" watchObservedRunningTime="2024-09-04 17:17:32.241201474 +0000 UTC m=+1.212188983" Sep 4 17:17:32.441082 kubelet[3681]: E0904 17:17:32.440122 3681 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-29\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-29" Sep 4 17:17:32.448373 kubelet[3681]: I0904 17:17:32.447357 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-29" podStartSLOduration=1.447118151 podCreationTimestamp="2024-09-04 17:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:32.254461894 +0000 UTC m=+1.225449403" watchObservedRunningTime="2024-09-04 17:17:32.447118151 +0000 UTC m=+1.418105672" Sep 4 17:17:37.942585 sudo[2503]: pam_unix(sudo:session): session closed for user root Sep 4 17:17:37.966638 sshd[2499]: pam_unix(sshd:session): session closed for user core Sep 4 17:17:37.974545 systemd[1]: sshd@6-172.31.23.29:22-139.178.89.65:54992.service: Deactivated successfully. 
Sep 4 17:17:37.980550 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:17:37.980882 systemd-logind[2121]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:17:37.985437 systemd-logind[2121]: Removed session 7. Sep 4 17:17:42.559115 kubelet[3681]: I0904 17:17:42.558924 3681 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:17:42.563165 containerd[2154]: time="2024-09-04T17:17:42.562321426Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 4 17:17:42.566436 kubelet[3681]: I0904 17:17:42.562995 3681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 4 17:17:43.071152 kubelet[3681]: I0904 17:17:43.070999 3681 topology_manager.go:215] "Topology Admit Handler" podUID="5d10857f-07e8-4a53-bc55-0b9169927ef9" podNamespace="kube-system" podName="kube-proxy-4wszn" Sep 4 17:17:43.094979 kubelet[3681]: I0904 17:17:43.094712 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d10857f-07e8-4a53-bc55-0b9169927ef9-kube-proxy\") pod \"kube-proxy-4wszn\" (UID: \"5d10857f-07e8-4a53-bc55-0b9169927ef9\") " pod="kube-system/kube-proxy-4wszn" Sep 4 17:17:43.094979 kubelet[3681]: I0904 17:17:43.094836 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d10857f-07e8-4a53-bc55-0b9169927ef9-lib-modules\") pod \"kube-proxy-4wszn\" (UID: \"5d10857f-07e8-4a53-bc55-0b9169927ef9\") " pod="kube-system/kube-proxy-4wszn" Sep 4 17:17:43.095961 kubelet[3681]: I0904 17:17:43.095301 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97mpm\" (UniqueName: \"kubernetes.io/projected/5d10857f-07e8-4a53-bc55-0b9169927ef9-kube-api-access-97mpm\") pod \"kube-proxy-4wszn\" (UID: \"5d10857f-07e8-4a53-bc55-0b9169927ef9\") " pod="kube-system/kube-proxy-4wszn" Sep 4 17:17:43.095961 kubelet[3681]: I0904 17:17:43.095381 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d10857f-07e8-4a53-bc55-0b9169927ef9-xtables-lock\") pod \"kube-proxy-4wszn\" (UID: \"5d10857f-07e8-4a53-bc55-0b9169927ef9\") " pod="kube-system/kube-proxy-4wszn" Sep 4 17:17:43.292094 kubelet[3681]: I0904 17:17:43.291834 3681 topology_manager.go:215] "Topology Admit Handler" podUID="5935e463-6401-4ec1-820f-8baa89f11d4f" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-gj26w" Sep 4 17:17:43.299287 kubelet[3681]: I0904 17:17:43.298132 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsw9s\" (UniqueName: \"kubernetes.io/projected/5935e463-6401-4ec1-820f-8baa89f11d4f-kube-api-access-lsw9s\") pod \"tigera-operator-5d56685c77-gj26w\" (UID: \"5935e463-6401-4ec1-820f-8baa89f11d4f\") " pod="tigera-operator/tigera-operator-5d56685c77-gj26w" Sep 4 17:17:43.299287 kubelet[3681]: I0904 17:17:43.298204 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5935e463-6401-4ec1-820f-8baa89f11d4f-var-lib-calico\") pod \"tigera-operator-5d56685c77-gj26w\" (UID: \"5935e463-6401-4ec1-820f-8baa89f11d4f\") " 
pod="tigera-operator/tigera-operator-5d56685c77-gj26w" Sep 4 17:17:43.381146 containerd[2154]: time="2024-09-04T17:17:43.381010102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wszn,Uid:5d10857f-07e8-4a53-bc55-0b9169927ef9,Namespace:kube-system,Attempt:0,}" Sep 4 17:17:43.440179 containerd[2154]: time="2024-09-04T17:17:43.433363954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:43.440179 containerd[2154]: time="2024-09-04T17:17:43.434250166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:43.440179 containerd[2154]: time="2024-09-04T17:17:43.434287138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:43.440179 containerd[2154]: time="2024-09-04T17:17:43.434691106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:43.509414 containerd[2154]: time="2024-09-04T17:17:43.509360842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4wszn,Uid:5d10857f-07e8-4a53-bc55-0b9169927ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"80464c983a0d525d934e0b40014236da050f2218bcc55643de5c40c94d64cb27\"" Sep 4 17:17:43.516005 containerd[2154]: time="2024-09-04T17:17:43.515452918Z" level=info msg="CreateContainer within sandbox \"80464c983a0d525d934e0b40014236da050f2218bcc55643de5c40c94d64cb27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 4 17:17:43.544256 containerd[2154]: time="2024-09-04T17:17:43.544191274Z" level=info msg="CreateContainer within sandbox \"80464c983a0d525d934e0b40014236da050f2218bcc55643de5c40c94d64cb27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"37a343fdf1e86cb5866fad09a47d4faf4b5d4b89706168cbb554d96c99048fc6\"" Sep 4 17:17:43.546402 containerd[2154]: time="2024-09-04T17:17:43.545752570Z" level=info msg="StartContainer for \"37a343fdf1e86cb5866fad09a47d4faf4b5d4b89706168cbb554d96c99048fc6\"" Sep 4 17:17:43.603833 containerd[2154]: time="2024-09-04T17:17:43.603718655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-gj26w,Uid:5935e463-6401-4ec1-820f-8baa89f11d4f,Namespace:tigera-operator,Attempt:0,}" Sep 4 17:17:43.659202 containerd[2154]: time="2024-09-04T17:17:43.659043743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:43.659202 containerd[2154]: time="2024-09-04T17:17:43.659138639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:43.659623 containerd[2154]: time="2024-09-04T17:17:43.659520815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:43.660339 containerd[2154]: time="2024-09-04T17:17:43.660211715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:43.672966 containerd[2154]: time="2024-09-04T17:17:43.672892007Z" level=info msg="StartContainer for \"37a343fdf1e86cb5866fad09a47d4faf4b5d4b89706168cbb554d96c99048fc6\" returns successfully" Sep 4 17:17:43.781223 containerd[2154]: time="2024-09-04T17:17:43.781146888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-gj26w,Uid:5935e463-6401-4ec1-820f-8baa89f11d4f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"93c1dd596a22fd004357c6a70e5e8bafa39214f635d46bd18b8b9c5536893797\"" Sep 4 17:17:43.784688 containerd[2154]: time="2024-09-04T17:17:43.784625544Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Sep 4 17:17:45.156509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1037794196.mount: Deactivated successfully. Sep 4 17:17:45.794372 containerd[2154]: time="2024-09-04T17:17:45.794177174Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.796329 containerd[2154]: time="2024-09-04T17:17:45.795948050Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485923" Sep 4 17:17:45.797851 containerd[2154]: time="2024-09-04T17:17:45.797760386Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.802305 containerd[2154]: time="2024-09-04T17:17:45.802189010Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:45.805987 containerd[2154]: time="2024-09-04T17:17:45.805922954Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.021231722s" Sep 4 17:17:45.806846 containerd[2154]: time="2024-09-04T17:17:45.805988366Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Sep 4 17:17:45.809595 containerd[2154]: time="2024-09-04T17:17:45.809521790Z" level=info msg="CreateContainer within sandbox \"93c1dd596a22fd004357c6a70e5e8bafa39214f635d46bd18b8b9c5536893797\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 4 17:17:45.835193 containerd[2154]: time="2024-09-04T17:17:45.835032194Z" level=info msg="CreateContainer within sandbox \"93c1dd596a22fd004357c6a70e5e8bafa39214f635d46bd18b8b9c5536893797\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0\"" Sep 4 17:17:45.835928 containerd[2154]: time="2024-09-04T17:17:45.835849658Z" level=info msg="StartContainer for \"84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0\"" Sep 4 17:17:45.900292 systemd[1]: run-containerd-runc-k8s.io-84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0-runc.HXyVah.mount: Deactivated successfully. 
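The containerd entries above record each image pull together with its duration ("Pulled image ... in 578.915943ms" for pause:3.8, "... in 2.021231722s" for quay.io/tigera/operator:v1.34.3). The following standard-library Go sketch, not part of the log, shows one way such figures could be pulled out of journal output (for example from journalctl -u containerd); the regular expression assumes the exact quoting style seen in the lines above.

    // pull_times.go — hypothetical log-scraping helper, not part of the log:
    // reads journal lines on stdin and prints each pulled image with its pull time.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        // Matches: Pulled image \"<ref>\" ... in <duration>"  (containerd msg quoting as shown above)
        re := regexp.MustCompile(`Pulled image \\"([^"\\]+)\\".* in ([^"]+)"`)
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            if d, err := time.ParseDuration(m[2]); err == nil {
                fmt.Printf("%-45s %v\n", m[1], d)
            }
        }
    }
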
Sep 4 17:17:45.956307 containerd[2154]: time="2024-09-04T17:17:45.953320070Z" level=info msg="StartContainer for \"84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0\" returns successfully" Sep 4 17:17:46.481659 kubelet[3681]: I0904 17:17:46.481594 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-4wszn" podStartSLOduration=3.481535065 podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:17:44.478525835 +0000 UTC m=+13.449513332" watchObservedRunningTime="2024-09-04 17:17:46.481535065 +0000 UTC m=+15.452522574" Sep 4 17:17:50.390310 kubelet[3681]: I0904 17:17:50.385363 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-gj26w" podStartSLOduration=5.36218699 podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="2024-09-04 17:17:43.783287256 +0000 UTC m=+12.754274753" lastFinishedPulling="2024-09-04 17:17:45.806406554 +0000 UTC m=+14.777394063" observedRunningTime="2024-09-04 17:17:46.484803937 +0000 UTC m=+15.455791434" watchObservedRunningTime="2024-09-04 17:17:50.3853063 +0000 UTC m=+19.356293809" Sep 4 17:17:50.390310 kubelet[3681]: I0904 17:17:50.385541 3681 topology_manager.go:215] "Topology Admit Handler" podUID="cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33" podNamespace="calico-system" podName="calico-typha-6b774fb6df-6m4qm" Sep 4 17:17:50.542524 kubelet[3681]: I0904 17:17:50.542458 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnkrn\" (UniqueName: \"kubernetes.io/projected/cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33-kube-api-access-hnkrn\") pod \"calico-typha-6b774fb6df-6m4qm\" (UID: \"cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33\") " pod="calico-system/calico-typha-6b774fb6df-6m4qm" Sep 4 17:17:50.542660 kubelet[3681]: I0904 17:17:50.542548 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33-tigera-ca-bundle\") pod \"calico-typha-6b774fb6df-6m4qm\" (UID: \"cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33\") " pod="calico-system/calico-typha-6b774fb6df-6m4qm" Sep 4 17:17:50.542660 kubelet[3681]: I0904 17:17:50.542618 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33-typha-certs\") pod \"calico-typha-6b774fb6df-6m4qm\" (UID: \"cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33\") " pod="calico-system/calico-typha-6b774fb6df-6m4qm" Sep 4 17:17:50.639747 kubelet[3681]: I0904 17:17:50.639655 3681 topology_manager.go:215] "Topology Admit Handler" podUID="fd1a0554-43e7-4f40-b55c-fb5c44b199fb" podNamespace="calico-system" podName="calico-node-bmkn5" Sep 4 17:17:50.705119 containerd[2154]: time="2024-09-04T17:17:50.705066258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b774fb6df-6m4qm,Uid:cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33,Namespace:calico-system,Attempt:0,}" Sep 4 17:17:50.747949 kubelet[3681]: I0904 17:17:50.743727 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-tigera-ca-bundle\") pod \"calico-node-bmkn5\" (UID: 
\"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.747949 kubelet[3681]: I0904 17:17:50.743803 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-cni-net-dir\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.747949 kubelet[3681]: I0904 17:17:50.743851 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-xtables-lock\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.747949 kubelet[3681]: I0904 17:17:50.743913 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-policysync\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.747949 kubelet[3681]: I0904 17:17:50.743965 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-var-run-calico\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.750680 kubelet[3681]: I0904 17:17:50.744007 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-var-lib-calico\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.750680 kubelet[3681]: I0904 17:17:50.744053 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-node-certs\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.750680 kubelet[3681]: I0904 17:17:50.744107 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8w8hq\" (UniqueName: \"kubernetes.io/projected/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-kube-api-access-8w8hq\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.750680 kubelet[3681]: I0904 17:17:50.744151 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-cni-bin-dir\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.750680 kubelet[3681]: I0904 17:17:50.744194 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-flexvol-driver-host\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 
17:17:50.751039 kubelet[3681]: I0904 17:17:50.744254 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-lib-modules\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.751039 kubelet[3681]: I0904 17:17:50.744305 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fd1a0554-43e7-4f40-b55c-fb5c44b199fb-cni-log-dir\") pod \"calico-node-bmkn5\" (UID: \"fd1a0554-43e7-4f40-b55c-fb5c44b199fb\") " pod="calico-system/calico-node-bmkn5" Sep 4 17:17:50.800216 kubelet[3681]: I0904 17:17:50.797020 3681 topology_manager.go:215] "Topology Admit Handler" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" podNamespace="calico-system" podName="csi-node-driver-cj7r5" Sep 4 17:17:50.800216 kubelet[3681]: E0904 17:17:50.797625 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:17:50.824550 containerd[2154]: time="2024-09-04T17:17:50.824205079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:50.825322 containerd[2154]: time="2024-09-04T17:17:50.824589679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:50.825322 containerd[2154]: time="2024-09-04T17:17:50.824630923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:50.829449 containerd[2154]: time="2024-09-04T17:17:50.827817535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:50.901514 kubelet[3681]: E0904 17:17:50.898108 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.901514 kubelet[3681]: W0904 17:17:50.898149 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.901514 kubelet[3681]: E0904 17:17:50.898232 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:50.914298 kubelet[3681]: E0904 17:17:50.913651 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.914298 kubelet[3681]: W0904 17:17:50.913687 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.917343 kubelet[3681]: E0904 17:17:50.916866 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.917343 kubelet[3681]: W0904 17:17:50.916901 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.919730 kubelet[3681]: E0904 17:17:50.919117 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.919730 kubelet[3681]: W0904 17:17:50.919166 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.923339 kubelet[3681]: E0904 17:17:50.923190 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.924840 kubelet[3681]: E0904 17:17:50.924013 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.936011 kubelet[3681]: E0904 17:17:50.925348 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.937566 kubelet[3681]: E0904 17:17:50.936825 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.937566 kubelet[3681]: W0904 17:17:50.936972 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.941452 kubelet[3681]: E0904 17:17:50.940944 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.941452 kubelet[3681]: W0904 17:17:50.940979 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.945376 kubelet[3681]: E0904 17:17:50.941174 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.945376 kubelet[3681]: E0904 17:17:50.945004 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:50.947829 kubelet[3681]: E0904 17:17:50.947378 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.947829 kubelet[3681]: W0904 17:17:50.947411 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.950779 kubelet[3681]: E0904 17:17:50.950518 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.950779 kubelet[3681]: W0904 17:17:50.950556 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.952264 kubelet[3681]: E0904 17:17:50.951866 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.952264 kubelet[3681]: W0904 17:17:50.951898 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.956269 kubelet[3681]: E0904 17:17:50.954627 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.956269 kubelet[3681]: W0904 17:17:50.954691 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.962288 kubelet[3681]: E0904 17:17:50.959583 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.962288 kubelet[3681]: E0904 17:17:50.959673 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.962288 kubelet[3681]: E0904 17:17:50.959714 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.965974 kubelet[3681]: E0904 17:17:50.965934 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.974459 kubelet[3681]: E0904 17:17:50.974198 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.974459 kubelet[3681]: W0904 17:17:50.974232 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.976483 kubelet[3681]: E0904 17:17:50.976213 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:50.977163 kubelet[3681]: E0904 17:17:50.976998 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.977163 kubelet[3681]: W0904 17:17:50.977023 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.979540 kubelet[3681]: E0904 17:17:50.979296 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.979540 kubelet[3681]: W0904 17:17:50.979330 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.980436 kubelet[3681]: E0904 17:17:50.979931 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.983633 kubelet[3681]: E0904 17:17:50.983374 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.983969 kubelet[3681]: W0904 17:17:50.983414 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.986615 kubelet[3681]: E0904 17:17:50.986554 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.987469 kubelet[3681]: W0904 17:17:50.986589 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.990269 kubelet[3681]: E0904 17:17:50.983568 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.990269 kubelet[3681]: E0904 17:17:50.989091 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.990269 kubelet[3681]: E0904 17:17:50.989179 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.992286 kubelet[3681]: E0904 17:17:50.991665 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.992498 kubelet[3681]: W0904 17:17:50.992468 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.994341 kubelet[3681]: E0904 17:17:50.994304 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:50.994653 kubelet[3681]: E0904 17:17:50.994629 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.994839 kubelet[3681]: W0904 17:17:50.994812 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.995935 kubelet[3681]: E0904 17:17:50.995860 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.997690 kubelet[3681]: E0904 17:17:50.997543 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.997690 kubelet[3681]: W0904 17:17:50.997573 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:50.998121 kubelet[3681]: E0904 17:17:50.997881 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:50.999612 kubelet[3681]: E0904 17:17:50.999524 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:50.999612 kubelet[3681]: W0904 17:17:50.999575 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.000322 kubelet[3681]: E0904 17:17:51.000050 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.001531 kubelet[3681]: E0904 17:17:51.001077 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.001531 kubelet[3681]: W0904 17:17:51.001107 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.001531 kubelet[3681]: E0904 17:17:51.001144 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.001531 kubelet[3681]: I0904 17:17:51.001203 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghphm\" (UniqueName: \"kubernetes.io/projected/beb1fab5-04d7-4346-bdd8-82707db57e16-kube-api-access-ghphm\") pod \"csi-node-driver-cj7r5\" (UID: \"beb1fab5-04d7-4346-bdd8-82707db57e16\") " pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:17:51.003276 kubelet[3681]: E0904 17:17:51.002426 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.003276 kubelet[3681]: W0904 17:17:51.002457 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.004115 kubelet[3681]: E0904 17:17:51.003561 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.004115 kubelet[3681]: I0904 17:17:51.004023 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/beb1fab5-04d7-4346-bdd8-82707db57e16-kubelet-dir\") pod \"csi-node-driver-cj7r5\" (UID: \"beb1fab5-04d7-4346-bdd8-82707db57e16\") " pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:17:51.004585 kubelet[3681]: E0904 17:17:51.004337 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.004585 kubelet[3681]: W0904 17:17:51.004356 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.004585 kubelet[3681]: E0904 17:17:51.004392 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.005151 kubelet[3681]: E0904 17:17:51.005087 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.005151 kubelet[3681]: W0904 17:17:51.005112 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.005668 kubelet[3681]: E0904 17:17:51.005641 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.006369 kubelet[3681]: E0904 17:17:51.005887 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.006369 kubelet[3681]: W0904 17:17:51.006045 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.006369 kubelet[3681]: E0904 17:17:51.006082 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.006369 kubelet[3681]: I0904 17:17:51.006129 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/beb1fab5-04d7-4346-bdd8-82707db57e16-varrun\") pod \"csi-node-driver-cj7r5\" (UID: \"beb1fab5-04d7-4346-bdd8-82707db57e16\") " pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:17:51.007608 kubelet[3681]: E0904 17:17:51.007363 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.007608 kubelet[3681]: W0904 17:17:51.007395 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.007608 kubelet[3681]: E0904 17:17:51.007512 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.007608 kubelet[3681]: I0904 17:17:51.007562 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/beb1fab5-04d7-4346-bdd8-82707db57e16-socket-dir\") pod \"csi-node-driver-cj7r5\" (UID: \"beb1fab5-04d7-4346-bdd8-82707db57e16\") " pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:17:51.008752 kubelet[3681]: E0904 17:17:51.008375 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.008752 kubelet[3681]: W0904 17:17:51.008401 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.008752 kubelet[3681]: E0904 17:17:51.008533 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.009185 kubelet[3681]: E0904 17:17:51.009051 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.009185 kubelet[3681]: W0904 17:17:51.009075 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.009500 kubelet[3681]: E0904 17:17:51.009317 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.010119 kubelet[3681]: E0904 17:17:51.009889 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.010119 kubelet[3681]: W0904 17:17:51.009914 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.010119 kubelet[3681]: E0904 17:17:51.010037 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.010119 kubelet[3681]: I0904 17:17:51.010089 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/beb1fab5-04d7-4346-bdd8-82707db57e16-registration-dir\") pod \"csi-node-driver-cj7r5\" (UID: \"beb1fab5-04d7-4346-bdd8-82707db57e16\") " pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:17:51.011092 kubelet[3681]: E0904 17:17:51.010902 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.011092 kubelet[3681]: W0904 17:17:51.010930 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.011092 kubelet[3681]: E0904 17:17:51.011062 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.011748 kubelet[3681]: E0904 17:17:51.011724 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.012009 kubelet[3681]: W0904 17:17:51.011840 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.012009 kubelet[3681]: E0904 17:17:51.011880 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.012879 kubelet[3681]: E0904 17:17:51.012682 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.012879 kubelet[3681]: W0904 17:17:51.012711 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.012879 kubelet[3681]: E0904 17:17:51.012750 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.013521 kubelet[3681]: E0904 17:17:51.013364 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.013521 kubelet[3681]: W0904 17:17:51.013390 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.013521 kubelet[3681]: E0904 17:17:51.013420 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.014840 kubelet[3681]: E0904 17:17:51.014536 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.014840 kubelet[3681]: W0904 17:17:51.014565 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.014840 kubelet[3681]: E0904 17:17:51.014647 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.015910 kubelet[3681]: E0904 17:17:51.015656 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.015910 kubelet[3681]: W0904 17:17:51.015688 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.015910 kubelet[3681]: E0904 17:17:51.015722 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.110590 containerd[2154]: time="2024-09-04T17:17:51.110525932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b774fb6df-6m4qm,Uid:cc7ca0c5-dc7b-4930-95ac-4fe10b56ac33,Namespace:calico-system,Attempt:0,} returns sandbox id \"435318aa75aaf5d995a3ce428e989a7a3734084d809c40647385e8d3a473f8df\"" Sep 4 17:17:51.115223 containerd[2154]: time="2024-09-04T17:17:51.114783616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Sep 4 17:17:51.117584 kubelet[3681]: E0904 17:17:51.117551 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.117813 kubelet[3681]: W0904 17:17:51.117787 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.118043 kubelet[3681]: E0904 17:17:51.118022 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.119958 kubelet[3681]: E0904 17:17:51.119925 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.120445 kubelet[3681]: W0904 17:17:51.120229 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.120445 kubelet[3681]: E0904 17:17:51.120328 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.121795 kubelet[3681]: E0904 17:17:51.121694 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.121795 kubelet[3681]: W0904 17:17:51.121756 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.122424 kubelet[3681]: E0904 17:17:51.122308 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.123349 kubelet[3681]: E0904 17:17:51.123191 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.123349 kubelet[3681]: W0904 17:17:51.123221 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.123925 kubelet[3681]: E0904 17:17:51.123787 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.124453 kubelet[3681]: E0904 17:17:51.124304 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.124453 kubelet[3681]: W0904 17:17:51.124331 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.124874 kubelet[3681]: E0904 17:17:51.124637 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.125571 kubelet[3681]: E0904 17:17:51.125337 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.125571 kubelet[3681]: W0904 17:17:51.125365 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.126175 kubelet[3681]: E0904 17:17:51.125951 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.126629 kubelet[3681]: E0904 17:17:51.126439 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.126629 kubelet[3681]: W0904 17:17:51.126464 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.126629 kubelet[3681]: E0904 17:17:51.126538 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.127043 kubelet[3681]: E0904 17:17:51.126966 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.127337 kubelet[3681]: W0904 17:17:51.127178 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.127806 kubelet[3681]: E0904 17:17:51.127663 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.127806 kubelet[3681]: W0904 17:17:51.127683 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.127806 kubelet[3681]: E0904 17:17:51.127736 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.127962 kubelet[3681]: E0904 17:17:51.127831 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.128390 kubelet[3681]: E0904 17:17:51.128196 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.128390 kubelet[3681]: W0904 17:17:51.128231 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.128678 kubelet[3681]: E0904 17:17:51.128559 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.129330 kubelet[3681]: E0904 17:17:51.129116 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.129330 kubelet[3681]: W0904 17:17:51.129140 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.129696 kubelet[3681]: E0904 17:17:51.129515 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.130080 kubelet[3681]: E0904 17:17:51.129900 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.130080 kubelet[3681]: W0904 17:17:51.129921 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.130640 kubelet[3681]: E0904 17:17:51.130296 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.130938 kubelet[3681]: E0904 17:17:51.130919 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.131114 kubelet[3681]: W0904 17:17:51.130992 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.131449 kubelet[3681]: E0904 17:17:51.131378 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.131870 kubelet[3681]: E0904 17:17:51.131841 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.132278 kubelet[3681]: W0904 17:17:51.132032 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.132498 kubelet[3681]: E0904 17:17:51.132424 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.132826 kubelet[3681]: E0904 17:17:51.132803 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.133403 kubelet[3681]: W0904 17:17:51.133153 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.134088 kubelet[3681]: E0904 17:17:51.133915 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.134088 kubelet[3681]: W0904 17:17:51.133944 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.134782 kubelet[3681]: E0904 17:17:51.134664 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.134782 kubelet[3681]: E0904 17:17:51.134730 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.135400 kubelet[3681]: E0904 17:17:51.134932 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.135400 kubelet[3681]: W0904 17:17:51.135120 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.137096 kubelet[3681]: E0904 17:17:51.136250 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.137689 kubelet[3681]: E0904 17:17:51.137450 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.137689 kubelet[3681]: W0904 17:17:51.137478 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.138457 kubelet[3681]: E0904 17:17:51.138311 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.138900 kubelet[3681]: E0904 17:17:51.138697 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.138900 kubelet[3681]: W0904 17:17:51.138722 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.138900 kubelet[3681]: E0904 17:17:51.138850 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.139813 kubelet[3681]: E0904 17:17:51.139601 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.139813 kubelet[3681]: W0904 17:17:51.139627 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.139813 kubelet[3681]: E0904 17:17:51.139777 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.140471 kubelet[3681]: E0904 17:17:51.140217 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.140471 kubelet[3681]: W0904 17:17:51.140281 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.140980 kubelet[3681]: E0904 17:17:51.140712 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.141733 kubelet[3681]: E0904 17:17:51.141459 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.141733 kubelet[3681]: W0904 17:17:51.141552 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.141733 kubelet[3681]: E0904 17:17:51.141630 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:51.142949 kubelet[3681]: E0904 17:17:51.142712 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.142949 kubelet[3681]: W0904 17:17:51.142745 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.145007 kubelet[3681]: E0904 17:17:51.144520 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.145007 kubelet[3681]: E0904 17:17:51.144744 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.145007 kubelet[3681]: W0904 17:17:51.144763 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.145007 kubelet[3681]: E0904 17:17:51.144798 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.147149 kubelet[3681]: E0904 17:17:51.146920 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.147149 kubelet[3681]: W0904 17:17:51.146948 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.147149 kubelet[3681]: E0904 17:17:51.146982 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.166969 kubelet[3681]: E0904 17:17:51.166827 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:51.166969 kubelet[3681]: W0904 17:17:51.166910 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:51.166969 kubelet[3681]: E0904 17:17:51.166965 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:51.277439 containerd[2154]: time="2024-09-04T17:17:51.275969741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bmkn5,Uid:fd1a0554-43e7-4f40-b55c-fb5c44b199fb,Namespace:calico-system,Attempt:0,}" Sep 4 17:17:51.360148 containerd[2154]: time="2024-09-04T17:17:51.358666745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:17:51.360148 containerd[2154]: time="2024-09-04T17:17:51.359631221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:17:51.360148 containerd[2154]: time="2024-09-04T17:17:51.359784509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:51.361402 containerd[2154]: time="2024-09-04T17:17:51.360508865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:17:51.501072 containerd[2154]: time="2024-09-04T17:17:51.501001254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bmkn5,Uid:fd1a0554-43e7-4f40-b55c-fb5c44b199fb,Namespace:calico-system,Attempt:0,} returns sandbox id \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\"" Sep 4 17:17:52.289284 kubelet[3681]: E0904 17:17:52.285666 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:17:53.816696 containerd[2154]: time="2024-09-04T17:17:53.816626073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:53.820393 containerd[2154]: time="2024-09-04T17:17:53.820301625Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:17:53.823057 containerd[2154]: time="2024-09-04T17:17:53.822473722Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:53.837264 containerd[2154]: time="2024-09-04T17:17:53.837131086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:53.856812 containerd[2154]: time="2024-09-04T17:17:53.854275714Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.739398762s" Sep 4 17:17:53.856812 containerd[2154]: time="2024-09-04T17:17:53.854339434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:17:53.866951 containerd[2154]: time="2024-09-04T17:17:53.866714578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Sep 4 17:17:53.893337 containerd[2154]: time="2024-09-04T17:17:53.893113846Z" level=info msg="CreateContainer within sandbox \"435318aa75aaf5d995a3ce428e989a7a3734084d809c40647385e8d3a473f8df\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:17:53.939287 containerd[2154]: time="2024-09-04T17:17:53.939168694Z" level=info msg="CreateContainer within sandbox \"435318aa75aaf5d995a3ce428e989a7a3734084d809c40647385e8d3a473f8df\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"764770a4fad4294f283ee8f3a36e9e9b82f648ebd8a75311a6b8aab47b851fec\"" Sep 4 17:17:53.940491 containerd[2154]: time="2024-09-04T17:17:53.940427710Z" level=info msg="StartContainer for \"764770a4fad4294f283ee8f3a36e9e9b82f648ebd8a75311a6b8aab47b851fec\"" Sep 4 
17:17:54.132474 containerd[2154]: time="2024-09-04T17:17:54.132194875Z" level=info msg="StartContainer for \"764770a4fad4294f283ee8f3a36e9e9b82f648ebd8a75311a6b8aab47b851fec\" returns successfully" Sep 4 17:17:54.282358 kubelet[3681]: E0904 17:17:54.282131 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:17:54.592359 kubelet[3681]: E0904 17:17:54.592297 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.592545 kubelet[3681]: W0904 17:17:54.592363 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.592545 kubelet[3681]: E0904 17:17:54.592406 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.593285 kubelet[3681]: E0904 17:17:54.592920 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.593285 kubelet[3681]: W0904 17:17:54.592976 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.593285 kubelet[3681]: E0904 17:17:54.593009 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.593466 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.595182 kubelet[3681]: W0904 17:17:54.593499 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.593528 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.593871 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.595182 kubelet[3681]: W0904 17:17:54.593912 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.593943 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.594392 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.595182 kubelet[3681]: W0904 17:17:54.594413 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.594471 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.595182 kubelet[3681]: E0904 17:17:54.594791 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.596071 kubelet[3681]: W0904 17:17:54.594809 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.596071 kubelet[3681]: E0904 17:17:54.594834 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.596071 kubelet[3681]: E0904 17:17:54.595434 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.596071 kubelet[3681]: W0904 17:17:54.595455 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.596071 kubelet[3681]: E0904 17:17:54.595483 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.596478 kubelet[3681]: E0904 17:17:54.596386 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.596478 kubelet[3681]: W0904 17:17:54.596410 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.596478 kubelet[3681]: E0904 17:17:54.596440 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.597360 kubelet[3681]: E0904 17:17:54.597309 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.597549 kubelet[3681]: W0904 17:17:54.597384 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.597549 kubelet[3681]: E0904 17:17:54.597421 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.597862 kubelet[3681]: E0904 17:17:54.597829 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.597936 kubelet[3681]: W0904 17:17:54.597861 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.597936 kubelet[3681]: E0904 17:17:54.597917 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.598620 kubelet[3681]: E0904 17:17:54.598583 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.598620 kubelet[3681]: W0904 17:17:54.598613 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.598816 kubelet[3681]: E0904 17:17:54.598644 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.599070 kubelet[3681]: E0904 17:17:54.598966 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.599070 kubelet[3681]: W0904 17:17:54.598992 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.599070 kubelet[3681]: E0904 17:17:54.599019 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.599568 kubelet[3681]: E0904 17:17:54.599512 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.599568 kubelet[3681]: W0904 17:17:54.599531 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.599568 kubelet[3681]: E0904 17:17:54.599558 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.600518 kubelet[3681]: E0904 17:17:54.600341 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.600518 kubelet[3681]: W0904 17:17:54.600376 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.600518 kubelet[3681]: E0904 17:17:54.600412 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.602090 kubelet[3681]: E0904 17:17:54.602049 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.602090 kubelet[3681]: W0904 17:17:54.602085 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.602356 kubelet[3681]: E0904 17:17:54.602122 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.661707 kubelet[3681]: E0904 17:17:54.661324 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.661707 kubelet[3681]: W0904 17:17:54.661358 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.661707 kubelet[3681]: E0904 17:17:54.661394 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.662167 kubelet[3681]: E0904 17:17:54.662128 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.662347 kubelet[3681]: W0904 17:17:54.662319 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.662478 kubelet[3681]: E0904 17:17:54.662457 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.663846 kubelet[3681]: E0904 17:17:54.663774 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.664479 kubelet[3681]: W0904 17:17:54.664296 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.665179 kubelet[3681]: E0904 17:17:54.665006 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.665179 kubelet[3681]: W0904 17:17:54.665032 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.665632 kubelet[3681]: E0904 17:17:54.665610 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.665925 kubelet[3681]: W0904 17:17:54.665738 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.665925 kubelet[3681]: E0904 17:17:54.665777 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.666599 kubelet[3681]: E0904 17:17:54.666383 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.666599 kubelet[3681]: W0904 17:17:54.666409 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.666599 kubelet[3681]: E0904 17:17:54.666440 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.666599 kubelet[3681]: E0904 17:17:54.666482 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.666891 kubelet[3681]: E0904 17:17:54.666728 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.667461 kubelet[3681]: E0904 17:17:54.667430 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.667461 kubelet[3681]: W0904 17:17:54.667458 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.668065 kubelet[3681]: E0904 17:17:54.667765 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.668763 kubelet[3681]: E0904 17:17:54.668741 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.668888 kubelet[3681]: W0904 17:17:54.668865 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.669304 kubelet[3681]: E0904 17:17:54.669132 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.670018 kubelet[3681]: E0904 17:17:54.669853 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.670018 kubelet[3681]: W0904 17:17:54.669875 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.670299 kubelet[3681]: E0904 17:17:54.670216 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.670652 kubelet[3681]: E0904 17:17:54.670499 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.670652 kubelet[3681]: W0904 17:17:54.670520 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.670834 kubelet[3681]: E0904 17:17:54.670801 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.671361 kubelet[3681]: E0904 17:17:54.671129 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.671361 kubelet[3681]: W0904 17:17:54.671147 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.671361 kubelet[3681]: E0904 17:17:54.671179 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.671711 kubelet[3681]: E0904 17:17:54.671691 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.671999 kubelet[3681]: W0904 17:17:54.671804 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.671999 kubelet[3681]: E0904 17:17:54.671844 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.672565 kubelet[3681]: E0904 17:17:54.672535 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.672881 kubelet[3681]: W0904 17:17:54.672691 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.673044 kubelet[3681]: E0904 17:17:54.673021 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.674291 kubelet[3681]: E0904 17:17:54.673895 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.674291 kubelet[3681]: W0904 17:17:54.673924 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.674291 kubelet[3681]: E0904 17:17:54.673962 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.676481 kubelet[3681]: E0904 17:17:54.675768 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.676481 kubelet[3681]: W0904 17:17:54.675801 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.676481 kubelet[3681]: E0904 17:17:54.676157 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.678288 kubelet[3681]: E0904 17:17:54.677750 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.678288 kubelet[3681]: W0904 17:17:54.677781 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.678288 kubelet[3681]: E0904 17:17:54.677821 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:54.680377 kubelet[3681]: E0904 17:17:54.680292 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.680706 kubelet[3681]: W0904 17:17:54.680672 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.681150 kubelet[3681]: E0904 17:17:54.681045 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:54.683493 kubelet[3681]: E0904 17:17:54.683460 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:54.683796 kubelet[3681]: W0904 17:17:54.683767 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:54.683986 kubelet[3681]: E0904 17:17:54.683922 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.391314 containerd[2154]: time="2024-09-04T17:17:55.391169397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:55.395177 containerd[2154]: time="2024-09-04T17:17:55.394638633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Sep 4 17:17:55.398231 containerd[2154]: time="2024-09-04T17:17:55.396930837Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:55.403743 containerd[2154]: time="2024-09-04T17:17:55.403570053Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:17:55.409812 containerd[2154]: time="2024-09-04T17:17:55.409478733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.542692611s" Sep 4 17:17:55.409812 containerd[2154]: time="2024-09-04T17:17:55.409550985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Sep 4 17:17:55.415992 containerd[2154]: time="2024-09-04T17:17:55.415453701Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:17:55.448563 containerd[2154]: time="2024-09-04T17:17:55.448489678Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b\"" Sep 4 17:17:55.449902 containerd[2154]: time="2024-09-04T17:17:55.449805934Z" level=info msg="StartContainer for \"82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b\"" Sep 4 17:17:55.570388 kubelet[3681]: I0904 17:17:55.568600 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6b774fb6df-6m4qm" podStartSLOduration=2.8243011879999997 podCreationTimestamp="2024-09-04 17:17:50 +0000 UTC" firstStartedPulling="2024-09-04 17:17:51.112771948 +0000 UTC 
m=+20.083759433" lastFinishedPulling="2024-09-04 17:17:53.85701451 +0000 UTC m=+22.828002007" observedRunningTime="2024-09-04 17:17:54.571028289 +0000 UTC m=+23.542015786" watchObservedRunningTime="2024-09-04 17:17:55.568543762 +0000 UTC m=+24.539531271" Sep 4 17:17:55.610396 kubelet[3681]: E0904 17:17:55.610344 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.610396 kubelet[3681]: W0904 17:17:55.610385 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.610592 kubelet[3681]: E0904 17:17:55.610422 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.612374 kubelet[3681]: E0904 17:17:55.612325 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.612374 kubelet[3681]: W0904 17:17:55.612363 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.612588 kubelet[3681]: E0904 17:17:55.612402 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.613630 kubelet[3681]: E0904 17:17:55.613475 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.613630 kubelet[3681]: W0904 17:17:55.613509 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.613630 kubelet[3681]: E0904 17:17:55.613544 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.614726 kubelet[3681]: E0904 17:17:55.614459 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.614726 kubelet[3681]: W0904 17:17:55.614494 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.614726 kubelet[3681]: E0904 17:17:55.614542 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.616484 kubelet[3681]: E0904 17:17:55.615444 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.616484 kubelet[3681]: W0904 17:17:55.615583 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.616484 kubelet[3681]: E0904 17:17:55.615622 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.616484 kubelet[3681]: E0904 17:17:55.616407 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.616484 kubelet[3681]: W0904 17:17:55.616433 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.616484 kubelet[3681]: E0904 17:17:55.616463 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.617384 kubelet[3681]: E0904 17:17:55.617131 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.617384 kubelet[3681]: W0904 17:17:55.617163 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.617384 kubelet[3681]: E0904 17:17:55.617199 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.619052 kubelet[3681]: E0904 17:17:55.618749 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.619052 kubelet[3681]: W0904 17:17:55.618887 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.619052 kubelet[3681]: E0904 17:17:55.618924 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.620158 kubelet[3681]: E0904 17:17:55.619473 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.620158 kubelet[3681]: W0904 17:17:55.619495 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.620158 kubelet[3681]: E0904 17:17:55.619524 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.620158 kubelet[3681]: E0904 17:17:55.619974 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.620158 kubelet[3681]: W0904 17:17:55.619993 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.620158 kubelet[3681]: E0904 17:17:55.620020 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.620724 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.622222 kubelet[3681]: W0904 17:17:55.620745 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.620775 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.621204 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.622222 kubelet[3681]: W0904 17:17:55.621222 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.621280 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.621594 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.622222 kubelet[3681]: W0904 17:17:55.621611 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.621636 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.622222 kubelet[3681]: E0904 17:17:55.622069 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.622740 kubelet[3681]: W0904 17:17:55.622088 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.622740 kubelet[3681]: E0904 17:17:55.622118 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.626290 kubelet[3681]: E0904 17:17:55.623030 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.626290 kubelet[3681]: W0904 17:17:55.623065 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.626290 kubelet[3681]: E0904 17:17:55.623099 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.679502 kubelet[3681]: E0904 17:17:55.677375 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.679502 kubelet[3681]: W0904 17:17:55.677408 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.679502 kubelet[3681]: E0904 17:17:55.677443 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.679502 kubelet[3681]: E0904 17:17:55.679395 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.681039 kubelet[3681]: W0904 17:17:55.679423 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.681039 kubelet[3681]: E0904 17:17:55.680187 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.682674 kubelet[3681]: E0904 17:17:55.682400 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.682674 kubelet[3681]: W0904 17:17:55.682446 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.682674 kubelet[3681]: E0904 17:17:55.682495 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.684970 kubelet[3681]: E0904 17:17:55.684937 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.686529 kubelet[3681]: W0904 17:17:55.685217 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.686529 kubelet[3681]: E0904 17:17:55.686389 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.689209 kubelet[3681]: E0904 17:17:55.688619 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.689209 kubelet[3681]: W0904 17:17:55.688650 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.690668 kubelet[3681]: E0904 17:17:55.689000 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.694377 kubelet[3681]: E0904 17:17:55.694053 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.694377 kubelet[3681]: W0904 17:17:55.694207 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.696085 kubelet[3681]: E0904 17:17:55.695778 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.696085 kubelet[3681]: E0904 17:17:55.695966 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.696085 kubelet[3681]: W0904 17:17:55.695990 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.696550 kubelet[3681]: E0904 17:17:55.696305 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.697763 kubelet[3681]: E0904 17:17:55.697624 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.697763 kubelet[3681]: W0904 17:17:55.697655 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.698555 kubelet[3681]: E0904 17:17:55.698481 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.699295 kubelet[3681]: E0904 17:17:55.699034 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.699576 kubelet[3681]: W0904 17:17:55.699487 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.699935 kubelet[3681]: E0904 17:17:55.699859 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.701632 containerd[2154]: time="2024-09-04T17:17:55.700942811Z" level=info msg="StartContainer for \"82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b\" returns successfully" Sep 4 17:17:55.701760 kubelet[3681]: E0904 17:17:55.701396 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.701760 kubelet[3681]: W0904 17:17:55.701421 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.704389 kubelet[3681]: E0904 17:17:55.704074 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.704389 kubelet[3681]: W0904 17:17:55.704102 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.705980 kubelet[3681]: E0904 17:17:55.705622 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.705980 kubelet[3681]: E0904 17:17:55.705934 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.707393 kubelet[3681]: E0904 17:17:55.706629 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.707393 kubelet[3681]: W0904 17:17:55.706655 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.707393 kubelet[3681]: E0904 17:17:55.706698 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.709062 kubelet[3681]: E0904 17:17:55.708680 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.709062 kubelet[3681]: W0904 17:17:55.708711 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.709062 kubelet[3681]: E0904 17:17:55.708905 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.711279 kubelet[3681]: E0904 17:17:55.710168 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.711279 kubelet[3681]: W0904 17:17:55.710201 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.711279 kubelet[3681]: E0904 17:17:55.710820 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.713267 kubelet[3681]: E0904 17:17:55.712084 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.713267 kubelet[3681]: W0904 17:17:55.712114 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.713267 kubelet[3681]: E0904 17:17:55.712158 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.714044 kubelet[3681]: E0904 17:17:55.713867 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.714044 kubelet[3681]: W0904 17:17:55.713939 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.714494 kubelet[3681]: E0904 17:17:55.714264 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.715322 kubelet[3681]: E0904 17:17:55.715144 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.715322 kubelet[3681]: W0904 17:17:55.715169 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.715322 kubelet[3681]: E0904 17:17:55.715208 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:17:55.716059 kubelet[3681]: E0904 17:17:55.715959 3681 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:17:55.716059 kubelet[3681]: W0904 17:17:55.715987 3681 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:17:55.716059 kubelet[3681]: E0904 17:17:55.716018 3681 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:17:55.820325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b-rootfs.mount: Deactivated successfully. Sep 4 17:17:56.284198 kubelet[3681]: E0904 17:17:56.282901 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:17:56.403996 containerd[2154]: time="2024-09-04T17:17:56.403887322Z" level=info msg="shim disconnected" id=82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b namespace=k8s.io Sep 4 17:17:56.403996 containerd[2154]: time="2024-09-04T17:17:56.403977034Z" level=warning msg="cleaning up after shim disconnected" id=82a0eaaaaa59a972207c92488d7deee26209db84dcc6922ab8a791d8dc3d522b namespace=k8s.io Sep 4 17:17:56.403996 containerd[2154]: time="2024-09-04T17:17:56.404001466Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:17:56.437806 containerd[2154]: time="2024-09-04T17:17:56.437676118Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:17:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 4 17:17:56.549175 containerd[2154]: time="2024-09-04T17:17:56.546494807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:17:58.285287 kubelet[3681]: E0904 17:17:58.285135 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:18:00.282921 kubelet[3681]: E0904 17:18:00.282358 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:18:00.361192 containerd[2154]: time="2024-09-04T17:18:00.360616670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:00.362510 containerd[2154]: time="2024-09-04T17:18:00.362178230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:18:00.364192 containerd[2154]: time="2024-09-04T17:18:00.364104038Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:00.368496 containerd[2154]: time="2024-09-04T17:18:00.368417354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:00.369971 containerd[2154]: time="2024-09-04T17:18:00.369925190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 3.820822747s" Sep 4 17:18:00.371107 containerd[2154]: time="2024-09-04T17:18:00.370136210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:18:00.374470 containerd[2154]: time="2024-09-04T17:18:00.374414294Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:18:00.406483 containerd[2154]: time="2024-09-04T17:18:00.406424090Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0\"" Sep 4 17:18:00.411158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1041524156.mount: Deactivated successfully. Sep 4 17:18:00.414310 containerd[2154]: time="2024-09-04T17:18:00.413490230Z" level=info msg="StartContainer for \"03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0\"" Sep 4 17:18:00.518755 containerd[2154]: time="2024-09-04T17:18:00.518704227Z" level=info msg="StartContainer for \"03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0\" returns successfully" Sep 4 17:18:01.916876 containerd[2154]: time="2024-09-04T17:18:01.916564482Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 4 17:18:01.961374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0-rootfs.mount: Deactivated successfully. 
Sep 4 17:18:01.995499 kubelet[3681]: I0904 17:18:01.995457 3681 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Sep 4 17:18:02.029399 kubelet[3681]: I0904 17:18:02.029147 3681 topology_manager.go:215] "Topology Admit Handler" podUID="a850e369-2158-48d3-9cdb-9ee791c676aa" podNamespace="calico-system" podName="calico-kube-controllers-65b4fbd445-7jsjb" Sep 4 17:18:02.039735 kubelet[3681]: I0904 17:18:02.036991 3681 topology_manager.go:215] "Topology Admit Handler" podUID="5fa29b39-8f42-470a-b117-da5e937a8acb" podNamespace="kube-system" podName="coredns-5dd5756b68-9jcvq" Sep 4 17:18:02.057740 kubelet[3681]: I0904 17:18:02.055914 3681 topology_manager.go:215] "Topology Admit Handler" podUID="092d48cd-1ea8-4018-aeb9-4d6ed136faf7" podNamespace="kube-system" podName="coredns-5dd5756b68-sffwl" Sep 4 17:18:02.132258 kubelet[3681]: I0904 17:18:02.132172 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2ktl\" (UniqueName: \"kubernetes.io/projected/a850e369-2158-48d3-9cdb-9ee791c676aa-kube-api-access-s2ktl\") pod \"calico-kube-controllers-65b4fbd445-7jsjb\" (UID: \"a850e369-2158-48d3-9cdb-9ee791c676aa\") " pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" Sep 4 17:18:02.132459 kubelet[3681]: I0904 17:18:02.132284 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gxr\" (UniqueName: \"kubernetes.io/projected/092d48cd-1ea8-4018-aeb9-4d6ed136faf7-kube-api-access-v6gxr\") pod \"coredns-5dd5756b68-sffwl\" (UID: \"092d48cd-1ea8-4018-aeb9-4d6ed136faf7\") " pod="kube-system/coredns-5dd5756b68-sffwl" Sep 4 17:18:02.132459 kubelet[3681]: I0904 17:18:02.132339 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw82w\" (UniqueName: \"kubernetes.io/projected/5fa29b39-8f42-470a-b117-da5e937a8acb-kube-api-access-zw82w\") pod \"coredns-5dd5756b68-9jcvq\" (UID: \"5fa29b39-8f42-470a-b117-da5e937a8acb\") " pod="kube-system/coredns-5dd5756b68-9jcvq" Sep 4 17:18:02.132459 kubelet[3681]: I0904 17:18:02.132393 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/092d48cd-1ea8-4018-aeb9-4d6ed136faf7-config-volume\") pod \"coredns-5dd5756b68-sffwl\" (UID: \"092d48cd-1ea8-4018-aeb9-4d6ed136faf7\") " pod="kube-system/coredns-5dd5756b68-sffwl" Sep 4 17:18:02.132459 kubelet[3681]: I0904 17:18:02.132443 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5fa29b39-8f42-470a-b117-da5e937a8acb-config-volume\") pod \"coredns-5dd5756b68-9jcvq\" (UID: \"5fa29b39-8f42-470a-b117-da5e937a8acb\") " pod="kube-system/coredns-5dd5756b68-9jcvq" Sep 4 17:18:02.132778 kubelet[3681]: I0904 17:18:02.132493 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a850e369-2158-48d3-9cdb-9ee791c676aa-tigera-ca-bundle\") pod \"calico-kube-controllers-65b4fbd445-7jsjb\" (UID: \"a850e369-2158-48d3-9cdb-9ee791c676aa\") " pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" Sep 4 17:18:02.298904 containerd[2154]: time="2024-09-04T17:18:02.298738336Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-cj7r5,Uid:beb1fab5-04d7-4346-bdd8-82707db57e16,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:02.372769 containerd[2154]: time="2024-09-04T17:18:02.372330796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65b4fbd445-7jsjb,Uid:a850e369-2158-48d3-9cdb-9ee791c676aa,Namespace:calico-system,Attempt:0,}" Sep 4 17:18:02.385751 containerd[2154]: time="2024-09-04T17:18:02.385693900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sffwl,Uid:092d48cd-1ea8-4018-aeb9-4d6ed136faf7,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:02.389729 containerd[2154]: time="2024-09-04T17:18:02.389635756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jcvq,Uid:5fa29b39-8f42-470a-b117-da5e937a8acb,Namespace:kube-system,Attempt:0,}" Sep 4 17:18:03.050037 containerd[2154]: time="2024-09-04T17:18:03.049848471Z" level=info msg="shim disconnected" id=03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0 namespace=k8s.io Sep 4 17:18:03.050037 containerd[2154]: time="2024-09-04T17:18:03.050027907Z" level=warning msg="cleaning up after shim disconnected" id=03a0b555c91533920615cecc88be21cd0bbb26478ea3e6c07a1f6d29d78652d0 namespace=k8s.io Sep 4 17:18:03.050973 containerd[2154]: time="2024-09-04T17:18:03.050051655Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:18:03.253165 containerd[2154]: time="2024-09-04T17:18:03.251120800Z" level=error msg="Failed to destroy network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.254508 containerd[2154]: time="2024-09-04T17:18:03.254293420Z" level=error msg="encountered an error cleaning up failed sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.256232 containerd[2154]: time="2024-09-04T17:18:03.254508136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65b4fbd445-7jsjb,Uid:a850e369-2158-48d3-9cdb-9ee791c676aa,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.257729 kubelet[3681]: E0904 17:18:03.257684 3681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.260577 kubelet[3681]: E0904 17:18:03.257772 3681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" Sep 4 17:18:03.260577 kubelet[3681]: E0904 17:18:03.257813 3681 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" Sep 4 17:18:03.260577 kubelet[3681]: E0904 17:18:03.257896 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65b4fbd445-7jsjb_calico-system(a850e369-2158-48d3-9cdb-9ee791c676aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65b4fbd445-7jsjb_calico-system(a850e369-2158-48d3-9cdb-9ee791c676aa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" podUID="a850e369-2158-48d3-9cdb-9ee791c676aa" Sep 4 17:18:03.260074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733-shm.mount: Deactivated successfully. Sep 4 17:18:03.267213 containerd[2154]: time="2024-09-04T17:18:03.267119944Z" level=error msg="Failed to destroy network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.270173 containerd[2154]: time="2024-09-04T17:18:03.269875528Z" level=error msg="encountered an error cleaning up failed sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.270696 containerd[2154]: time="2024-09-04T17:18:03.270419872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cj7r5,Uid:beb1fab5-04d7-4346-bdd8-82707db57e16,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.271190 kubelet[3681]: E0904 17:18:03.271139 3681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Sep 4 17:18:03.271464 kubelet[3681]: E0904 17:18:03.271261 3681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:18:03.271464 kubelet[3681]: E0904 17:18:03.271303 3681 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cj7r5" Sep 4 17:18:03.271464 kubelet[3681]: E0904 17:18:03.271392 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cj7r5_calico-system(beb1fab5-04d7-4346-bdd8-82707db57e16)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cj7r5_calico-system(beb1fab5-04d7-4346-bdd8-82707db57e16)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:18:03.277050 containerd[2154]: time="2024-09-04T17:18:03.276925360Z" level=error msg="Failed to destroy network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.277996 containerd[2154]: time="2024-09-04T17:18:03.277803928Z" level=error msg="encountered an error cleaning up failed sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.277996 containerd[2154]: time="2024-09-04T17:18:03.277889548Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sffwl,Uid:092d48cd-1ea8-4018-aeb9-4d6ed136faf7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.280210 kubelet[3681]: E0904 17:18:03.278577 3681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.280210 kubelet[3681]: E0904 17:18:03.278653 3681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-sffwl" Sep 4 17:18:03.280210 kubelet[3681]: E0904 17:18:03.278691 3681 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-sffwl" Sep 4 17:18:03.280519 kubelet[3681]: E0904 17:18:03.278772 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-sffwl_kube-system(092d48cd-1ea8-4018-aeb9-4d6ed136faf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-sffwl_kube-system(092d48cd-1ea8-4018-aeb9-4d6ed136faf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-sffwl" podUID="092d48cd-1ea8-4018-aeb9-4d6ed136faf7" Sep 4 17:18:03.291646 containerd[2154]: time="2024-09-04T17:18:03.291566177Z" level=error msg="Failed to destroy network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.292229 containerd[2154]: time="2024-09-04T17:18:03.292181729Z" level=error msg="encountered an error cleaning up failed sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.292346 containerd[2154]: time="2024-09-04T17:18:03.292288097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jcvq,Uid:5fa29b39-8f42-470a-b117-da5e937a8acb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.293312 kubelet[3681]: E0904 17:18:03.292940 3681 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.293312 kubelet[3681]: E0904 17:18:03.293028 3681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9jcvq" Sep 4 17:18:03.293312 kubelet[3681]: E0904 17:18:03.293069 3681 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-9jcvq" Sep 4 17:18:03.293569 kubelet[3681]: E0904 17:18:03.293304 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-9jcvq_kube-system(5fa29b39-8f42-470a-b117-da5e937a8acb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-9jcvq_kube-system(5fa29b39-8f42-470a-b117-da5e937a8acb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9jcvq" podUID="5fa29b39-8f42-470a-b117-da5e937a8acb" Sep 4 17:18:03.587805 containerd[2154]: time="2024-09-04T17:18:03.586541814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\"" Sep 4 17:18:03.588592 kubelet[3681]: I0904 17:18:03.586580 3681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:03.591384 containerd[2154]: time="2024-09-04T17:18:03.588928182Z" level=info msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" Sep 4 17:18:03.591384 containerd[2154]: time="2024-09-04T17:18:03.589383462Z" level=info msg="Ensure that sandbox 9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482 in task-service has been cleanup successfully" Sep 4 17:18:03.597725 kubelet[3681]: I0904 17:18:03.595465 3681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:03.597864 containerd[2154]: time="2024-09-04T17:18:03.596369886Z" level=info msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" Sep 4 17:18:03.597864 containerd[2154]: time="2024-09-04T17:18:03.596757174Z" level=info msg="Ensure that sandbox 8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5 in task-service has been cleanup successfully" Sep 4 17:18:03.603345 kubelet[3681]: I0904 17:18:03.603028 3681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:03.608128 
containerd[2154]: time="2024-09-04T17:18:03.608064642Z" level=info msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" Sep 4 17:18:03.618694 containerd[2154]: time="2024-09-04T17:18:03.615689442Z" level=info msg="Ensure that sandbox dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733 in task-service has been cleanup successfully" Sep 4 17:18:03.622711 kubelet[3681]: I0904 17:18:03.622654 3681 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:03.625271 containerd[2154]: time="2024-09-04T17:18:03.625181262Z" level=info msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" Sep 4 17:18:03.625728 containerd[2154]: time="2024-09-04T17:18:03.625526070Z" level=info msg="Ensure that sandbox ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973 in task-service has been cleanup successfully" Sep 4 17:18:03.707341 containerd[2154]: time="2024-09-04T17:18:03.707083747Z" level=error msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" failed" error="failed to destroy network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.709817 kubelet[3681]: E0904 17:18:03.709712 3681 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:03.709817 kubelet[3681]: E0904 17:18:03.709817 3681 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482"} Sep 4 17:18:03.710220 kubelet[3681]: E0904 17:18:03.709915 3681 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"092d48cd-1ea8-4018-aeb9-4d6ed136faf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:03.710220 kubelet[3681]: E0904 17:18:03.709973 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"092d48cd-1ea8-4018-aeb9-4d6ed136faf7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-sffwl" podUID="092d48cd-1ea8-4018-aeb9-4d6ed136faf7" Sep 4 17:18:03.722305 containerd[2154]: time="2024-09-04T17:18:03.721599151Z" level=error 
msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" failed" error="failed to destroy network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.722463 kubelet[3681]: E0904 17:18:03.721967 3681 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:03.722463 kubelet[3681]: E0904 17:18:03.722026 3681 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5"} Sep 4 17:18:03.723555 kubelet[3681]: E0904 17:18:03.723364 3681 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5fa29b39-8f42-470a-b117-da5e937a8acb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:03.723555 kubelet[3681]: E0904 17:18:03.723482 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5fa29b39-8f42-470a-b117-da5e937a8acb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-9jcvq" podUID="5fa29b39-8f42-470a-b117-da5e937a8acb" Sep 4 17:18:03.738792 containerd[2154]: time="2024-09-04T17:18:03.738705847Z" level=error msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" failed" error="failed to destroy network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.739412 kubelet[3681]: E0904 17:18:03.739084 3681 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:03.739412 kubelet[3681]: E0904 17:18:03.739175 3681 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733"} Sep 4 17:18:03.739412 kubelet[3681]: E0904 17:18:03.739276 3681 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a850e369-2158-48d3-9cdb-9ee791c676aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:03.739412 kubelet[3681]: E0904 17:18:03.739333 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a850e369-2158-48d3-9cdb-9ee791c676aa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" podUID="a850e369-2158-48d3-9cdb-9ee791c676aa" Sep 4 17:18:03.740720 containerd[2154]: time="2024-09-04T17:18:03.740635867Z" level=error msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" failed" error="failed to destroy network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:18:03.741402 kubelet[3681]: E0904 17:18:03.740981 3681 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:03.741402 kubelet[3681]: E0904 17:18:03.741086 3681 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973"} Sep 4 17:18:03.741402 kubelet[3681]: E0904 17:18:03.741148 3681 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"beb1fab5-04d7-4346-bdd8-82707db57e16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 4 17:18:03.741402 kubelet[3681]: E0904 17:18:03.741213 3681 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"beb1fab5-04d7-4346-bdd8-82707db57e16\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\\\": plugin type=\\\"calico\\\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cj7r5" podUID="beb1fab5-04d7-4346-bdd8-82707db57e16" Sep 4 17:18:03.960496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5-shm.mount: Deactivated successfully. Sep 4 17:18:03.961002 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482-shm.mount: Deactivated successfully. Sep 4 17:18:03.961371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973-shm.mount: Deactivated successfully. Sep 4 17:18:09.175485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1268917150.mount: Deactivated successfully. Sep 4 17:18:09.239019 containerd[2154]: time="2024-09-04T17:18:09.238959526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.240937 containerd[2154]: time="2024-09-04T17:18:09.240837934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Sep 4 17:18:09.242192 containerd[2154]: time="2024-09-04T17:18:09.242131834Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.247263 containerd[2154]: time="2024-09-04T17:18:09.247160650Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:09.248634 containerd[2154]: time="2024-09-04T17:18:09.248567158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 5.661951676s" Sep 4 17:18:09.248871 containerd[2154]: time="2024-09-04T17:18:09.248638654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Sep 4 17:18:09.276482 containerd[2154]: time="2024-09-04T17:18:09.276289618Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 4 17:18:09.307551 containerd[2154]: time="2024-09-04T17:18:09.307412830Z" level=info msg="CreateContainer within sandbox \"00ad30437ab4dc61696ed538b02a852aa1825837b62fe8512b15b2f3e7a5da7f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9557030706167cc3d243a3f380eb994d038e113170aaf010e927bf05dfc6d660\"" Sep 4 17:18:09.310168 containerd[2154]: time="2024-09-04T17:18:09.308069290Z" level=info msg="StartContainer for \"9557030706167cc3d243a3f380eb994d038e113170aaf010e927bf05dfc6d660\"" Sep 4 17:18:09.429613 containerd[2154]: time="2024-09-04T17:18:09.429422867Z" level=info msg="StartContainer for \"9557030706167cc3d243a3f380eb994d038e113170aaf010e927bf05dfc6d660\" returns successfully" Sep 4 17:18:09.559076 
kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 4 17:18:09.559279 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 4 17:18:09.712828 kubelet[3681]: I0904 17:18:09.711858 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-bmkn5" podStartSLOduration=1.967650068 podCreationTimestamp="2024-09-04 17:17:50 +0000 UTC" firstStartedPulling="2024-09-04 17:17:51.504968406 +0000 UTC m=+20.475955903" lastFinishedPulling="2024-09-04 17:18:09.249111346 +0000 UTC m=+38.220098843" observedRunningTime="2024-09-04 17:18:09.70906566 +0000 UTC m=+38.680053181" watchObservedRunningTime="2024-09-04 17:18:09.711793008 +0000 UTC m=+38.682780505" Sep 4 17:18:11.757344 kernel: bpftool[4877]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 4 17:18:12.197699 (udev-worker)[4705]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:12.203720 systemd-networkd[1692]: vxlan.calico: Link UP Sep 4 17:18:12.203735 systemd-networkd[1692]: vxlan.calico: Gained carrier Sep 4 17:18:12.276885 (udev-worker)[4917]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:13.284591 systemd-networkd[1692]: vxlan.calico: Gained IPv6LL Sep 4 17:18:15.284749 containerd[2154]: time="2024-09-04T17:18:15.284025544Z" level=info msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" Sep 4 17:18:15.288318 containerd[2154]: time="2024-09-04T17:18:15.285896428Z" level=info msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.422 [INFO][4994] k8s.go 608: Cleaning up netns ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.423 [INFO][4994] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" iface="eth0" netns="/var/run/netns/cni-3a19a3e2-7596-4b76-b8da-e9f46c9461cc" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.430 [INFO][4994] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" iface="eth0" netns="/var/run/netns/cni-3a19a3e2-7596-4b76-b8da-e9f46c9461cc" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.433 [INFO][4994] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" iface="eth0" netns="/var/run/netns/cni-3a19a3e2-7596-4b76-b8da-e9f46c9461cc" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.433 [INFO][4994] k8s.go 615: Releasing IP address(es) ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.433 [INFO][4994] utils.go 188: Calico CNI releasing IP address ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.495 [INFO][5012] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.495 [INFO][5012] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.495 [INFO][5012] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.510 [WARNING][5012] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.510 [INFO][5012] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.512 [INFO][5012] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:15.525018 containerd[2154]: 2024-09-04 17:18:15.519 [INFO][4994] k8s.go 621: Teardown processing complete. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:15.532842 containerd[2154]: time="2024-09-04T17:18:15.531641321Z" level=info msg="TearDown network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" successfully" Sep 4 17:18:15.535567 containerd[2154]: time="2024-09-04T17:18:15.533517677Z" level=info msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" returns successfully" Sep 4 17:18:15.538424 systemd[1]: run-netns-cni\x2d3a19a3e2\x2d7596\x2d4b76\x2db8da\x2de9f46c9461cc.mount: Deactivated successfully. Sep 4 17:18:15.544941 containerd[2154]: time="2024-09-04T17:18:15.543923537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sffwl,Uid:092d48cd-1ea8-4018-aeb9-4d6ed136faf7,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.413 [INFO][4995] k8s.go 608: Cleaning up netns ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.415 [INFO][4995] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" iface="eth0" netns="/var/run/netns/cni-96cfd627-46ea-9b5d-99bf-cd8405668a44" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.420 [INFO][4995] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" iface="eth0" netns="/var/run/netns/cni-96cfd627-46ea-9b5d-99bf-cd8405668a44" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.422 [INFO][4995] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" iface="eth0" netns="/var/run/netns/cni-96cfd627-46ea-9b5d-99bf-cd8405668a44" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.422 [INFO][4995] k8s.go 615: Releasing IP address(es) ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.422 [INFO][4995] utils.go 188: Calico CNI releasing IP address ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.499 [INFO][5008] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.499 [INFO][5008] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.513 [INFO][5008] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.536 [WARNING][5008] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.536 [INFO][5008] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.540 [INFO][5008] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:15.549992 containerd[2154]: 2024-09-04 17:18:15.546 [INFO][4995] k8s.go 621: Teardown processing complete. 
ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:15.552872 containerd[2154]: time="2024-09-04T17:18:15.550746833Z" level=info msg="TearDown network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" successfully" Sep 4 17:18:15.552872 containerd[2154]: time="2024-09-04T17:18:15.550845545Z" level=info msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" returns successfully" Sep 4 17:18:15.554739 containerd[2154]: time="2024-09-04T17:18:15.554683901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65b4fbd445-7jsjb,Uid:a850e369-2158-48d3-9cdb-9ee791c676aa,Namespace:calico-system,Attempt:1,}" Sep 4 17:18:15.560695 systemd[1]: run-netns-cni\x2d96cfd627\x2d46ea\x2d9b5d\x2d99bf\x2dcd8405668a44.mount: Deactivated successfully. Sep 4 17:18:15.969170 (udev-worker)[5058]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:18:15.979060 systemd-networkd[1692]: calia3c17137b09: Link UP Sep 4 17:18:15.984845 systemd-networkd[1692]: calia3c17137b09: Gained carrier Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.770 [INFO][5028] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0 calico-kube-controllers-65b4fbd445- calico-system a850e369-2158-48d3-9cdb-9ee791c676aa 722 0 2024-09-04 17:17:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65b4fbd445 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-29 calico-kube-controllers-65b4fbd445-7jsjb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3c17137b09 [] []}} ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.771 [INFO][5028] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.846 [INFO][5046] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" HandleID="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.867 [INFO][5046] ipam_plugin.go 270: Auto assigning IP ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" HandleID="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005b7ef0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-29", "pod":"calico-kube-controllers-65b4fbd445-7jsjb", 
"timestamp":"2024-09-04 17:18:15.846465619 +0000 UTC"}, Hostname:"ip-172-31-23-29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.867 [INFO][5046] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.867 [INFO][5046] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.867 [INFO][5046] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-29' Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.874 [INFO][5046] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.890 [INFO][5046] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.901 [INFO][5046] ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.905 [INFO][5046] ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.909 [INFO][5046] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.909 [INFO][5046] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.912 [INFO][5046] ipam.go 1685: Creating new handle: k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9 Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.921 [INFO][5046] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.933 [INFO][5046] ipam.go 1216: Successfully claimed IPs: [192.168.65.129/26] block=192.168.65.128/26 handle="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.933 [INFO][5046] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.129/26] handle="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" host="ip-172-31-23-29" Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.934 [INFO][5046] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:16.026181 containerd[2154]: 2024-09-04 17:18:15.935 [INFO][5046] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.65.129/26] IPv6=[] ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" HandleID="k8s-pod-network.1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:15.942 [INFO][5028] k8s.go 386: Populated endpoint ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0", GenerateName:"calico-kube-controllers-65b4fbd445-", Namespace:"calico-system", SelfLink:"", UID:"a850e369-2158-48d3-9cdb-9ee791c676aa", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65b4fbd445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"", Pod:"calico-kube-controllers-65b4fbd445-7jsjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c17137b09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:15.942 [INFO][5028] k8s.go 387: Calico CNI using IPs: [192.168.65.129/32] ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:15.942 [INFO][5028] dataplane_linux.go 68: Setting the host side veth name to calia3c17137b09 ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:15.987 [INFO][5028] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:15.994 [INFO][5028] k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0", GenerateName:"calico-kube-controllers-65b4fbd445-", Namespace:"calico-system", SelfLink:"", UID:"a850e369-2158-48d3-9cdb-9ee791c676aa", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65b4fbd445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9", Pod:"calico-kube-controllers-65b4fbd445-7jsjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c17137b09", MAC:"be:56:c0:03:e8:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.029432 containerd[2154]: 2024-09-04 17:18:16.021 [INFO][5028] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9" Namespace="calico-system" Pod="calico-kube-controllers-65b4fbd445-7jsjb" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:16.051689 systemd-networkd[1692]: cali6fc33a79ecc: Link UP Sep 4 17:18:16.055623 systemd-networkd[1692]: cali6fc33a79ecc: Gained carrier Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.770 [INFO][5022] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0 coredns-5dd5756b68- kube-system 092d48cd-1ea8-4018-aeb9-4d6ed136faf7 723 0 2024-09-04 17:17:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-29 coredns-5dd5756b68-sffwl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6fc33a79ecc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.771 [INFO][5022] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" 
WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.851 [INFO][5047] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" HandleID="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.885 [INFO][5047] ipam_plugin.go 270: Auto assigning IP ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" HandleID="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cfb20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-29", "pod":"coredns-5dd5756b68-sffwl", "timestamp":"2024-09-04 17:18:15.851864047 +0000 UTC"}, Hostname:"ip-172-31-23-29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.886 [INFO][5047] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.934 [INFO][5047] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.934 [INFO][5047] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-29' Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.940 [INFO][5047] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.951 [INFO][5047] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.969 [INFO][5047] ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.973 [INFO][5047] ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.983 [INFO][5047] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.989 [INFO][5047] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:15.994 [INFO][5047] ipam.go 1685: Creating new handle: k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7 Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:16.004 [INFO][5047] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:16.011 [INFO][5047] ipam.go 1216: Successfully claimed IPs: [192.168.65.130/26] block=192.168.65.128/26 
handle="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:16.011 [INFO][5047] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.130/26] handle="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" host="ip-172-31-23-29" Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:16.011 [INFO][5047] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:16.099001 containerd[2154]: 2024-09-04 17:18:16.012 [INFO][5047] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.65.130/26] IPv6=[] ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" HandleID="k8s-pod-network.84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.023 [INFO][5022] k8s.go 386: Populated endpoint ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"092d48cd-1ea8-4018-aeb9-4d6ed136faf7", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"", Pod:"coredns-5dd5756b68-sffwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc33a79ecc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.023 [INFO][5022] k8s.go 387: Calico CNI using IPs: [192.168.65.130/32] ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.023 [INFO][5022] dataplane_linux.go 68: Setting the host side veth name to cali6fc33a79ecc ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" 
Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.064 [INFO][5022] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.069 [INFO][5022] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"092d48cd-1ea8-4018-aeb9-4d6ed136faf7", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7", Pod:"coredns-5dd5756b68-sffwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc33a79ecc", MAC:"be:73:7c:e5:d8:06", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.105476 containerd[2154]: 2024-09-04 17:18:16.090 [INFO][5022] k8s.go 500: Wrote updated endpoint to datastore ContainerID="84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7" Namespace="kube-system" Pod="coredns-5dd5756b68-sffwl" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:16.149980 containerd[2154]: time="2024-09-04T17:18:16.149805400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:16.149980 containerd[2154]: time="2024-09-04T17:18:16.149898208Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:16.149980 containerd[2154]: time="2024-09-04T17:18:16.149923420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:16.154871 containerd[2154]: time="2024-09-04T17:18:16.150120160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:16.220420 containerd[2154]: time="2024-09-04T17:18:16.219676445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:16.220420 containerd[2154]: time="2024-09-04T17:18:16.219797717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:16.220420 containerd[2154]: time="2024-09-04T17:18:16.219835757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:16.220420 containerd[2154]: time="2024-09-04T17:18:16.220020377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:16.285934 containerd[2154]: time="2024-09-04T17:18:16.284355917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65b4fbd445-7jsjb,Uid:a850e369-2158-48d3-9cdb-9ee791c676aa,Namespace:calico-system,Attempt:1,} returns sandbox id \"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9\"" Sep 4 17:18:16.289035 containerd[2154]: time="2024-09-04T17:18:16.286820309Z" level=info msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" Sep 4 17:18:16.299114 containerd[2154]: time="2024-09-04T17:18:16.299046869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:18:16.356549 containerd[2154]: time="2024-09-04T17:18:16.356268965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-sffwl,Uid:092d48cd-1ea8-4018-aeb9-4d6ed136faf7,Namespace:kube-system,Attempt:1,} returns sandbox id \"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7\"" Sep 4 17:18:16.368964 containerd[2154]: time="2024-09-04T17:18:16.368875061Z" level=info msg="CreateContainer within sandbox \"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:16.398894 containerd[2154]: time="2024-09-04T17:18:16.398819598Z" level=info msg="CreateContainer within sandbox \"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a67c0b176f0cfdc8cf53c52cafd47d1acc0bc44fa388c66022e14cbc844a84b9\"" Sep 4 17:18:16.401641 containerd[2154]: time="2024-09-04T17:18:16.401588166Z" level=info msg="StartContainer for \"a67c0b176f0cfdc8cf53c52cafd47d1acc0bc44fa388c66022e14cbc844a84b9\"" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.413 [INFO][5175] k8s.go 608: Cleaning up netns ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.414 [INFO][5175] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" iface="eth0" netns="/var/run/netns/cni-f8836963-4b10-c9f4-bdb3-9ec79010dea3" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.414 [INFO][5175] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" iface="eth0" netns="/var/run/netns/cni-f8836963-4b10-c9f4-bdb3-9ec79010dea3" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.414 [INFO][5175] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" iface="eth0" netns="/var/run/netns/cni-f8836963-4b10-c9f4-bdb3-9ec79010dea3" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.415 [INFO][5175] k8s.go 615: Releasing IP address(es) ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.415 [INFO][5175] utils.go 188: Calico CNI releasing IP address ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.478 [INFO][5189] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.478 [INFO][5189] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.478 [INFO][5189] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.498 [WARNING][5189] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.499 [INFO][5189] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.502 [INFO][5189] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:16.516922 containerd[2154]: 2024-09-04 17:18:16.505 [INFO][5175] k8s.go 621: Teardown processing complete. 
ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:16.519462 containerd[2154]: time="2024-09-04T17:18:16.517168206Z" level=info msg="TearDown network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" successfully" Sep 4 17:18:16.519462 containerd[2154]: time="2024-09-04T17:18:16.517333578Z" level=info msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" returns successfully" Sep 4 17:18:16.519804 containerd[2154]: time="2024-09-04T17:18:16.519614562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jcvq,Uid:5fa29b39-8f42-470a-b117-da5e937a8acb,Namespace:kube-system,Attempt:1,}" Sep 4 17:18:16.538796 containerd[2154]: time="2024-09-04T17:18:16.537571038Z" level=info msg="StartContainer for \"a67c0b176f0cfdc8cf53c52cafd47d1acc0bc44fa388c66022e14cbc844a84b9\" returns successfully" Sep 4 17:18:16.550504 systemd[1]: run-netns-cni\x2df8836963\x2d4b10\x2dc9f4\x2dbdb3\x2d9ec79010dea3.mount: Deactivated successfully. Sep 4 17:18:16.860844 systemd-networkd[1692]: calidf8ea84c94e: Link UP Sep 4 17:18:16.865391 systemd-networkd[1692]: calidf8ea84c94e: Gained carrier Sep 4 17:18:16.890212 kubelet[3681]: I0904 17:18:16.890154 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sffwl" podStartSLOduration=33.890066648 podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:16.713623651 +0000 UTC m=+45.684611160" watchObservedRunningTime="2024-09-04 17:18:16.890066648 +0000 UTC m=+45.861054157" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.657 [INFO][5229] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0 coredns-5dd5756b68- kube-system 5fa29b39-8f42-470a-b117-da5e937a8acb 736 0 2024-09-04 17:17:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-29 coredns-5dd5756b68-9jcvq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf8ea84c94e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.657 [INFO][5229] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.783 [INFO][5242] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" HandleID="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.803 [INFO][5242] ipam_plugin.go 270: Auto assigning IP ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" 
HandleID="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000faea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-29", "pod":"coredns-5dd5756b68-9jcvq", "timestamp":"2024-09-04 17:18:16.783143132 +0000 UTC"}, Hostname:"ip-172-31-23-29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.804 [INFO][5242] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.804 [INFO][5242] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.804 [INFO][5242] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-29' Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.807 [INFO][5242] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.813 [INFO][5242] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.822 [INFO][5242] ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.825 [INFO][5242] ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.829 [INFO][5242] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.829 [INFO][5242] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.831 [INFO][5242] ipam.go 1685: Creating new handle: k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01 Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.838 [INFO][5242] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.846 [INFO][5242] ipam.go 1216: Successfully claimed IPs: [192.168.65.131/26] block=192.168.65.128/26 handle="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.846 [INFO][5242] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.131/26] handle="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" host="ip-172-31-23-29" Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.846 [INFO][5242] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:16.890934 containerd[2154]: 2024-09-04 17:18:16.847 [INFO][5242] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[] ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" HandleID="k8s-pod-network.d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 17:18:16.854 [INFO][5229] k8s.go 386: Populated endpoint ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5fa29b39-8f42-470a-b117-da5e937a8acb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"", Pod:"coredns-5dd5756b68-9jcvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf8ea84c94e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 17:18:16.855 [INFO][5229] k8s.go 387: Calico CNI using IPs: [192.168.65.131/32] ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 17:18:16.855 [INFO][5229] dataplane_linux.go 68: Setting the host side veth name to calidf8ea84c94e ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 17:18:16.862 [INFO][5229] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 
17:18:16.864 [INFO][5229] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5fa29b39-8f42-470a-b117-da5e937a8acb", ResourceVersion:"736", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01", Pod:"coredns-5dd5756b68-9jcvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf8ea84c94e", MAC:"ce:49:65:62:46:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:16.894620 containerd[2154]: 2024-09-04 17:18:16.884 [INFO][5229] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01" Namespace="kube-system" Pod="coredns-5dd5756b68-9jcvq" WorkloadEndpoint="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:16.934624 containerd[2154]: time="2024-09-04T17:18:16.934440428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:16.934624 containerd[2154]: time="2024-09-04T17:18:16.934557656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:16.934624 containerd[2154]: time="2024-09-04T17:18:16.934607252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:16.935178 containerd[2154]: time="2024-09-04T17:18:16.934961156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:17.040931 containerd[2154]: time="2024-09-04T17:18:17.040779857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-9jcvq,Uid:5fa29b39-8f42-470a-b117-da5e937a8acb,Namespace:kube-system,Attempt:1,} returns sandbox id \"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01\"" Sep 4 17:18:17.048710 containerd[2154]: time="2024-09-04T17:18:17.048624773Z" level=info msg="CreateContainer within sandbox \"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:18:17.068211 containerd[2154]: time="2024-09-04T17:18:17.068136257Z" level=info msg="CreateContainer within sandbox \"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8fbf16a054d9219a83ba4ea1e099e3730837b31110af0e43d608f3d9607fc3e0\"" Sep 4 17:18:17.069506 containerd[2154]: time="2024-09-04T17:18:17.069449453Z" level=info msg="StartContainer for \"8fbf16a054d9219a83ba4ea1e099e3730837b31110af0e43d608f3d9607fc3e0\"" Sep 4 17:18:17.123524 systemd-networkd[1692]: cali6fc33a79ecc: Gained IPv6LL Sep 4 17:18:17.162105 containerd[2154]: time="2024-09-04T17:18:17.161999981Z" level=info msg="StartContainer for \"8fbf16a054d9219a83ba4ea1e099e3730837b31110af0e43d608f3d9607fc3e0\" returns successfully" Sep 4 17:18:17.251646 systemd-networkd[1692]: calia3c17137b09: Gained IPv6LL Sep 4 17:18:17.739960 kubelet[3681]: I0904 17:18:17.738009 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-9jcvq" podStartSLOduration=34.737938388 podCreationTimestamp="2024-09-04 17:17:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:18:17.737188448 +0000 UTC m=+46.708176065" watchObservedRunningTime="2024-09-04 17:18:17.737938388 +0000 UTC m=+46.708925897" Sep 4 17:18:18.149352 systemd-networkd[1692]: calidf8ea84c94e: Gained IPv6LL Sep 4 17:18:18.287693 containerd[2154]: time="2024-09-04T17:18:18.287586787Z" level=info msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.547 [INFO][5369] k8s.go 608: Cleaning up netns ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.548 [INFO][5369] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" iface="eth0" netns="/var/run/netns/cni-dc499e0a-7ec4-3d19-598b-2b86637f42fb" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.548 [INFO][5369] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" iface="eth0" netns="/var/run/netns/cni-dc499e0a-7ec4-3d19-598b-2b86637f42fb" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.550 [INFO][5369] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" iface="eth0" netns="/var/run/netns/cni-dc499e0a-7ec4-3d19-598b-2b86637f42fb" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.550 [INFO][5369] k8s.go 615: Releasing IP address(es) ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.550 [INFO][5369] utils.go 188: Calico CNI releasing IP address ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.639 [INFO][5378] ipam_plugin.go 417: Releasing address using handleID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.640 [INFO][5378] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.641 [INFO][5378] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.663 [WARNING][5378] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.664 [INFO][5378] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.666 [INFO][5378] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:18.674625 containerd[2154]: 2024-09-04 17:18:18.669 [INFO][5369] k8s.go 621: Teardown processing complete. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:18.681274 containerd[2154]: time="2024-09-04T17:18:18.678915141Z" level=info msg="TearDown network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" successfully" Sep 4 17:18:18.681274 containerd[2154]: time="2024-09-04T17:18:18.679081761Z" level=info msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" returns successfully" Sep 4 17:18:18.685167 systemd[1]: run-netns-cni\x2ddc499e0a\x2d7ec4\x2d3d19\x2d598b\x2d2b86637f42fb.mount: Deactivated successfully. Sep 4 17:18:18.686215 containerd[2154]: time="2024-09-04T17:18:18.685992045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cj7r5,Uid:beb1fab5-04d7-4346-bdd8-82707db57e16,Namespace:calico-system,Attempt:1,}" Sep 4 17:18:18.764663 systemd[1]: Started sshd@7-172.31.23.29:22-139.178.89.65:37038.service - OpenSSH per-connection server daemon (139.178.89.65:37038). Sep 4 17:18:19.011553 sshd[5395]: Accepted publickey for core from 139.178.89.65 port 37038 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:19.018279 sshd[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:19.034365 systemd-logind[2121]: New session 8 of user core. 
Sep 4 17:18:19.042667 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 4 17:18:19.216608 systemd-networkd[1692]: calib44212cc8b1: Link UP Sep 4 17:18:19.220261 systemd-networkd[1692]: calib44212cc8b1: Gained carrier Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:18.926 [INFO][5385] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0 csi-node-driver- calico-system beb1fab5-04d7-4346-bdd8-82707db57e16 792 0 2024-09-04 17:17:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-23-29 csi-node-driver-cj7r5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] calib44212cc8b1 [] []}} ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:18.926 [INFO][5385] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.067 [INFO][5400] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" HandleID="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.097 [INFO][5400] ipam_plugin.go 270: Auto assigning IP ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" HandleID="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c7c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-29", "pod":"csi-node-driver-cj7r5", "timestamp":"2024-09-04 17:18:19.067846207 +0000 UTC"}, Hostname:"ip-172-31-23-29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.098 [INFO][5400] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.098 [INFO][5400] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.098 [INFO][5400] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-29' Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.102 [INFO][5400] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.111 [INFO][5400] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.122 [INFO][5400] ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.126 [INFO][5400] ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.131 [INFO][5400] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.131 [INFO][5400] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.134 [INFO][5400] ipam.go 1685: Creating new handle: k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.169 [INFO][5400] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.191 [INFO][5400] ipam.go 1216: Successfully claimed IPs: [192.168.65.132/26] block=192.168.65.128/26 handle="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.192 [INFO][5400] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.132/26] handle="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" host="ip-172-31-23-29" Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.192 [INFO][5400] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:18:19.290230 containerd[2154]: 2024-09-04 17:18:19.192 [INFO][5400] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.65.132/26] IPv6=[] ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" HandleID="k8s-pod-network.c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.201 [INFO][5385] k8s.go 386: Populated endpoint ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb1fab5-04d7-4346-bdd8-82707db57e16", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"", Pod:"csi-node-driver-cj7r5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib44212cc8b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.202 [INFO][5385] k8s.go 387: Calico CNI using IPs: [192.168.65.132/32] ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.203 [INFO][5385] dataplane_linux.go 68: Setting the host side veth name to calib44212cc8b1 ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.228 [INFO][5385] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.230 [INFO][5385] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb1fab5-04d7-4346-bdd8-82707db57e16", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a", Pod:"csi-node-driver-cj7r5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib44212cc8b1", MAC:"0a:5e:52:a4:b9:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:19.293331 containerd[2154]: 2024-09-04 17:18:19.252 [INFO][5385] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a" Namespace="calico-system" Pod="csi-node-driver-cj7r5" WorkloadEndpoint="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:19.324536 containerd[2154]: time="2024-09-04T17:18:19.324298280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:19.328760 containerd[2154]: time="2024-09-04T17:18:19.328545596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:18:19.331282 containerd[2154]: time="2024-09-04T17:18:19.330435932Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:19.344875 containerd[2154]: time="2024-09-04T17:18:19.344819912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:19.349623 containerd[2154]: time="2024-09-04T17:18:19.349567256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.050441451s" Sep 4 17:18:19.349885 containerd[2154]: time="2024-09-04T17:18:19.349852772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference 
\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:18:19.400770 containerd[2154]: time="2024-09-04T17:18:19.400687581Z" level=info msg="CreateContainer within sandbox \"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:18:19.413045 containerd[2154]: time="2024-09-04T17:18:19.411672477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:19.413389 containerd[2154]: time="2024-09-04T17:18:19.413029725Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:19.413389 containerd[2154]: time="2024-09-04T17:18:19.413091549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:19.413951 containerd[2154]: time="2024-09-04T17:18:19.413321217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:19.457998 containerd[2154]: time="2024-09-04T17:18:19.457841961Z" level=info msg="CreateContainer within sandbox \"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"af9d5d104f0589efbd785b07207beb6a0b96195897966f8da2ef2e3d2ae90a96\"" Sep 4 17:18:19.461829 sshd[5395]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:19.466730 containerd[2154]: time="2024-09-04T17:18:19.462224085Z" level=info msg="StartContainer for \"af9d5d104f0589efbd785b07207beb6a0b96195897966f8da2ef2e3d2ae90a96\"" Sep 4 17:18:19.482036 systemd[1]: sshd@7-172.31.23.29:22-139.178.89.65:37038.service: Deactivated successfully. Sep 4 17:18:19.499825 systemd[1]: session-8.scope: Deactivated successfully. Sep 4 17:18:19.505926 systemd-logind[2121]: Session 8 logged out. Waiting for processes to exit. Sep 4 17:18:19.512923 systemd-logind[2121]: Removed session 8. 
Sep 4 17:18:19.563776 containerd[2154]: time="2024-09-04T17:18:19.563493381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cj7r5,Uid:beb1fab5-04d7-4346-bdd8-82707db57e16,Namespace:calico-system,Attempt:1,} returns sandbox id \"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a\"" Sep 4 17:18:19.569273 containerd[2154]: time="2024-09-04T17:18:19.568762701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:18:19.658559 containerd[2154]: time="2024-09-04T17:18:19.658446274Z" level=info msg="StartContainer for \"af9d5d104f0589efbd785b07207beb6a0b96195897966f8da2ef2e3d2ae90a96\" returns successfully" Sep 4 17:18:19.783603 kubelet[3681]: I0904 17:18:19.781131 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65b4fbd445-7jsjb" podStartSLOduration=26.726381151 podCreationTimestamp="2024-09-04 17:17:50 +0000 UTC" firstStartedPulling="2024-09-04 17:18:16.296715833 +0000 UTC m=+45.267703330" lastFinishedPulling="2024-09-04 17:18:19.35028218 +0000 UTC m=+48.321269665" observedRunningTime="2024-09-04 17:18:19.776287798 +0000 UTC m=+48.747275319" watchObservedRunningTime="2024-09-04 17:18:19.779947486 +0000 UTC m=+48.750935031" Sep 4 17:18:20.644081 systemd-networkd[1692]: calib44212cc8b1: Gained IPv6LL Sep 4 17:18:20.847553 containerd[2154]: time="2024-09-04T17:18:20.847475364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:20.849295 containerd[2154]: time="2024-09-04T17:18:20.849184464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:18:20.850856 containerd[2154]: time="2024-09-04T17:18:20.850767444Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:20.855435 containerd[2154]: time="2024-09-04T17:18:20.855338460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:20.857172 containerd[2154]: time="2024-09-04T17:18:20.856951248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.288110583s" Sep 4 17:18:20.857172 containerd[2154]: time="2024-09-04T17:18:20.857004756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:18:20.861764 containerd[2154]: time="2024-09-04T17:18:20.861670356Z" level=info msg="CreateContainer within sandbox \"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:18:20.890652 containerd[2154]: time="2024-09-04T17:18:20.890580660Z" level=info msg="CreateContainer within sandbox \"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c00f407e6e8cb7b2e0a98a15f977634d2cc262775ce4c68bd591d74798674d01\"" 
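Each Calico sandbox setup in this log ends with an ipam_plugin.go summary of the form 'Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[] ContainerID="..." HandleID="..." Workload="..."'. A small sketch, assuming the capture is saved to a text file whose path is given on the command line, that tabulates which workload endpoint received which IPv4 address; the regex only mirrors the line format shown here:

```python
import re
import sys

# Mirrors the ipam_plugin.go summary entries in this capture, e.g.
#   Calico CNI IPAM assigned addresses IPv4=[192.168.65.131/26] IPv6=[]
#   ContainerID="d3a37e83..." HandleID="k8s-pod-network.d3a37e83..."
#   Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0"
ASSIGNED = re.compile(
    r"Calico CNI IPAM assigned addresses\s+"
    r"IPv4=\[(?P<ipv4>[^\]]*)\]\s+IPv6=\[(?P<ipv6>[^\]]*)\]\s+"
    r'ContainerID="(?P<cid>[0-9a-f]+)"\s+'
    r'HandleID="[^"]+"\s+'
    r'Workload="(?P<workload>[^"]+)"'
)

def ipam_assignments(text: str) -> list[tuple[str, str, str]]:
    """Return (workload endpoint, IPv4 list, short container ID) per logged assignment."""
    return [
        (m.group("workload"), m.group("ipv4") or "-", m.group("cid")[:12])
        for m in ASSIGNED.finditer(text)
    ]

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    for workload, ipv4, cid in ipam_assignments(text):
        print(f"{ipv4:22} {cid}  {workload}")
```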
Sep 4 17:18:20.892379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount505586678.mount: Deactivated successfully. Sep 4 17:18:20.898489 containerd[2154]: time="2024-09-04T17:18:20.894005232Z" level=info msg="StartContainer for \"c00f407e6e8cb7b2e0a98a15f977634d2cc262775ce4c68bd591d74798674d01\"" Sep 4 17:18:21.045620 containerd[2154]: time="2024-09-04T17:18:21.045540129Z" level=info msg="StartContainer for \"c00f407e6e8cb7b2e0a98a15f977634d2cc262775ce4c68bd591d74798674d01\" returns successfully" Sep 4 17:18:21.057488 containerd[2154]: time="2024-09-04T17:18:21.057089853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:18:22.956278 containerd[2154]: time="2024-09-04T17:18:22.953417606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:22.960512 containerd[2154]: time="2024-09-04T17:18:22.960456614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:18:22.963019 containerd[2154]: time="2024-09-04T17:18:22.962813786Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:22.972938 containerd[2154]: time="2024-09-04T17:18:22.972880778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:18:22.975926 containerd[2154]: time="2024-09-04T17:18:22.975857162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 1.918705233s" Sep 4 17:18:22.975926 containerd[2154]: time="2024-09-04T17:18:22.975922250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:18:23.000049 containerd[2154]: time="2024-09-04T17:18:22.999979994Z" level=info msg="CreateContainer within sandbox \"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:18:23.089583 containerd[2154]: time="2024-09-04T17:18:23.089393375Z" level=info msg="CreateContainer within sandbox \"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"47d75c26232488305c064d325f43254ba8f02a39eb2d2a27c9c6bd900e7d8550\"" Sep 4 17:18:23.100173 containerd[2154]: time="2024-09-04T17:18:23.100092815Z" level=info msg="StartContainer for \"47d75c26232488305c064d325f43254ba8f02a39eb2d2a27c9c6bd900e7d8550\"" Sep 4 17:18:23.189177 ntpd[2100]: Listen normally on 6 vxlan.calico 192.168.65.128:123 Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 6 vxlan.calico 192.168.65.128:123 Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 7 vxlan.calico [fe80::6469:64ff:fecc:aa90%4]:123 
Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 8 calia3c17137b09 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 9 cali6fc33a79ecc [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 10 calidf8ea84c94e [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:18:23.196585 ntpd[2100]: 4 Sep 17:18:23 ntpd[2100]: Listen normally on 11 calib44212cc8b1 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:18:23.193386 ntpd[2100]: Listen normally on 7 vxlan.calico [fe80::6469:64ff:fecc:aa90%4]:123 Sep 4 17:18:23.193487 ntpd[2100]: Listen normally on 8 calia3c17137b09 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:18:23.193563 ntpd[2100]: Listen normally on 9 cali6fc33a79ecc [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:18:23.193632 ntpd[2100]: Listen normally on 10 calidf8ea84c94e [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:18:23.193697 ntpd[2100]: Listen normally on 11 calib44212cc8b1 [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:18:23.339832 containerd[2154]: time="2024-09-04T17:18:23.339512532Z" level=info msg="StartContainer for \"47d75c26232488305c064d325f43254ba8f02a39eb2d2a27c9c6bd900e7d8550\" returns successfully" Sep 4 17:18:23.605534 kubelet[3681]: I0904 17:18:23.605378 3681 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:18:23.605534 kubelet[3681]: I0904 17:18:23.605479 3681 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:18:24.494141 systemd[1]: Started sshd@8-172.31.23.29:22-139.178.89.65:37054.service - OpenSSH per-connection server daemon (139.178.89.65:37054). Sep 4 17:18:24.695779 sshd[5618]: Accepted publickey for core from 139.178.89.65 port 37054 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:24.702856 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:24.716727 systemd-logind[2121]: New session 9 of user core. Sep 4 17:18:24.723905 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 4 17:18:25.028073 sshd[5618]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:25.037601 systemd[1]: sshd@8-172.31.23.29:22-139.178.89.65:37054.service: Deactivated successfully. Sep 4 17:18:25.045783 systemd[1]: session-9.scope: Deactivated successfully. Sep 4 17:18:25.048737 systemd-logind[2121]: Session 9 logged out. Waiting for processes to exit. Sep 4 17:18:25.053592 systemd-logind[2121]: Removed session 9. Sep 4 17:18:30.058834 systemd[1]: Started sshd@9-172.31.23.29:22-139.178.89.65:36004.service - OpenSSH per-connection server daemon (139.178.89.65:36004). Sep 4 17:18:30.235293 sshd[5635]: Accepted publickey for core from 139.178.89.65 port 36004 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:30.237980 sshd[5635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:30.245391 systemd-logind[2121]: New session 10 of user core. Sep 4 17:18:30.254721 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:18:30.496064 sshd[5635]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:30.503712 systemd[1]: sshd@9-172.31.23.29:22-139.178.89.65:36004.service: Deactivated successfully. 
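The containerd entries above also record each Calico image pull with its duration, e.g. 'Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" ... in 1.288110583s'. A hedged sketch for extracting those figures from a saved copy of this log (the file path is assumed), sorted slowest first; inside the journal's msg="..." fields the inner quotes appear escaped, which the pattern accounts for:

```python
import re
import sys

# Matches the containerd "Pulled image ... in <seconds>s" messages in this capture.
# Inside msg="..." the inner quotes are escaped, so the raw text contains \" around names.
PULLED = re.compile(
    r'Pulled image \\"(?P<image>[^"\\]+)\\".*?\bin (?P<secs>\d+(?:\.\d+)?)s',
    re.DOTALL,
)

def pull_durations(text: str) -> list[tuple[str, float]]:
    return [(m.group("image"), float(m.group("secs"))) for m in PULLED.finditer(text)]

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    for image, secs in sorted(pull_durations(text), key=lambda pair: pair[1], reverse=True):
        print(f"{secs:10.3f}s  {image}")
```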
Sep 4 17:18:30.510496 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:18:30.512423 systemd-logind[2121]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:18:30.515106 systemd-logind[2121]: Removed session 10. Sep 4 17:18:30.526741 systemd[1]: Started sshd@10-172.31.23.29:22-139.178.89.65:36006.service - OpenSSH per-connection server daemon (139.178.89.65:36006). Sep 4 17:18:30.715220 sshd[5650]: Accepted publickey for core from 139.178.89.65 port 36006 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:30.717857 sshd[5650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:30.726810 systemd-logind[2121]: New session 11 of user core. Sep 4 17:18:30.737723 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:18:31.321843 sshd[5650]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:31.323212 containerd[2154]: time="2024-09-04T17:18:31.322507796Z" level=info msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" Sep 4 17:18:31.351738 systemd[1]: sshd@10-172.31.23.29:22-139.178.89.65:36006.service: Deactivated successfully. Sep 4 17:18:31.364300 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:18:31.373951 systemd-logind[2121]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:18:31.384812 systemd[1]: Started sshd@11-172.31.23.29:22-139.178.89.65:36020.service - OpenSSH per-connection server daemon (139.178.89.65:36020). Sep 4 17:18:31.392367 systemd-logind[2121]: Removed session 11. Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.446 [WARNING][5677] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"092d48cd-1ea8-4018-aeb9-4d6ed136faf7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7", Pod:"coredns-5dd5756b68-sffwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc33a79ecc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.447 [INFO][5677] k8s.go 608: Cleaning up netns ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.447 [INFO][5677] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" iface="eth0" netns="" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.447 [INFO][5677] k8s.go 615: Releasing IP address(es) ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.447 [INFO][5677] utils.go 188: Calico CNI releasing IP address ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.484 [INFO][5684] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.484 [INFO][5684] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.484 [INFO][5684] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.503 [WARNING][5684] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.503 [INFO][5684] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.506 [INFO][5684] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:31.510827 containerd[2154]: 2024-09-04 17:18:31.508 [INFO][5677] k8s.go 621: Teardown processing complete. 
ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.510827 containerd[2154]: time="2024-09-04T17:18:31.510741369Z" level=info msg="TearDown network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" successfully" Sep 4 17:18:31.510827 containerd[2154]: time="2024-09-04T17:18:31.510780081Z" level=info msg="StopPodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" returns successfully" Sep 4 17:18:31.512335 containerd[2154]: time="2024-09-04T17:18:31.512229009Z" level=info msg="RemovePodSandbox for \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" Sep 4 17:18:31.512541 containerd[2154]: time="2024-09-04T17:18:31.512368173Z" level=info msg="Forcibly stopping sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\"" Sep 4 17:18:31.579490 sshd[5676]: Accepted publickey for core from 139.178.89.65 port 36020 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:31.586526 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:31.604280 systemd-logind[2121]: New session 12 of user core. Sep 4 17:18:31.613958 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.591 [WARNING][5702] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"092d48cd-1ea8-4018-aeb9-4d6ed136faf7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"84187644d5aeba2c5a57ac4587a65b7f8273d302e3aaafed5f712d7223a211f7", Pod:"coredns-5dd5756b68-sffwl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6fc33a79ecc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.592 [INFO][5702] k8s.go 608: Cleaning up netns ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 
17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.592 [INFO][5702] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" iface="eth0" netns="" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.592 [INFO][5702] k8s.go 615: Releasing IP address(es) ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.592 [INFO][5702] utils.go 188: Calico CNI releasing IP address ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.648 [INFO][5708] ipam_plugin.go 417: Releasing address using handleID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.648 [INFO][5708] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.648 [INFO][5708] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.666 [WARNING][5708] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.666 [INFO][5708] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" HandleID="k8s-pod-network.9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--sffwl-eth0" Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.669 [INFO][5708] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:31.674459 containerd[2154]: 2024-09-04 17:18:31.672 [INFO][5702] k8s.go 621: Teardown processing complete. ContainerID="9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482" Sep 4 17:18:31.674459 containerd[2154]: time="2024-09-04T17:18:31.674455498Z" level=info msg="TearDown network for sandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" successfully" Sep 4 17:18:31.679218 containerd[2154]: time="2024-09-04T17:18:31.679139266Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:18:31.679626 containerd[2154]: time="2024-09-04T17:18:31.679284214Z" level=info msg="RemovePodSandbox \"9d20b138a59fcd7cdcf08879e80e536d6bafa89e4058382e9f7989d7d3056482\" returns successfully" Sep 4 17:18:31.680283 containerd[2154]: time="2024-09-04T17:18:31.679941550Z" level=info msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.767 [WARNING][5728] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0", GenerateName:"calico-kube-controllers-65b4fbd445-", Namespace:"calico-system", SelfLink:"", UID:"a850e369-2158-48d3-9cdb-9ee791c676aa", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65b4fbd445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9", Pod:"calico-kube-controllers-65b4fbd445-7jsjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c17137b09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.768 [INFO][5728] k8s.go 608: Cleaning up netns ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.768 [INFO][5728] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" iface="eth0" netns="" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.768 [INFO][5728] k8s.go 615: Releasing IP address(es) ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.768 [INFO][5728] utils.go 188: Calico CNI releasing IP address ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.820 [INFO][5741] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.821 [INFO][5741] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.821 [INFO][5741] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.837 [WARNING][5741] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.837 [INFO][5741] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.840 [INFO][5741] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:31.847440 containerd[2154]: 2024-09-04 17:18:31.843 [INFO][5728] k8s.go 621: Teardown processing complete. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:31.847440 containerd[2154]: time="2024-09-04T17:18:31.846799354Z" level=info msg="TearDown network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" successfully" Sep 4 17:18:31.847440 containerd[2154]: time="2024-09-04T17:18:31.846854914Z" level=info msg="StopPodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" returns successfully" Sep 4 17:18:31.850055 containerd[2154]: time="2024-09-04T17:18:31.849129394Z" level=info msg="RemovePodSandbox for \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" Sep 4 17:18:31.850055 containerd[2154]: time="2024-09-04T17:18:31.849175462Z" level=info msg="Forcibly stopping sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\"" Sep 4 17:18:31.924492 sshd[5676]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:31.931457 systemd[1]: sshd@11-172.31.23.29:22-139.178.89.65:36020.service: Deactivated successfully. Sep 4 17:18:31.932475 systemd-logind[2121]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:18:31.939603 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:18:31.943300 systemd-logind[2121]: Removed session 12. Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.956 [WARNING][5759] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0", GenerateName:"calico-kube-controllers-65b4fbd445-", Namespace:"calico-system", SelfLink:"", UID:"a850e369-2158-48d3-9cdb-9ee791c676aa", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65b4fbd445", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"1f883b94fd1fe1ca77abac48bb10bf11597652a83aa3ede841f36672d2b3f7c9", Pod:"calico-kube-controllers-65b4fbd445-7jsjb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.65.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c17137b09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.956 [INFO][5759] k8s.go 608: Cleaning up netns ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.956 [INFO][5759] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" iface="eth0" netns="" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.956 [INFO][5759] k8s.go 615: Releasing IP address(es) ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.956 [INFO][5759] utils.go 188: Calico CNI releasing IP address ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.995 [INFO][5768] ipam_plugin.go 417: Releasing address using handleID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.995 [INFO][5768] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:31.995 [INFO][5768] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:32.008 [WARNING][5768] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:32.008 [INFO][5768] ipam_plugin.go 445: Releasing address using workloadID ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" HandleID="k8s-pod-network.dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Workload="ip--172--31--23--29-k8s-calico--kube--controllers--65b4fbd445--7jsjb-eth0" Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:32.011 [INFO][5768] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.015701 containerd[2154]: 2024-09-04 17:18:32.013 [INFO][5759] k8s.go 621: Teardown processing complete. ContainerID="dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733" Sep 4 17:18:32.016504 containerd[2154]: time="2024-09-04T17:18:32.016377499Z" level=info msg="TearDown network for sandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" successfully" Sep 4 17:18:32.022261 containerd[2154]: time="2024-09-04T17:18:32.022111963Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:18:32.022261 containerd[2154]: time="2024-09-04T17:18:32.022216051Z" level=info msg="RemovePodSandbox \"dcffc6975e49b82624db10f9ac85fff63fad0abdb8aba8f818428d0293dac733\" returns successfully" Sep 4 17:18:32.023175 containerd[2154]: time="2024-09-04T17:18:32.023000755Z" level=info msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.103 [WARNING][5786] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5fa29b39-8f42-470a-b117-da5e937a8acb", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01", Pod:"coredns-5dd5756b68-9jcvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf8ea84c94e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.104 [INFO][5786] k8s.go 608: Cleaning up netns ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.104 [INFO][5786] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" iface="eth0" netns="" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.104 [INFO][5786] k8s.go 615: Releasing IP address(es) ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.104 [INFO][5786] utils.go 188: Calico CNI releasing IP address ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.146 [INFO][5792] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.147 [INFO][5792] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.147 [INFO][5792] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.161 [WARNING][5792] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.161 [INFO][5792] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.165 [INFO][5792] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.170144 containerd[2154]: 2024-09-04 17:18:32.167 [INFO][5786] k8s.go 621: Teardown processing complete. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.171085 containerd[2154]: time="2024-09-04T17:18:32.170196332Z" level=info msg="TearDown network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" successfully" Sep 4 17:18:32.171085 containerd[2154]: time="2024-09-04T17:18:32.170459768Z" level=info msg="StopPodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" returns successfully" Sep 4 17:18:32.172437 containerd[2154]: time="2024-09-04T17:18:32.171949352Z" level=info msg="RemovePodSandbox for \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" Sep 4 17:18:32.172437 containerd[2154]: time="2024-09-04T17:18:32.172015076Z" level=info msg="Forcibly stopping sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\"" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.243 [WARNING][5810] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"5fa29b39-8f42-470a-b117-da5e937a8acb", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"d3a37e830f09b665a8c645ba54d62ae144bc62860fb2a4a6e42e65ec4def7c01", Pod:"coredns-5dd5756b68-9jcvq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.65.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf8ea84c94e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.244 [INFO][5810] k8s.go 608: Cleaning up netns ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.244 [INFO][5810] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" iface="eth0" netns="" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.244 [INFO][5810] k8s.go 615: Releasing IP address(es) ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.244 [INFO][5810] utils.go 188: Calico CNI releasing IP address ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.288 [INFO][5816] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.288 [INFO][5816] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.288 [INFO][5816] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.300 [WARNING][5816] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.300 [INFO][5816] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" HandleID="k8s-pod-network.8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Workload="ip--172--31--23--29-k8s-coredns--5dd5756b68--9jcvq-eth0" Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.302 [INFO][5816] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.308334 containerd[2154]: 2024-09-04 17:18:32.305 [INFO][5810] k8s.go 621: Teardown processing complete. ContainerID="8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5" Sep 4 17:18:32.309732 containerd[2154]: time="2024-09-04T17:18:32.308379153Z" level=info msg="TearDown network for sandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" successfully" Sep 4 17:18:32.315104 containerd[2154]: time="2024-09-04T17:18:32.315034617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:18:32.315265 containerd[2154]: time="2024-09-04T17:18:32.315143745Z" level=info msg="RemovePodSandbox \"8eaf8d37ffe57ac81f5b20ede984153779d21d40c78b84681f2eb9ec1c820ff5\" returns successfully" Sep 4 17:18:32.316297 containerd[2154]: time="2024-09-04T17:18:32.316036605Z" level=info msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.382 [WARNING][5834] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb1fab5-04d7-4346-bdd8-82707db57e16", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a", Pod:"csi-node-driver-cj7r5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib44212cc8b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.383 [INFO][5834] k8s.go 608: Cleaning up netns ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.384 [INFO][5834] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" iface="eth0" netns="" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.384 [INFO][5834] k8s.go 615: Releasing IP address(es) ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.384 [INFO][5834] utils.go 188: Calico CNI releasing IP address ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.454 [INFO][5841] ipam_plugin.go 417: Releasing address using handleID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.454 [INFO][5841] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.455 [INFO][5841] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.468 [WARNING][5841] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.468 [INFO][5841] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.471 [INFO][5841] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.476775 containerd[2154]: 2024-09-04 17:18:32.474 [INFO][5834] k8s.go 621: Teardown processing complete. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.480812 containerd[2154]: time="2024-09-04T17:18:32.476754022Z" level=info msg="TearDown network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" successfully" Sep 4 17:18:32.480812 containerd[2154]: time="2024-09-04T17:18:32.478913938Z" level=info msg="StopPodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" returns successfully" Sep 4 17:18:32.480812 containerd[2154]: time="2024-09-04T17:18:32.479691706Z" level=info msg="RemovePodSandbox for \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" Sep 4 17:18:32.480812 containerd[2154]: time="2024-09-04T17:18:32.479739574Z" level=info msg="Forcibly stopping sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\"" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.554 [WARNING][5877] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"beb1fab5-04d7-4346-bdd8-82707db57e16", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"c652ebad2211a1716a827fa32f8894d11cf679cf708d00629a3aed19c015df5a", Pod:"csi-node-driver-cj7r5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.65.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calib44212cc8b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.554 [INFO][5877] k8s.go 608: Cleaning up netns ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.554 [INFO][5877] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" iface="eth0" netns="" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.554 [INFO][5877] k8s.go 615: Releasing IP address(es) ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.554 [INFO][5877] utils.go 188: Calico CNI releasing IP address ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.597 [INFO][5883] ipam_plugin.go 417: Releasing address using handleID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.597 [INFO][5883] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.597 [INFO][5883] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.616 [WARNING][5883] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.616 [INFO][5883] ipam_plugin.go 445: Releasing address using workloadID ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" HandleID="k8s-pod-network.ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Workload="ip--172--31--23--29-k8s-csi--node--driver--cj7r5-eth0" Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.619 [INFO][5883] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:18:32.632846 containerd[2154]: 2024-09-04 17:18:32.624 [INFO][5877] k8s.go 621: Teardown processing complete. ContainerID="ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973" Sep 4 17:18:32.634150 containerd[2154]: time="2024-09-04T17:18:32.632861878Z" level=info msg="TearDown network for sandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" successfully" Sep 4 17:18:32.638981 containerd[2154]: time="2024-09-04T17:18:32.638876662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:18:32.639150 containerd[2154]: time="2024-09-04T17:18:32.638988550Z" level=info msg="RemovePodSandbox \"ab82094c12495dc65873ba30afabd2cfee820268d80971f4d4acc38f56230973\" returns successfully" Sep 4 17:18:34.071341 kubelet[3681]: I0904 17:18:34.071279 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-cj7r5" podStartSLOduration=40.6623348 podCreationTimestamp="2024-09-04 17:17:50 +0000 UTC" firstStartedPulling="2024-09-04 17:18:19.567798573 +0000 UTC m=+48.538786070" lastFinishedPulling="2024-09-04 17:18:22.97665317 +0000 UTC m=+51.947640679" observedRunningTime="2024-09-04 17:18:23.855867243 +0000 UTC m=+52.826854728" watchObservedRunningTime="2024-09-04 17:18:34.071189409 +0000 UTC m=+63.042176906" Sep 4 17:18:36.954709 systemd[1]: Started sshd@12-172.31.23.29:22-139.178.89.65:36026.service - OpenSSH per-connection server daemon (139.178.89.65:36026). Sep 4 17:18:37.137451 sshd[5926]: Accepted publickey for core from 139.178.89.65 port 36026 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:37.140185 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:37.148150 systemd-logind[2121]: New session 13 of user core. Sep 4 17:18:37.157834 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:18:37.454574 sshd[5926]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:37.459564 systemd[1]: sshd@12-172.31.23.29:22-139.178.89.65:36026.service: Deactivated successfully. Sep 4 17:18:37.467911 systemd-logind[2121]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:18:37.468766 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:18:37.474024 systemd-logind[2121]: Removed session 13. Sep 4 17:18:42.484732 systemd[1]: Started sshd@13-172.31.23.29:22-139.178.89.65:36762.service - OpenSSH per-connection server daemon (139.178.89.65:36762). 
Sep 4 17:18:42.661474 sshd[5942]: Accepted publickey for core from 139.178.89.65 port 36762 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:42.664215 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:42.672769 systemd-logind[2121]: New session 14 of user core. Sep 4 17:18:42.681822 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:18:42.941457 sshd[5942]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:42.948827 systemd[1]: sshd@13-172.31.23.29:22-139.178.89.65:36762.service: Deactivated successfully. Sep 4 17:18:42.956374 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:18:42.958877 systemd-logind[2121]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:18:42.963078 systemd-logind[2121]: Removed session 14. Sep 4 17:18:47.972799 systemd[1]: Started sshd@14-172.31.23.29:22-139.178.89.65:60236.service - OpenSSH per-connection server daemon (139.178.89.65:60236). Sep 4 17:18:48.143110 sshd[5963]: Accepted publickey for core from 139.178.89.65 port 60236 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:48.145759 sshd[5963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:48.156533 systemd-logind[2121]: New session 15 of user core. Sep 4 17:18:48.167474 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:18:48.473664 sshd[5963]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:48.492140 systemd[1]: sshd@14-172.31.23.29:22-139.178.89.65:60236.service: Deactivated successfully. Sep 4 17:18:48.508219 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:18:48.513519 systemd-logind[2121]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:18:48.522575 systemd-logind[2121]: Removed session 15. 
Sep 4 17:18:49.348275 kubelet[3681]: I0904 17:18:49.346559 3681 topology_manager.go:215] "Topology Admit Handler" podUID="b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca" podNamespace="calico-apiserver" podName="calico-apiserver-bf98cf4c-p47gh" Sep 4 17:18:49.360412 kubelet[3681]: W0904 17:18:49.358830 3681 reflector.go:535] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-23-29" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-29' and this object Sep 4 17:18:49.360412 kubelet[3681]: E0904 17:18:49.358901 3681 reflector.go:147] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-23-29" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-23-29' and this object Sep 4 17:18:49.517566 kubelet[3681]: I0904 17:18:49.517498 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca-calico-apiserver-certs\") pod \"calico-apiserver-bf98cf4c-p47gh\" (UID: \"b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca\") " pod="calico-apiserver/calico-apiserver-bf98cf4c-p47gh" Sep 4 17:18:49.517707 kubelet[3681]: I0904 17:18:49.517583 3681 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxb4c\" (UniqueName: \"kubernetes.io/projected/b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca-kube-api-access-sxb4c\") pod \"calico-apiserver-bf98cf4c-p47gh\" (UID: \"b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca\") " pod="calico-apiserver/calico-apiserver-bf98cf4c-p47gh" Sep 4 17:18:50.620110 kubelet[3681]: E0904 17:18:50.620048 3681 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Sep 4 17:18:50.620831 kubelet[3681]: E0904 17:18:50.620180 3681 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca-calico-apiserver-certs podName:b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca nodeName:}" failed. No retries permitted until 2024-09-04 17:18:51.120146072 +0000 UTC m=+80.091133557 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca-calico-apiserver-certs") pod "calico-apiserver-bf98cf4c-p47gh" (UID: "b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca") : failed to sync secret cache: timed out waiting for the condition Sep 4 17:18:51.174322 containerd[2154]: time="2024-09-04T17:18:51.172759946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf98cf4c-p47gh,Uid:b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:18:51.435539 systemd-networkd[1692]: califc4eb6aff3d: Link UP Sep 4 17:18:51.435951 systemd-networkd[1692]: califc4eb6aff3d: Gained carrier Sep 4 17:18:51.448117 (udev-worker)[6004]: Network interface NamePolicy= disabled on kernel command line. 
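The kubelet entries above show the expected startup race for calico-apiserver-bf98cf4c-p47gh: the node only gains access to the calico-apiserver-certs secret once the pod is bound to it, so the first MountVolume.SetUp attempt fails with "failed to sync secret cache" and kubelet schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms") until the secret informer catches up. Below is a minimal sketch of that retry-with-growing-delay pattern; it is generic code, not kubelet's nestedpendingoperations implementation.

```go
// Minimal sketch of the retry-with-delay behaviour kubelet logs above for the
// calico-apiserver-certs volume. Generic code, not kubelet's own implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountWithRetry retries op, doubling the wait between attempts up to
// maxBackoff, until op succeeds or the attempt budget is exhausted.
func mountWithRetry(op func() error, initial, maxBackoff time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v; no retries permitted for %s\n", i+1, err, delay)
		time.Sleep(delay)
		if delay *= 2; delay > maxBackoff {
			delay = maxBackoff
		}
	}
	return err
}

func main() {
	tries := 0
	err := mountWithRetry(func() error {
		tries++
		if tries < 3 { // the secret cache syncs on the third try in this toy run
			return errors.New("failed to sync secret cache: timed out waiting for the condition")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute, 5)
	fmt.Println("mounted:", err == nil)
}
```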
Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.296 [INFO][5990] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0 calico-apiserver-bf98cf4c- calico-apiserver b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca 1019 0 2024-09-04 17:18:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bf98cf4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-29 calico-apiserver-bf98cf4c-p47gh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califc4eb6aff3d [] []}} ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.296 [INFO][5990] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.353 [INFO][5996] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" HandleID="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Workload="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.370 [INFO][5996] ipam_plugin.go 270: Auto assigning IP ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" HandleID="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Workload="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000263bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-29", "pod":"calico-apiserver-bf98cf4c-p47gh", "timestamp":"2024-09-04 17:18:51.353031915 +0000 UTC"}, Hostname:"ip-172-31-23-29", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.370 [INFO][5996] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.370 [INFO][5996] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.370 [INFO][5996] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-29' Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.373 [INFO][5996] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.380 [INFO][5996] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.389 [INFO][5996] ipam.go 489: Trying affinity for 192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.393 [INFO][5996] ipam.go 155: Attempting to load block cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.400 [INFO][5996] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.65.128/26 host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.400 [INFO][5996] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.65.128/26 handle="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.404 [INFO][5996] ipam.go 1685: Creating new handle: k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440 Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.414 [INFO][5996] ipam.go 1203: Writing block in order to claim IPs block=192.168.65.128/26 handle="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.423 [INFO][5996] ipam.go 1216: Successfully claimed IPs: [192.168.65.133/26] block=192.168.65.128/26 handle="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.424 [INFO][5996] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.65.133/26] handle="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" host="ip-172-31-23-29" Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.424 [INFO][5996] ipam_plugin.go 379: Released host-wide IPAM lock. 
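The ipam.go entries above trace the full assignment for the new calico-apiserver pod: take the host-wide IPAM lock, confirm the node's affinity to block 192.168.65.128/26, claim the next free address (192.168.65.133) under a new handle, write the block back, release the lock. A compact, self-contained Go sketch of that sequence is below, using an in-memory block and mutex as stand-ins for Calico's datastore rather than its real IPAM implementation.

```go
// Compact sketch of the block-affine assignment sequence traced in the
// ipam.go log lines above. The block and lock are in-memory stand-ins for
// Calico's datastore, not its real IPAM implementation.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr     *net.IPNet      // e.g. 192.168.65.128/26
	affinity string          // host this block is affine to
	used     map[string]bool // addresses already handed out
}

var hostWideIPAMLock sync.Mutex // the "host-wide IPAM lock" from the log

// autoAssign mirrors the logged flow: lock, check affinity, scan the block
// for a free address, record it against a handle, unlock.
func autoAssign(b *block, host, handle string) (net.IP, error) {
	hostWideIPAMLock.Lock()
	defer hostWideIPAMLock.Unlock()

	if b.affinity != host {
		return nil, fmt.Errorf("block %s is not affine to %s", b.cidr, host)
	}
	// Walk addresses in the block, skipping the network address and used ones.
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = nextIP(ip) {
		s := ip.String()
		if !b.used[s] && !ip.Equal(b.cidr.IP) {
			b.used[s] = true
			fmt.Printf("handle %s claimed %s/26\n", handle, s)
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

// nextIP returns the address immediately after ip without mutating it.
func nextIP(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.65.128/26")
	b := &block{cidr: cidr, affinity: "ip-172-31-23-29", used: map[string]bool{
		"192.168.65.129": true, "192.168.65.130": true,
		"192.168.65.131": true, "192.168.65.132": true,
	}}
	if ip, err := autoAssign(b, "ip-172-31-23-29", "k8s-pod-network.b6db77e6d8"); err == nil {
		fmt.Println("assigned", ip) // next free address, matching the log's 192.168.65.133
	}
}
```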
Sep 4 17:18:51.470158 containerd[2154]: 2024-09-04 17:18:51.424 [INFO][5996] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.65.133/26] IPv6=[] ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" HandleID="k8s-pod-network.b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Workload="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.428 [INFO][5990] k8s.go 386: Populated endpoint ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0", GenerateName:"calico-apiserver-bf98cf4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf98cf4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"", Pod:"calico-apiserver-bf98cf4c-p47gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc4eb6aff3d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.429 [INFO][5990] k8s.go 387: Calico CNI using IPs: [192.168.65.133/32] ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.429 [INFO][5990] dataplane_linux.go 68: Setting the host side veth name to califc4eb6aff3d ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.432 [INFO][5990] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.439 [INFO][5990] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" 
Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0", GenerateName:"calico-apiserver-bf98cf4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 18, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bf98cf4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-29", ContainerID:"b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440", Pod:"calico-apiserver-bf98cf4c-p47gh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.65.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califc4eb6aff3d", MAC:"c2:2d:d6:46:91:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:18:51.471812 containerd[2154]: 2024-09-04 17:18:51.459 [INFO][5990] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440" Namespace="calico-apiserver" Pod="calico-apiserver-bf98cf4c-p47gh" WorkloadEndpoint="ip--172--31--23--29-k8s-calico--apiserver--bf98cf4c--p47gh-eth0" Sep 4 17:18:51.542168 containerd[2154]: time="2024-09-04T17:18:51.537321964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:18:51.542168 containerd[2154]: time="2024-09-04T17:18:51.537451624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:18:51.542168 containerd[2154]: time="2024-09-04T17:18:51.537487672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:51.542168 containerd[2154]: time="2024-09-04T17:18:51.540665488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:18:51.676121 containerd[2154]: time="2024-09-04T17:18:51.675793373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bf98cf4c-p47gh,Uid:b7cbec9a-3342-4d7f-b5be-4b1e022cd8ca,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440\"" Sep 4 17:18:51.682092 containerd[2154]: time="2024-09-04T17:18:51.681308009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Sep 4 17:18:52.901870 systemd-networkd[1692]: califc4eb6aff3d: Gained IPv6LL Sep 4 17:18:53.508806 systemd[1]: Started sshd@15-172.31.23.29:22-139.178.89.65:60246.service - OpenSSH per-connection server daemon (139.178.89.65:60246). Sep 4 17:18:53.735036 sshd[6073]: Accepted publickey for core from 139.178.89.65 port 60246 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:53.740038 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:53.754117 systemd-logind[2121]: New session 16 of user core. Sep 4 17:18:53.761943 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:18:54.152132 sshd[6073]: pam_unix(sshd:session): session closed for user core Sep 4 17:18:54.163638 systemd[1]: sshd@15-172.31.23.29:22-139.178.89.65:60246.service: Deactivated successfully. Sep 4 17:18:54.179472 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:18:54.216782 systemd-logind[2121]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:18:54.231771 systemd[1]: Started sshd@16-172.31.23.29:22-139.178.89.65:60250.service - OpenSSH per-connection server daemon (139.178.89.65:60250). Sep 4 17:18:54.235090 systemd-logind[2121]: Removed session 16. Sep 4 17:18:54.474064 sshd[6088]: Accepted publickey for core from 139.178.89.65 port 60250 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk Sep 4 17:18:54.475724 sshd[6088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 4 17:18:54.494572 systemd-logind[2121]: New session 17 of user core. Sep 4 17:18:54.501349 systemd[1]: Started session-17.scope - Session 17 of User core. 
Sep 4 17:18:54.812558 containerd[2154]: time="2024-09-04T17:18:54.812299772Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:54.815660 containerd[2154]: time="2024-09-04T17:18:54.815024600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Sep 4 17:18:54.818531 containerd[2154]: time="2024-09-04T17:18:54.818477156Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:54.825196 containerd[2154]: time="2024-09-04T17:18:54.825125517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:18:54.827316 containerd[2154]: time="2024-09-04T17:18:54.827216757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.14576782s"
Sep 4 17:18:54.827510 containerd[2154]: time="2024-09-04T17:18:54.827479317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Sep 4 17:18:54.841385 containerd[2154]: time="2024-09-04T17:18:54.841113081Z" level=info msg="CreateContainer within sandbox \"b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:18:54.865308 containerd[2154]: time="2024-09-04T17:18:54.860934009Z" level=info msg="CreateContainer within sandbox \"b6db77e6d88b1afaff4e1d11341e8ee0114d111e5d468c2b65a089028c79d440\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"640c07fc9cd650229841064414c2e04028652f65521f51caaba55d572ac28367\""
Sep 4 17:18:54.867158 containerd[2154]: time="2024-09-04T17:18:54.866849349Z" level=info msg="StartContainer for \"640c07fc9cd650229841064414c2e04028652f65521f51caaba55d572ac28367\""
Sep 4 17:18:55.168029 containerd[2154]: time="2024-09-04T17:18:55.167954430Z" level=info msg="StartContainer for \"640c07fc9cd650229841064414c2e04028652f65521f51caaba55d572ac28367\" returns successfully"
Sep 4 17:18:55.189287 ntpd[2100]: Listen normally on 12 califc4eb6aff3d [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:18:55.190660 ntpd[2100]: 4 Sep 17:18:55 ntpd[2100]: Listen normally on 12 califc4eb6aff3d [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:18:55.257862 sshd[6088]: pam_unix(sshd:session): session closed for user core
Sep 4 17:18:55.269709 systemd[1]: sshd@16-172.31.23.29:22-139.178.89.65:60250.service: Deactivated successfully.
Sep 4 17:18:55.280492 systemd[1]: session-17.scope: Deactivated successfully.
Sep 4 17:18:55.285947 systemd-logind[2121]: Session 17 logged out. Waiting for processes to exit.
Sep 4 17:18:55.299803 systemd[1]: Started sshd@17-172.31.23.29:22-139.178.89.65:60264.service - OpenSSH per-connection server daemon (139.178.89.65:60264).
Sep 4 17:18:55.301896 systemd-logind[2121]: Removed session 17.
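containerd reports the pull of ghcr.io/flatcar/calico/apiserver:v3.28.1 as taking 3.14576782s. That figure can be roughly cross-checked from the PullImage and "Pulled image" timestamps in the log; a small Go sketch, with the two timestamps copied from the entries above (pure parsing, no containerd APIs):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries above: the PullImage
	// request and the corresponding "Pulled image" completion message.
	started, _ := time.Parse(time.RFC3339Nano, "2024-09-04T17:18:51.681308009Z")
	finished, _ := time.Parse(time.RFC3339Nano, "2024-09-04T17:18:54.827216757Z")

	// containerd measures the pull internally as 3.14576782s; the gap
	// between the two log timestamps should land in the same ballpark.
	fmt.Println("pull took roughly", finished.Sub(started))
}
```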
Sep 4 17:18:55.512974 sshd[6138]: Accepted publickey for core from 139.178.89.65 port 60264 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:18:55.515105 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:18:55.524603 systemd-logind[2121]: New session 18 of user core.
Sep 4 17:18:55.534911 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 4 17:18:57.544842 sshd[6138]: pam_unix(sshd:session): session closed for user core
Sep 4 17:18:57.558500 systemd[1]: sshd@17-172.31.23.29:22-139.178.89.65:60264.service: Deactivated successfully.
Sep 4 17:18:57.561486 systemd-logind[2121]: Session 18 logged out. Waiting for processes to exit.
Sep 4 17:18:57.573926 systemd[1]: session-18.scope: Deactivated successfully.
Sep 4 17:18:57.585966 systemd-logind[2121]: Removed session 18.
Sep 4 17:18:57.600910 systemd[1]: Started sshd@18-172.31.23.29:22-139.178.89.65:42158.service - OpenSSH per-connection server daemon (139.178.89.65:42158).
Sep 4 17:18:57.838928 sshd[6162]: Accepted publickey for core from 139.178.89.65 port 42158 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:18:57.842077 sshd[6162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:18:57.867009 systemd-logind[2121]: New session 19 of user core.
Sep 4 17:18:57.873664 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 4 17:18:58.329065 kubelet[3681]: I0904 17:18:58.328911 3681 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-bf98cf4c-p47gh" podStartSLOduration=6.178273194 podCreationTimestamp="2024-09-04 17:18:49 +0000 UTC" firstStartedPulling="2024-09-04 17:18:51.678674153 +0000 UTC m=+80.649661650" lastFinishedPulling="2024-09-04 17:18:54.827990469 +0000 UTC m=+83.798977978" observedRunningTime="2024-09-04 17:18:55.98851345 +0000 UTC m=+84.959500959" watchObservedRunningTime="2024-09-04 17:18:58.327589522 +0000 UTC m=+87.298577019"
Sep 4 17:18:58.926851 sshd[6162]: pam_unix(sshd:session): session closed for user core
Sep 4 17:18:58.939033 systemd[1]: sshd@18-172.31.23.29:22-139.178.89.65:42158.service: Deactivated successfully.
Sep 4 17:18:58.948798 systemd[1]: session-19.scope: Deactivated successfully.
Sep 4 17:18:58.953801 systemd-logind[2121]: Session 19 logged out. Waiting for processes to exit.
Sep 4 17:18:58.965740 systemd[1]: Started sshd@19-172.31.23.29:22-139.178.89.65:42174.service - OpenSSH per-connection server daemon (139.178.89.65:42174).
Sep 4 17:18:58.969620 systemd-logind[2121]: Removed session 19.
Sep 4 17:18:59.170373 sshd[6181]: Accepted publickey for core from 139.178.89.65 port 42174 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:18:59.177510 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:18:59.198327 systemd-logind[2121]: New session 20 of user core.
Sep 4 17:18:59.212091 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:18:59.510743 sshd[6181]: pam_unix(sshd:session): session closed for user core
Sep 4 17:18:59.519093 systemd[1]: sshd@19-172.31.23.29:22-139.178.89.65:42174.service: Deactivated successfully.
Sep 4 17:18:59.530509 systemd-logind[2121]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:18:59.530914 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:18:59.541380 systemd-logind[2121]: Removed session 20.
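The kubelet's pod_startup_latency_tracker entry above bundles several wall-clock timestamps for the calico-apiserver pod. podStartSLOduration is the kubelet's own derived metric, but the raw intervals can be recomputed directly from the quoted times; a minimal sketch (Go standard library only, values copied from that entry, monotonic m=+... offsets dropped):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the kubelet's quoted wall-clock timestamps.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2024-09-04 17:18:49 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2024-09-04 17:18:51.678674153 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2024-09-04 17:18:54.827990469 +0000 UTC")
	running, _ := time.Parse(layout, "2024-09-04 17:18:55.98851345 +0000 UTC")

	fmt.Println("creation to observed running:", running.Sub(created)) // roughly 6.99s
	fmt.Println("of which image pulling:", pullEnd.Sub(pullStart))      // roughly 3.15s
}
```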
Sep 4 17:19:02.431551 systemd[1]: run-containerd-runc-k8s.io-af9d5d104f0589efbd785b07207beb6a0b96195897966f8da2ef2e3d2ae90a96-runc.tGGBBC.mount: Deactivated successfully.
Sep 4 17:19:04.542763 systemd[1]: Started sshd@20-172.31.23.29:22-139.178.89.65:42176.service - OpenSSH per-connection server daemon (139.178.89.65:42176).
Sep 4 17:19:04.729621 sshd[6246]: Accepted publickey for core from 139.178.89.65 port 42176 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:04.732543 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:04.743743 systemd-logind[2121]: New session 21 of user core.
Sep 4 17:19:04.748895 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:19:05.001059 sshd[6246]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:05.008715 systemd[1]: sshd@20-172.31.23.29:22-139.178.89.65:42176.service: Deactivated successfully.
Sep 4 17:19:05.017804 systemd-logind[2121]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:19:05.018506 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:19:05.021067 systemd-logind[2121]: Removed session 21.
Sep 4 17:19:10.030728 systemd[1]: Started sshd@21-172.31.23.29:22-139.178.89.65:38402.service - OpenSSH per-connection server daemon (139.178.89.65:38402).
Sep 4 17:19:10.209512 sshd[6283]: Accepted publickey for core from 139.178.89.65 port 38402 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:10.211481 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:10.219831 systemd-logind[2121]: New session 22 of user core.
Sep 4 17:19:10.229876 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:19:10.463570 sshd[6283]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:10.469870 systemd[1]: sshd@21-172.31.23.29:22-139.178.89.65:38402.service: Deactivated successfully.
Sep 4 17:19:10.479325 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:19:10.481191 systemd-logind[2121]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:19:10.484152 systemd-logind[2121]: Removed session 22.
Sep 4 17:19:15.495711 systemd[1]: Started sshd@22-172.31.23.29:22-139.178.89.65:38416.service - OpenSSH per-connection server daemon (139.178.89.65:38416).
Sep 4 17:19:15.676531 sshd[6306]: Accepted publickey for core from 139.178.89.65 port 38416 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:15.679542 sshd[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:15.689490 systemd-logind[2121]: New session 23 of user core.
Sep 4 17:19:15.696002 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:19:15.944763 sshd[6306]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:15.950333 systemd-logind[2121]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:19:15.951223 systemd[1]: sshd@22-172.31.23.29:22-139.178.89.65:38416.service: Deactivated successfully.
Sep 4 17:19:15.959627 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:19:15.961468 systemd-logind[2121]: Removed session 23.
Sep 4 17:19:20.981726 systemd[1]: Started sshd@23-172.31.23.29:22-139.178.89.65:40984.service - OpenSSH per-connection server daemon (139.178.89.65:40984).
Sep 4 17:19:21.156494 sshd[6320]: Accepted publickey for core from 139.178.89.65 port 40984 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:21.159303 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:21.167528 systemd-logind[2121]: New session 24 of user core.
Sep 4 17:19:21.173379 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:19:21.418585 sshd[6320]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:21.425881 systemd[1]: sshd@23-172.31.23.29:22-139.178.89.65:40984.service: Deactivated successfully.
Sep 4 17:19:21.425962 systemd-logind[2121]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:19:21.432103 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:19:21.434168 systemd-logind[2121]: Removed session 24.
Sep 4 17:19:26.451807 systemd[1]: Started sshd@24-172.31.23.29:22-139.178.89.65:40990.service - OpenSSH per-connection server daemon (139.178.89.65:40990).
Sep 4 17:19:26.647512 sshd[6339]: Accepted publickey for core from 139.178.89.65 port 40990 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:26.650355 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:26.660692 systemd-logind[2121]: New session 25 of user core.
Sep 4 17:19:26.664801 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:19:26.912232 sshd[6339]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:26.918900 systemd[1]: sshd@24-172.31.23.29:22-139.178.89.65:40990.service: Deactivated successfully.
Sep 4 17:19:26.928737 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:19:26.931490 systemd-logind[2121]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:19:26.933666 systemd-logind[2121]: Removed session 25.
Sep 4 17:19:31.944808 systemd[1]: Started sshd@25-172.31.23.29:22-139.178.89.65:48366.service - OpenSSH per-connection server daemon (139.178.89.65:48366).
Sep 4 17:19:32.127541 sshd[6357]: Accepted publickey for core from 139.178.89.65 port 48366 ssh2: RSA SHA256:IRxYwZpG2Kh+6kN1JT/TNpCW4pawGijsWR2Ejhy48gk
Sep 4 17:19:32.130274 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 4 17:19:32.138802 systemd-logind[2121]: New session 26 of user core.
Sep 4 17:19:32.143714 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:19:32.393736 sshd[6357]: pam_unix(sshd:session): session closed for user core
Sep 4 17:19:32.408536 systemd[1]: sshd@25-172.31.23.29:22-139.178.89.65:48366.service: Deactivated successfully.
Sep 4 17:19:32.426793 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:19:32.432342 systemd-logind[2121]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:19:32.435007 systemd-logind[2121]: Removed session 26.
Sep 4 17:19:45.973975 containerd[2154]: time="2024-09-04T17:19:45.973773311Z" level=info msg="shim disconnected" id=278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556 namespace=k8s.io
Sep 4 17:19:45.973975 containerd[2154]: time="2024-09-04T17:19:45.973904279Z" level=warning msg="cleaning up after shim disconnected" id=278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556 namespace=k8s.io
Sep 4 17:19:45.973975 containerd[2154]: time="2024-09-04T17:19:45.973961903Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:45.976753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556-rootfs.mount: Deactivated successfully.
Sep 4 17:19:46.118083 kubelet[3681]: I0904 17:19:46.118044 3681 scope.go:117] "RemoveContainer" containerID="278de8c8a161027dc88a54ea9a2ea84a4ba3b019cb942c9d21c16957b7af7556"
Sep 4 17:19:46.123737 containerd[2154]: time="2024-09-04T17:19:46.123646759Z" level=info msg="CreateContainer within sandbox \"9352e3ba7f1e8c1adf63dd22188b7c9d0e576079d9fe17111ef30efbcd728120\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:19:46.147020 containerd[2154]: time="2024-09-04T17:19:46.145987879Z" level=info msg="CreateContainer within sandbox \"9352e3ba7f1e8c1adf63dd22188b7c9d0e576079d9fe17111ef30efbcd728120\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9cdb1a393efe349ee70513f1939d07d5b0390f58e9df6f8bd0ad5e50f47704c4\""
Sep 4 17:19:46.148584 containerd[2154]: time="2024-09-04T17:19:46.148523983Z" level=info msg="StartContainer for \"9cdb1a393efe349ee70513f1939d07d5b0390f58e9df6f8bd0ad5e50f47704c4\""
Sep 4 17:19:46.274991 containerd[2154]: time="2024-09-04T17:19:46.274152308Z" level=info msg="StartContainer for \"9cdb1a393efe349ee70513f1939d07d5b0390f58e9df6f8bd0ad5e50f47704c4\" returns successfully"
Sep 4 17:19:47.152259 containerd[2154]: time="2024-09-04T17:19:47.149400944Z" level=info msg="shim disconnected" id=84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0 namespace=k8s.io
Sep 4 17:19:47.152259 containerd[2154]: time="2024-09-04T17:19:47.150134528Z" level=warning msg="cleaning up after shim disconnected" id=84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0 namespace=k8s.io
Sep 4 17:19:47.152259 containerd[2154]: time="2024-09-04T17:19:47.150160604Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:47.162079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0-rootfs.mount: Deactivated successfully.
Sep 4 17:19:48.142841 kubelet[3681]: I0904 17:19:48.141989 3681 scope.go:117] "RemoveContainer" containerID="84eca701f1706457f8bf84f4a64e3dc93cb72c1606d19255819512361eef38b0"
Sep 4 17:19:48.146511 containerd[2154]: time="2024-09-04T17:19:48.146450181Z" level=info msg="CreateContainer within sandbox \"93c1dd596a22fd004357c6a70e5e8bafa39214f635d46bd18b8b9c5536893797\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:19:48.173354 containerd[2154]: time="2024-09-04T17:19:48.173218425Z" level=info msg="CreateContainer within sandbox \"93c1dd596a22fd004357c6a70e5e8bafa39214f635d46bd18b8b9c5536893797\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"249752ac8abcfb0f108a8ae6888db3083ab8098cfb4ae897c65248c2da8761ee\""
Sep 4 17:19:48.177255 containerd[2154]: time="2024-09-04T17:19:48.175340686Z" level=info msg="StartContainer for \"249752ac8abcfb0f108a8ae6888db3083ab8098cfb4ae897c65248c2da8761ee\""
Sep 4 17:19:48.348274 containerd[2154]: time="2024-09-04T17:19:48.346026682Z" level=info msg="StartContainer for \"249752ac8abcfb0f108a8ae6888db3083ab8098cfb4ae897c65248c2da8761ee\" returns successfully"
Sep 4 17:19:50.836126 containerd[2154]: time="2024-09-04T17:19:50.836028699Z" level=info msg="shim disconnected" id=d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8 namespace=k8s.io
Sep 4 17:19:50.836126 containerd[2154]: time="2024-09-04T17:19:50.836107551Z" level=warning msg="cleaning up after shim disconnected" id=d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8 namespace=k8s.io
Sep 4 17:19:50.836126 containerd[2154]: time="2024-09-04T17:19:50.836129319Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:19:50.842152 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8-rootfs.mount: Deactivated successfully.
Sep 4 17:19:51.154475 kubelet[3681]: I0904 17:19:51.154433 3681 scope.go:117] "RemoveContainer" containerID="d6953daaabe73a33ff7c8d5a50217a34156d7e5df42762360b152e1c0f4b59c8"
Sep 4 17:19:51.158000 containerd[2154]: time="2024-09-04T17:19:51.157926720Z" level=info msg="CreateContainer within sandbox \"76e350680e7353fe6c19bd8f67354a3fa498a77d75cfbc8463d72e13c5550a4c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:19:51.203546 containerd[2154]: time="2024-09-04T17:19:51.202698937Z" level=info msg="CreateContainer within sandbox \"76e350680e7353fe6c19bd8f67354a3fa498a77d75cfbc8463d72e13c5550a4c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a9df5d56273a40bb5b5154056996f7dfa69dfe57b1f22f72175122c23f1744c4\""
Sep 4 17:19:51.205314 containerd[2154]: time="2024-09-04T17:19:51.204828865Z" level=info msg="StartContainer for \"a9df5d56273a40bb5b5154056996f7dfa69dfe57b1f22f72175122c23f1744c4\""
Sep 4 17:19:51.335142 containerd[2154]: time="2024-09-04T17:19:51.335048149Z" level=info msg="StartContainer for \"a9df5d56273a40bb5b5154056996f7dfa69dfe57b1f22f72175122c23f1744c4\" returns successfully"
Sep 4 17:19:51.842299 systemd[1]: run-containerd-runc-k8s.io-a9df5d56273a40bb5b5154056996f7dfa69dfe57b1f22f72175122c23f1744c4-runc.60l4tD.mount: Deactivated successfully.
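The sequence above repeats the same pattern three times (kube-controller-manager, tigera-operator, kube-scheduler): containerd reports a shim disconnect, systemd cleans up the rootfs mount, the kubelet removes the dead container, and a replacement is created in the same sandbox with Attempt incremented. A small, hypothetical stdlib-only scanner (not part of containerd or the kubelet) can pull that pattern out of a journal dump piped to stdin:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Patterns match the exact message shapes seen in the entries above, e.g.
	//   ... msg="shim disconnected" id=278de8c8... namespace=k8s.io
	//   ... "RemoveContainer" containerID="278de8c8..."
	disconnected := regexp.MustCompile(`msg="shim disconnected" id=([0-9a-f]{64})`)
	removed := regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]{64})"`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		line := sc.Text()
		if m := disconnected.FindStringSubmatch(line); m != nil {
			fmt.Println("shim exited for container", m[1])
		}
		if m := removed.FindStringSubmatch(line); m != nil {
			fmt.Println("kubelet removed container", m[1], "(a replacement attempt should follow)")
		}
	}
}
```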
Sep 4 17:19:54.452188 kubelet[3681]: E0904 17:19:54.451993 3681 controller.go:193] "Failed to update lease" err="Put \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Sep 4 17:20:04.452611 kubelet[3681]: E0904 17:20:04.452535 3681 controller.go:193] "Failed to update lease" err="Put \"https://172.31.23.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-29?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
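The lease-update failures above are client-side timeouts: the kubelet's PUT to the coordination API on 172.31.23.29:6443 did not receive response headers within its 10s budget. The same error text comes from Go's net/http client whenever its Timeout expires before headers arrive, which a self-contained sketch can reproduce against a deliberately slow test server (timeout shortened here so the demo finishes quickly):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A deliberately slow handler stands in for an API server that is not
	// answering within the client's timeout window.
	slow := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer slow.Close()

	// The kubelet's lease update uses a 10s request timeout; 500ms is enough
	// to demonstrate the same failure mode here.
	client := &http.Client{Timeout: 500 * time.Millisecond}
	req, _ := http.NewRequest(http.MethodPut, slow.URL, nil)
	_, err := client.Do(req)

	// Prints an error ending in "(Client.Timeout exceeded while awaiting
	// headers)", the same condition reported by the kubelet above.
	fmt.Println(err)
}
```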