Sep 4 17:11:09.213646 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Sep 4 17:11:09.213696 kernel: Linux version 6.6.48-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Wed Sep 4 15:52:28 -00 2024
Sep 4 17:11:09.213722 kernel: KASLR disabled due to lack of seed
Sep 4 17:11:09.213739 kernel: efi: EFI v2.7 by EDK II
Sep 4 17:11:09.213755 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Sep 4 17:11:09.213772 kernel: ACPI: Early table checksum verification disabled
Sep 4 17:11:09.213790 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Sep 4 17:11:09.213806 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Sep 4 17:11:09.213822 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Sep 4 17:11:09.215265 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Sep 4 17:11:09.215297 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Sep 4 17:11:09.215314 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Sep 4 17:11:09.215331 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Sep 4 17:11:09.215348 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Sep 4 17:11:09.215369 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Sep 4 17:11:09.215393 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Sep 4 17:11:09.215412 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Sep 4 17:11:09.215429 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Sep 4 17:11:09.215447 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Sep 4 17:11:09.215465 kernel: printk: bootconsole [uart0] enabled
Sep 4 17:11:09.215482 kernel: NUMA: Failed to initialise from firmware
Sep 4 17:11:09.215501 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:11:09.215519 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Sep 4 17:11:09.215538 kernel: Zone ranges:
Sep 4 17:11:09.215555 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Sep 4 17:11:09.215573 kernel: DMA32 empty
Sep 4 17:11:09.215596 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Sep 4 17:11:09.215615 kernel: Movable zone start for each node
Sep 4 17:11:09.215633 kernel: Early memory node ranges
Sep 4 17:11:09.215651 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Sep 4 17:11:09.215668 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Sep 4 17:11:09.215686 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Sep 4 17:11:09.215704 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Sep 4 17:11:09.215722 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Sep 4 17:11:09.215740 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Sep 4 17:11:09.215757 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Sep 4 17:11:09.215775 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Sep 4 17:11:09.215792 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Sep 4 17:11:09.215817 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Sep 4 17:11:09.215922 kernel: psci: probing for conduit method from ACPI.
Sep 4 17:11:09.215957 kernel: psci: PSCIv1.0 detected in firmware.
Sep 4 17:11:09.215976 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 4 17:11:09.215995 kernel: psci: Trusted OS migration not required
Sep 4 17:11:09.216019 kernel: psci: SMC Calling Convention v1.1
Sep 4 17:11:09.216037 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 4 17:11:09.216055 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 4 17:11:09.216074 kernel: pcpu-alloc: [0] 0 [0] 1
Sep 4 17:11:09.216093 kernel: Detected PIPT I-cache on CPU0
Sep 4 17:11:09.216112 kernel: CPU features: detected: GIC system register CPU interface
Sep 4 17:11:09.216131 kernel: CPU features: detected: Spectre-v2
Sep 4 17:11:09.216149 kernel: CPU features: detected: Spectre-v3a
Sep 4 17:11:09.216167 kernel: CPU features: detected: Spectre-BHB
Sep 4 17:11:09.216186 kernel: CPU features: detected: ARM erratum 1742098
Sep 4 17:11:09.216205 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Sep 4 17:11:09.216228 kernel: alternatives: applying boot alternatives
Sep 4 17:11:09.216250 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:11:09.216271 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 4 17:11:09.216292 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 4 17:11:09.216311 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 4 17:11:09.216330 kernel: Fallback order for Node 0: 0
Sep 4 17:11:09.216349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Sep 4 17:11:09.216368 kernel: Policy zone: Normal
Sep 4 17:11:09.216387 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 4 17:11:09.216405 kernel: software IO TLB: area num 2.
Sep 4 17:11:09.216424 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Sep 4 17:11:09.216450 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8076K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved)
Sep 4 17:11:09.216469 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 4 17:11:09.216487 kernel: trace event string verifier disabled
Sep 4 17:11:09.216505 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 4 17:11:09.216526 kernel: rcu: RCU event tracing is enabled.
Sep 4 17:11:09.216544 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 4 17:11:09.216562 kernel: Trampoline variant of Tasks RCU enabled.
Sep 4 17:11:09.216581 kernel: Tracing variant of Tasks RCU enabled.
Sep 4 17:11:09.216599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 4 17:11:09.216618 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 4 17:11:09.216636 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 4 17:11:09.216679 kernel: GICv3: 96 SPIs implemented
Sep 4 17:11:09.216703 kernel: GICv3: 0 Extended SPIs implemented
Sep 4 17:11:09.216721 kernel: Root IRQ handler: gic_handle_irq
Sep 4 17:11:09.216740 kernel: GICv3: GICv3 features: 16 PPIs
Sep 4 17:11:09.216758 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Sep 4 17:11:09.216776 kernel: ITS [mem 0x10080000-0x1009ffff]
Sep 4 17:11:09.216794 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 4 17:11:09.216813 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Sep 4 17:11:09.219510 kernel: GICv3: using LPI property table @0x00000004000e0000
Sep 4 17:11:09.219548 kernel: ITS: Using hypervisor restricted LPI range [128]
Sep 4 17:11:09.219568 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Sep 4 17:11:09.219590 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 4 17:11:09.219627 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Sep 4 17:11:09.219646 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Sep 4 17:11:09.219666 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Sep 4 17:11:09.219689 kernel: Console: colour dummy device 80x25
Sep 4 17:11:09.219709 kernel: printk: console [tty1] enabled
Sep 4 17:11:09.219731 kernel: ACPI: Core revision 20230628
Sep 4 17:11:09.219752 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Sep 4 17:11:09.219772 kernel: pid_max: default: 32768 minimum: 301
Sep 4 17:11:09.219792 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Sep 4 17:11:09.219819 kernel: SELinux: Initializing.
Sep 4 17:11:09.219880 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:11:09.219901 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 4 17:11:09.219920 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:11:09.219941 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Sep 4 17:11:09.219959 kernel: rcu: Hierarchical SRCU implementation.
Sep 4 17:11:09.219979 kernel: rcu: Max phase no-delay instances is 400.
Sep 4 17:11:09.219997 kernel: Platform MSI: ITS@0x10080000 domain created
Sep 4 17:11:09.220017 kernel: PCI/MSI: ITS@0x10080000 domain created
Sep 4 17:11:09.220045 kernel: Remapping and enabling EFI services.
Sep 4 17:11:09.220066 kernel: smp: Bringing up secondary CPUs ...
Sep 4 17:11:09.220085 kernel: Detected PIPT I-cache on CPU1
Sep 4 17:11:09.220103 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Sep 4 17:11:09.220123 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Sep 4 17:11:09.220141 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Sep 4 17:11:09.220160 kernel: smp: Brought up 1 node, 2 CPUs
Sep 4 17:11:09.220179 kernel: SMP: Total of 2 processors activated.
Sep 4 17:11:09.220198 kernel: CPU features: detected: 32-bit EL0 Support
Sep 4 17:11:09.220217 kernel: CPU features: detected: 32-bit EL1 Support
Sep 4 17:11:09.220243 kernel: CPU features: detected: CRC32 instructions
Sep 4 17:11:09.220262 kernel: CPU: All CPU(s) started at EL1
Sep 4 17:11:09.220295 kernel: alternatives: applying system-wide alternatives
Sep 4 17:11:09.220320 kernel: devtmpfs: initialized
Sep 4 17:11:09.220340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 4 17:11:09.220361 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 4 17:11:09.220381 kernel: pinctrl core: initialized pinctrl subsystem
Sep 4 17:11:09.220400 kernel: SMBIOS 3.0.0 present.
Sep 4 17:11:09.220420 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Sep 4 17:11:09.220446 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 4 17:11:09.220466 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 4 17:11:09.220486 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 4 17:11:09.220506 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 4 17:11:09.220527 kernel: audit: initializing netlink subsys (disabled)
Sep 4 17:11:09.220546 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1
Sep 4 17:11:09.220566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 4 17:11:09.220592 kernel: cpuidle: using governor menu
Sep 4 17:11:09.220613 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 4 17:11:09.220632 kernel: ASID allocator initialised with 65536 entries
Sep 4 17:11:09.220650 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 4 17:11:09.220697 kernel: Serial: AMBA PL011 UART driver
Sep 4 17:11:09.220722 kernel: Modules: 17600 pages in range for non-PLT usage
Sep 4 17:11:09.220743 kernel: Modules: 509120 pages in range for PLT usage
Sep 4 17:11:09.220763 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 4 17:11:09.220782 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 4 17:11:09.220809 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 4 17:11:09.220884 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 4 17:11:09.220911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 4 17:11:09.220930 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 4 17:11:09.220951 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 4 17:11:09.220970 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 4 17:11:09.220989 kernel: ACPI: Added _OSI(Module Device)
Sep 4 17:11:09.221008 kernel: ACPI: Added _OSI(Processor Device)
Sep 4 17:11:09.221028 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Sep 4 17:11:09.221056 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 4 17:11:09.221077 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 4 17:11:09.221096 kernel: ACPI: Interpreter enabled
Sep 4 17:11:09.221115 kernel: ACPI: Using GIC for interrupt routing
Sep 4 17:11:09.221135 kernel: ACPI: MCFG table detected, 1 entries
Sep 4 17:11:09.221155 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Sep 4 17:11:09.221501 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 4 17:11:09.221771 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 4 17:11:09.224193 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 4 17:11:09.224466 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Sep 4 17:11:09.224733 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Sep 4 17:11:09.224769 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Sep 4 17:11:09.224790 kernel: acpiphp: Slot [1] registered
Sep 4 17:11:09.224811 kernel: acpiphp: Slot [2] registered
Sep 4 17:11:09.224879 kernel: acpiphp: Slot [3] registered
Sep 4 17:11:09.224904 kernel: acpiphp: Slot [4] registered
Sep 4 17:11:09.224937 kernel: acpiphp: Slot [5] registered
Sep 4 17:11:09.224958 kernel: acpiphp: Slot [6] registered
Sep 4 17:11:09.224978 kernel: acpiphp: Slot [7] registered
Sep 4 17:11:09.224998 kernel: acpiphp: Slot [8] registered
Sep 4 17:11:09.225018 kernel: acpiphp: Slot [9] registered
Sep 4 17:11:09.225038 kernel: acpiphp: Slot [10] registered
Sep 4 17:11:09.225058 kernel: acpiphp: Slot [11] registered
Sep 4 17:11:09.225078 kernel: acpiphp: Slot [12] registered
Sep 4 17:11:09.225096 kernel: acpiphp: Slot [13] registered
Sep 4 17:11:09.225115 kernel: acpiphp: Slot [14] registered
Sep 4 17:11:09.225142 kernel: acpiphp: Slot [15] registered
Sep 4 17:11:09.225162 kernel: acpiphp: Slot [16] registered
Sep 4 17:11:09.225181 kernel: acpiphp: Slot [17] registered
Sep 4 17:11:09.225201 kernel: acpiphp: Slot [18] registered
Sep 4 17:11:09.225220 kernel: acpiphp: Slot [19] registered
Sep 4 17:11:09.225240 kernel: acpiphp: Slot [20] registered
Sep 4 17:11:09.225259 kernel: acpiphp: Slot [21] registered
Sep 4 17:11:09.225280 kernel: acpiphp: Slot [22] registered
Sep 4 17:11:09.225300 kernel: acpiphp: Slot [23] registered
Sep 4 17:11:09.225325 kernel: acpiphp: Slot [24] registered
Sep 4 17:11:09.225345 kernel: acpiphp: Slot [25] registered
Sep 4 17:11:09.225364 kernel: acpiphp: Slot [26] registered
Sep 4 17:11:09.225384 kernel: acpiphp: Slot [27] registered
Sep 4 17:11:09.225404 kernel: acpiphp: Slot [28] registered
Sep 4 17:11:09.225423 kernel: acpiphp: Slot [29] registered
Sep 4 17:11:09.225442 kernel: acpiphp: Slot [30] registered
Sep 4 17:11:09.225461 kernel: acpiphp: Slot [31] registered
Sep 4 17:11:09.225480 kernel: PCI host bridge to bus 0000:00
Sep 4 17:11:09.225767 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Sep 4 17:11:09.227135 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 4 17:11:09.227373 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:11:09.227597 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Sep 4 17:11:09.229111 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Sep 4 17:11:09.229430 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Sep 4 17:11:09.229684 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Sep 4 17:11:09.232163 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Sep 4 17:11:09.232454 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Sep 4 17:11:09.232733 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:11:09.233069 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Sep 4 17:11:09.233321 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Sep 4 17:11:09.233559 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Sep 4 17:11:09.233802 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Sep 4 17:11:09.236193 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Sep 4 17:11:09.236441 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Sep 4 17:11:09.236699 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Sep 4 17:11:09.238116 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Sep 4 17:11:09.238382 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Sep 4 17:11:09.238619 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Sep 4 17:11:09.239899 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Sep 4 17:11:09.240233 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 4 17:11:09.240447 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Sep 4 17:11:09.240480 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 4 17:11:09.240502 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 4 17:11:09.240523 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 4 17:11:09.240544 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 4 17:11:09.240563 kernel: iommu: Default domain type: Translated
Sep 4 17:11:09.240583 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 4 17:11:09.240613 kernel: efivars: Registered efivars operations
Sep 4 17:11:09.240632 kernel: vgaarb: loaded
Sep 4 17:11:09.240652 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 4 17:11:09.240728 kernel: VFS: Disk quotas dquot_6.6.0
Sep 4 17:11:09.240753 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 4 17:11:09.240774 kernel: pnp: PnP ACPI init
Sep 4 17:11:09.246164 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Sep 4 17:11:09.246220 kernel: pnp: PnP ACPI: found 1 devices
Sep 4 17:11:09.246258 kernel: NET: Registered PF_INET protocol family
Sep 4 17:11:09.246278 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 4 17:11:09.246298 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 4 17:11:09.246318 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 4 17:11:09.246337 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 4 17:11:09.246357 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 4 17:11:09.246376 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 4 17:11:09.246395 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:11:09.246414 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 4 17:11:09.246439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 4 17:11:09.246458 kernel: PCI: CLS 0 bytes, default 64
Sep 4 17:11:09.246477 kernel: kvm [1]: HYP mode not available
Sep 4 17:11:09.246496 kernel: Initialise system trusted keyrings
Sep 4 17:11:09.246516 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 4 17:11:09.246536 kernel: Key type asymmetric registered
Sep 4 17:11:09.246556 kernel: Asymmetric key parser 'x509' registered
Sep 4 17:11:09.246575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 4 17:11:09.246596 kernel: io scheduler mq-deadline registered
Sep 4 17:11:09.246623 kernel: io scheduler kyber registered
Sep 4 17:11:09.246643 kernel: io scheduler bfq registered
Sep 4 17:11:09.248383 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Sep 4 17:11:09.248430 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 4 17:11:09.248451 kernel: ACPI: button: Power Button [PWRB]
Sep 4 17:11:09.248471 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Sep 4 17:11:09.248490 kernel: ACPI: button: Sleep Button [SLPB]
Sep 4 17:11:09.248510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 4 17:11:09.248545 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Sep 4 17:11:09.248999 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Sep 4 17:11:09.249045 kernel: printk: console [ttyS0] disabled
Sep 4 17:11:09.249066 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Sep 4 17:11:09.249086 kernel: printk: console [ttyS0] enabled
Sep 4 17:11:09.249106 kernel: printk: bootconsole [uart0] disabled
Sep 4 17:11:09.249125 kernel: thunder_xcv, ver 1.0
Sep 4 17:11:09.249145 kernel: thunder_bgx, ver 1.0
Sep 4 17:11:09.249164 kernel: nicpf, ver 1.0
Sep 4 17:11:09.249197 kernel: nicvf, ver 1.0
Sep 4 17:11:09.249484 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 4 17:11:09.249723 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-09-04T17:11:08 UTC (1725469868)
Sep 4 17:11:09.249756 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 4 17:11:09.249778 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Sep 4 17:11:09.249799 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 4 17:11:09.249819 kernel: watchdog: Hard watchdog permanently disabled
Sep 4 17:11:09.249879 kernel: NET: Registered PF_INET6 protocol family
Sep 4 17:11:09.249915 kernel: Segment Routing with IPv6
Sep 4 17:11:09.249936 kernel: In-situ OAM (IOAM) with IPv6
Sep 4 17:11:09.249956 kernel: NET: Registered PF_PACKET protocol family
Sep 4 17:11:09.249975 kernel: Key type dns_resolver registered
Sep 4 17:11:09.249996 kernel: registered taskstats version 1
Sep 4 17:11:09.250016 kernel: Loading compiled-in X.509 certificates
Sep 4 17:11:09.250036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.48-flatcar: 1f5b9f288f9cae6ec9698678cdc0f614482066f7'
Sep 4 17:11:09.250056 kernel: Key type .fscrypt registered
Sep 4 17:11:09.250075 kernel: Key type fscrypt-provisioning registered
Sep 4 17:11:09.250094 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 4 17:11:09.250122 kernel: ima: Allocated hash algorithm: sha1
Sep 4 17:11:09.250142 kernel: ima: No architecture policies found
Sep 4 17:11:09.250164 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 4 17:11:09.250183 kernel: clk: Disabling unused clocks
Sep 4 17:11:09.250202 kernel: Freeing unused kernel memory: 39040K
Sep 4 17:11:09.250221 kernel: Run /init as init process
Sep 4 17:11:09.250241 kernel: with arguments:
Sep 4 17:11:09.250260 kernel: /init
Sep 4 17:11:09.250279 kernel: with environment:
Sep 4 17:11:09.250308 kernel: HOME=/
Sep 4 17:11:09.250327 kernel: TERM=linux
Sep 4 17:11:09.250346 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 4 17:11:09.250371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:11:09.250396 systemd[1]: Detected virtualization amazon.
Sep 4 17:11:09.250417 systemd[1]: Detected architecture arm64.
Sep 4 17:11:09.250438 systemd[1]: Running in initrd.
Sep 4 17:11:09.250465 systemd[1]: No hostname configured, using default hostname.
Sep 4 17:11:09.250488 systemd[1]: Hostname set to .
Sep 4 17:11:09.250511 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:11:09.250532 systemd[1]: Queued start job for default target initrd.target.
Sep 4 17:11:09.250553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:11:09.250575 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:11:09.250599 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 4 17:11:09.250621 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:11:09.250649 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 4 17:11:09.250671 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 4 17:11:09.250696 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 4 17:11:09.250717 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 4 17:11:09.250738 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:11:09.250759 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:09.250780 systemd[1]: Reached target paths.target - Path Units.
Sep 4 17:11:09.250808 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:11:09.250863 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:11:09.250893 systemd[1]: Reached target timers.target - Timer Units.
Sep 4 17:11:09.250914 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:11:09.250935 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:11:09.250957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 4 17:11:09.250978 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 4 17:11:09.250999 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:11:09.251021 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:11:09.251053 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:11:09.251074 systemd[1]: Reached target sockets.target - Socket Units.
Sep 4 17:11:09.251095 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 4 17:11:09.251117 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:11:09.251139 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 4 17:11:09.251159 systemd[1]: Starting systemd-fsck-usr.service...
Sep 4 17:11:09.251180 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:11:09.251202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:11:09.251232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:09.251254 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 4 17:11:09.251275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:11:09.251295 systemd[1]: Finished systemd-fsck-usr.service.
Sep 4 17:11:09.251319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 4 17:11:09.251412 systemd-journald[250]: Collecting audit messages is disabled.
Sep 4 17:11:09.251468 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 4 17:11:09.251491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 4 17:11:09.251520 systemd-journald[250]: Journal started
Sep 4 17:11:09.251561 systemd-journald[250]: Runtime Journal (/run/log/journal/ec269c156cad257356c44431ba3fd363) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:11:09.208698 systemd-modules-load[251]: Inserted module 'overlay'
Sep 4 17:11:09.258952 kernel: Bridge firewalling registered
Sep 4 17:11:09.255721 systemd-modules-load[251]: Inserted module 'br_netfilter'
Sep 4 17:11:09.265239 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:11:09.269276 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:11:09.275997 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:11:09.281139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:09.302352 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:09.308588 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:11:09.327262 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:11:09.334914 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:11:09.353769 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:11:09.375351 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:11:09.380982 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:09.396130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 4 17:11:09.407154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:09.437622 dracut-cmdline[286]: dracut-dracut-053
Sep 4 17:11:09.450502 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7913866621ae0af53522ae1b4ff4e1e453dd69d966d437a439147039341ecbbc
Sep 4 17:11:09.499502 systemd-resolved[287]: Positive Trust Anchors:
Sep 4 17:11:09.499536 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:11:09.499596 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:11:09.664251 kernel: SCSI subsystem initialized
Sep 4 17:11:09.671951 kernel: Loading iSCSI transport class v2.0-870.
Sep 4 17:11:09.684948 kernel: iscsi: registered transport (tcp)
Sep 4 17:11:09.708607 kernel: iscsi: registered transport (qla4xxx)
Sep 4 17:11:09.708697 kernel: QLogic iSCSI HBA Driver
Sep 4 17:11:09.759159 kernel: random: crng init done
Sep 4 17:11:09.759550 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 4 17:11:09.763505 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:09.767356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:09.810046 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:11:09.820193 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 4 17:11:09.864338 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 4 17:11:09.864431 kernel: device-mapper: uevent: version 1.0.3
Sep 4 17:11:09.864459 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 4 17:11:09.934908 kernel: raid6: neonx8 gen() 6729 MB/s
Sep 4 17:11:09.951881 kernel: raid6: neonx4 gen() 6517 MB/s
Sep 4 17:11:09.968877 kernel: raid6: neonx2 gen() 5449 MB/s
Sep 4 17:11:09.985883 kernel: raid6: neonx1 gen() 3928 MB/s
Sep 4 17:11:10.002884 kernel: raid6: int64x8 gen() 3818 MB/s
Sep 4 17:11:10.019894 kernel: raid6: int64x4 gen() 3688 MB/s
Sep 4 17:11:10.036878 kernel: raid6: int64x2 gen() 3565 MB/s
Sep 4 17:11:10.054874 kernel: raid6: int64x1 gen() 2753 MB/s
Sep 4 17:11:10.054945 kernel: raid6: using algorithm neonx8 gen() 6729 MB/s
Sep 4 17:11:10.073883 kernel: raid6: .... xor() 4885 MB/s, rmw enabled
Sep 4 17:11:10.073956 kernel: raid6: using neon recovery algorithm
Sep 4 17:11:10.081880 kernel: xor: measuring software checksum speed
Sep 4 17:11:10.083887 kernel: 8regs : 11094 MB/sec
Sep 4 17:11:10.085886 kernel: 32regs : 11943 MB/sec
Sep 4 17:11:10.088223 kernel: arm64_neon : 9543 MB/sec
Sep 4 17:11:10.088292 kernel: xor: using function: 32regs (11943 MB/sec)
Sep 4 17:11:10.177896 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 4 17:11:10.199094 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:11:10.207184 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:11:10.248442 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Sep 4 17:11:10.257637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:10.269153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 4 17:11:10.305257 dracut-pre-trigger[471]: rd.md=0: removing MD RAID activation
Sep 4 17:11:10.364291 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:11:10.375169 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:11:10.504577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:11:10.519547 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 4 17:11:10.569991 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:11:10.578663 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:11:10.583152 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:11:10.585396 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:11:10.598503 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 4 17:11:10.643253 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:11:10.709870 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 4 17:11:10.709949 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Sep 4 17:11:10.721423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:11:10.721992 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:10.732795 kernel: ena 0000:00:05.0: ENA device version: 0.10
Sep 4 17:11:10.733135 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Sep 4 17:11:10.728542 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:10.728957 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:11:10.731687 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:10.747792 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:8f:ca:03:3d:19
Sep 4 17:11:10.738567 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:10.751327 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:10.760922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:10.791596 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Sep 4 17:11:10.791671 kernel: nvme nvme0: pci function 0000:00:04.0
Sep 4 17:11:10.802884 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Sep 4 17:11:10.807894 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:10.815592 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 4 17:11:10.815642 kernel: GPT:9289727 != 16777215
Sep 4 17:11:10.815668 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 4 17:11:10.815693 kernel: GPT:9289727 != 16777215
Sep 4 17:11:10.815718 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 4 17:11:10.819867 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:11:10.824114 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 4 17:11:10.856552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:10.943882 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (518)
Sep 4 17:11:10.951263 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Sep 4 17:11:10.972317 kernel: BTRFS: device fsid 2be47701-3393-455e-86fc-33755ceb9c20 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (542)
Sep 4 17:11:11.066670 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Sep 4 17:11:11.093232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Sep 4 17:11:11.096103 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Sep 4 17:11:11.113994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:11:11.132076 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 4 17:11:11.146782 disk-uuid[659]: Primary Header is updated.
Sep 4 17:11:11.146782 disk-uuid[659]: Secondary Entries is updated.
Sep 4 17:11:11.146782 disk-uuid[659]: Secondary Header is updated.
Sep 4 17:11:11.156875 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:11:11.164875 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:11:11.171883 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:11:12.180861 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Sep 4 17:11:12.182501 disk-uuid[660]: The operation has completed successfully.
Sep 4 17:11:12.361456 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 4 17:11:12.361712 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 4 17:11:12.439109 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 4 17:11:12.448002 sh[1003]: Success
Sep 4 17:11:12.476892 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 4 17:11:12.580254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 4 17:11:12.593073 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 4 17:11:12.609164 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 4 17:11:12.632562 kernel: BTRFS info (device dm-0): first mount of filesystem 2be47701-3393-455e-86fc-33755ceb9c20
Sep 4 17:11:12.632628 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:12.632674 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 4 17:11:12.635599 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 4 17:11:12.635666 kernel: BTRFS info (device dm-0): using free space tree
Sep 4 17:11:12.709879 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Sep 4 17:11:12.763081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 4 17:11:12.769546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 4 17:11:12.784157 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 4 17:11:12.792086 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 4 17:11:12.824504 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:11:12.824604 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:12.826479 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:11:12.831877 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:11:12.853438 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 4 17:11:12.855867 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:11:12.882272 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 4 17:11:12.892268 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 4 17:11:13.015731 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:11:13.035225 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:11:13.089356 systemd-networkd[1197]: lo: Link UP
Sep 4 17:11:13.089373 systemd-networkd[1197]: lo: Gained carrier
Sep 4 17:11:13.094686 systemd-networkd[1197]: Enumeration completed
Sep 4 17:11:13.095367 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:11:13.095757 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:13.095764 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:11:13.099007 systemd[1]: Reached target network.target - Network.
Sep 4 17:11:13.111216 systemd-networkd[1197]: eth0: Link UP
Sep 4 17:11:13.111231 systemd-networkd[1197]: eth0: Gained carrier
Sep 4 17:11:13.111251 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:13.140005 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.31.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:11:13.295721 ignition[1112]: Ignition 2.18.0
Sep 4 17:11:13.295742 ignition[1112]: Stage: fetch-offline
Sep 4 17:11:13.296410 ignition[1112]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:13.296439 ignition[1112]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:13.298202 ignition[1112]: Ignition finished successfully
Sep 4 17:11:13.307963 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:11:13.318189 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 4 17:11:13.349815 ignition[1208]: Ignition 2.18.0
Sep 4 17:11:13.349898 ignition[1208]: Stage: fetch
Sep 4 17:11:13.351640 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:13.351668 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:13.352191 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:13.363136 ignition[1208]: PUT result: OK
Sep 4 17:11:13.366220 ignition[1208]: parsed url from cmdline: ""
Sep 4 17:11:13.366243 ignition[1208]: no config URL provided
Sep 4 17:11:13.366260 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Sep 4 17:11:13.366289 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Sep 4 17:11:13.366324 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:13.368361 ignition[1208]: PUT result: OK
Sep 4 17:11:13.370460 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Sep 4 17:11:13.377936 ignition[1208]: GET result: OK
Sep 4 17:11:13.379390 ignition[1208]: parsing config with SHA512: 8844cac5298c0e4f065cdbf852c5879329856882ef7d80df956b3d59ad1d8196b13db8752283b998c83786c2c9d376dfe9c29124051a1139d35d9c45d8fb787d
Sep 4 17:11:13.389678 unknown[1208]: fetched base config from "system"
Sep 4 17:11:13.390071 unknown[1208]: fetched base config from "system"
Sep 4 17:11:13.391077 ignition[1208]: fetch: fetch complete
Sep 4 17:11:13.390088 unknown[1208]: fetched user config from "aws"
Sep 4 17:11:13.391093 ignition[1208]: fetch: fetch passed
Sep 4 17:11:13.391217 ignition[1208]: Ignition finished successfully
Sep 4 17:11:13.403985 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 4 17:11:13.426282 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 4 17:11:13.454196 ignition[1215]: Ignition 2.18.0
Sep 4 17:11:13.454950 ignition[1215]: Stage: kargs
Sep 4 17:11:13.455764 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:13.455799 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:13.456033 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:13.459034 ignition[1215]: PUT result: OK
Sep 4 17:11:13.470774 ignition[1215]: kargs: kargs passed
Sep 4 17:11:13.471587 ignition[1215]: Ignition finished successfully
Sep 4 17:11:13.475029 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 4 17:11:13.486165 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 4 17:11:13.520514 ignition[1222]: Ignition 2.18.0
Sep 4 17:11:13.520556 ignition[1222]: Stage: disks
Sep 4 17:11:13.522430 ignition[1222]: no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:13.522461 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:13.522619 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:13.524616 ignition[1222]: PUT result: OK
Sep 4 17:11:13.535591 ignition[1222]: disks: disks passed
Sep 4 17:11:13.537282 ignition[1222]: Ignition finished successfully
Sep 4 17:11:13.541964 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 4 17:11:13.546775 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 4 17:11:13.549464 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 4 17:11:13.553551 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:11:13.555902 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:11:13.557948 systemd[1]: Reached target basic.target - Basic System.
Sep 4 17:11:13.567234 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 4 17:11:13.632253 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 4 17:11:13.644788 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 4 17:11:13.659195 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 4 17:11:13.756884 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f2f4f3ba-c5a3-49c0-ace4-444935e9934b r/w with ordered data mode. Quota mode: none.
Sep 4 17:11:13.758658 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 4 17:11:13.761816 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:11:13.785007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:11:13.796956 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 4 17:11:13.800026 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 4 17:11:13.800769 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 4 17:11:13.800975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:11:13.832871 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1250)
Sep 4 17:11:13.836528 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:11:13.836676 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:13.838483 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:11:13.845452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 4 17:11:13.850945 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:11:13.860669 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 4 17:11:13.870037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:11:14.207311 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory
Sep 4 17:11:14.228323 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory
Sep 4 17:11:14.238538 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory
Sep 4 17:11:14.248277 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 4 17:11:14.309057 systemd-networkd[1197]: eth0: Gained IPv6LL
Sep 4 17:11:14.555920 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 4 17:11:14.568168 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 4 17:11:14.578265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 4 17:11:14.598215 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 4 17:11:14.600315 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:11:14.631214 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 4 17:11:14.644898 ignition[1364]: INFO : Ignition 2.18.0
Sep 4 17:11:14.644898 ignition[1364]: INFO : Stage: mount
Sep 4 17:11:14.648879 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:14.648879 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:14.648879 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:14.656134 ignition[1364]: INFO : PUT result: OK
Sep 4 17:11:14.663705 ignition[1364]: INFO : mount: mount passed
Sep 4 17:11:14.666004 ignition[1364]: INFO : Ignition finished successfully
Sep 4 17:11:14.671895 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 4 17:11:14.691196 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 4 17:11:14.767330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 4 17:11:14.798454 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1375)
Sep 4 17:11:14.798520 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 26eaee0d-fa47-45db-8665-f2efa4a46ac0
Sep 4 17:11:14.798547 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Sep 4 17:11:14.801089 kernel: BTRFS info (device nvme0n1p6): using free space tree
Sep 4 17:11:14.806273 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Sep 4 17:11:14.808897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 4 17:11:14.843893 ignition[1392]: INFO : Ignition 2.18.0
Sep 4 17:11:14.843893 ignition[1392]: INFO : Stage: files
Sep 4 17:11:14.847240 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:14.847240 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:14.851556 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:14.854003 ignition[1392]: INFO : PUT result: OK
Sep 4 17:11:14.858770 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Sep 4 17:11:14.861692 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 4 17:11:14.861692 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 4 17:11:14.880917 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 4 17:11:14.883945 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 4 17:11:14.887038 unknown[1392]: wrote ssh authorized keys file for user: core
Sep 4 17:11:14.889438 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 4 17:11:14.901449 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:11:14.905509 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 4 17:11:14.958313 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 4 17:11:15.041970 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 4 17:11:15.041970 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:11:15.049656 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1
Sep 4 17:11:15.518497 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 4 17:11:15.946615 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw"
Sep 4 17:11:15.946615 ignition[1392]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 4 17:11:15.956306 ignition[1392]: INFO : files: files passed
Sep 4 17:11:15.956306 ignition[1392]: INFO : Ignition finished successfully
Sep 4 17:11:15.962590 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 4 17:11:15.990112 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 4 17:11:16.003284 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 4 17:11:16.009729 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 4 17:11:16.010037 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 4 17:11:16.050371 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:16.050371 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:16.058511 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 4 17:11:16.065630 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:11:16.074041 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 4 17:11:16.086272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 4 17:11:16.166427 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 4 17:11:16.168502 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 4 17:11:16.172342 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 4 17:11:16.176071 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 4 17:11:16.178262 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 4 17:11:16.192452 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 4 17:11:16.234087 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:11:16.247288 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 4 17:11:16.280344 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:16.285757 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:11:16.289182 systemd[1]: Stopped target timers.target - Timer Units.
Sep 4 17:11:16.294166 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 4 17:11:16.295631 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 4 17:11:16.304099 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 4 17:11:16.307544 systemd[1]: Stopped target basic.target - Basic System.
Sep 4 17:11:16.313603 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 4 17:11:16.316761 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 4 17:11:16.321683 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 4 17:11:16.328792 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 4 17:11:16.331265 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 4 17:11:16.336611 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 4 17:11:16.340344 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 4 17:11:16.342622 systemd[1]: Stopped target swap.target - Swaps.
Sep 4 17:11:16.344618 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 4 17:11:16.345759 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 4 17:11:16.348794 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:16.351570 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:11:16.355744 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 4 17:11:16.361297 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:11:16.370421 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 4 17:11:16.370705 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 4 17:11:16.373600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 4 17:11:16.373972 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 4 17:11:16.377237 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 4 17:11:16.377523 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 4 17:11:16.394852 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 4 17:11:16.404133 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 4 17:11:16.405885 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:11:16.423249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 4 17:11:16.426035 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 4 17:11:16.426363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:11:16.434036 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 4 17:11:16.436590 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 4 17:11:16.458331 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 4 17:11:16.458555 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 4 17:11:16.470897 ignition[1445]: INFO : Ignition 2.18.0
Sep 4 17:11:16.470897 ignition[1445]: INFO : Stage: umount
Sep 4 17:11:16.474536 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 4 17:11:16.476783 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Sep 4 17:11:16.476783 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Sep 4 17:11:16.483194 ignition[1445]: INFO : PUT result: OK
Sep 4 17:11:16.491390 ignition[1445]: INFO : umount: umount passed
Sep 4 17:11:16.495106 ignition[1445]: INFO : Ignition finished successfully
Sep 4 17:11:16.501525 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 4 17:11:16.501806 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 4 17:11:16.515077 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 4 17:11:16.518296 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 4 17:11:16.519968 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 4 17:11:16.524633 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 4 17:11:16.526332 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 4 17:11:16.531092 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 4 17:11:16.531225 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 4 17:11:16.535292 systemd[1]: Stopped target network.target - Network.
Sep 4 17:11:16.541336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 4 17:11:16.541482 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 4 17:11:16.543951 systemd[1]: Stopped target paths.target - Path Units.
Sep 4 17:11:16.545699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 4 17:11:16.554281 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:11:16.557515 systemd[1]: Stopped target slices.target - Slice Units.
Sep 4 17:11:16.559479 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 4 17:11:16.561797 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 4 17:11:16.561929 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 4 17:11:16.566420 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 4 17:11:16.567047 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 4 17:11:16.570391 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 4 17:11:16.570511 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 4 17:11:16.578246 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 4 17:11:16.578357 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 4 17:11:16.581155 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 4 17:11:16.584978 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:16.591887 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Sep 4 17:11:16.613578 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 4 17:11:16.613801 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 4 17:11:16.622822 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 4 17:11:16.623095 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:16.635550 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 4 17:11:16.635932 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 4 17:11:16.647301 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 4 17:11:16.647452 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:11:16.655526 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 4 17:11:16.655656 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 4 17:11:16.673654 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 4 17:11:16.678940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 4 17:11:16.679089 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 4 17:11:16.684537 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 4 17:11:16.684680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:11:16.695985 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 4 17:11:16.696126 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:11:16.698802 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 4 17:11:16.698990 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:11:16.710588 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:11:16.737572 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 4 17:11:16.741227 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:16.748455 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 4 17:11:16.748634 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:11:16.754617 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 4 17:11:16.754751 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:11:16.769166 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 4 17:11:16.769282 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 4 17:11:16.779461 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 4 17:11:16.779596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 4 17:11:16.782533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 4 17:11:16.782658 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 4 17:11:16.799192 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 4 17:11:16.804140 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 4 17:11:16.804279 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:11:16.823076 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 4 17:11:16.823216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:16.829503 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 4 17:11:16.829999 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 4 17:11:16.840171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 4 17:11:16.840387 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 4 17:11:16.844399 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 4 17:11:16.857240 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 4 17:11:16.889126 systemd[1]: Switching root.
Sep 4 17:11:16.923946 systemd-journald[250]: Journal stopped
Sep 4 17:11:19.704734 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Sep 4 17:11:19.704936 kernel: SELinux: policy capability network_peer_controls=1
Sep 4 17:11:19.704990 kernel: SELinux: policy capability open_perms=1
Sep 4 17:11:19.705024 kernel: SELinux: policy capability extended_socket_class=1
Sep 4 17:11:19.705057 kernel: SELinux: policy capability always_check_network=0
Sep 4 17:11:19.705100 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 4 17:11:19.705140 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 4 17:11:19.705172 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 4 17:11:19.705201 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 4 17:11:19.705232 kernel: audit: type=1403 audit(1725469877.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 4 17:11:19.705282 systemd[1]: Successfully loaded SELinux policy in 59.846ms.
Sep 4 17:11:19.705335 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.945ms.
Sep 4 17:11:19.705372 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 4 17:11:19.705407 systemd[1]: Detected virtualization amazon.
Sep 4 17:11:19.705440 systemd[1]: Detected architecture arm64.
Sep 4 17:11:19.705479 systemd[1]: Detected first boot.
Sep 4 17:11:19.705518 systemd[1]: Initializing machine ID from VM UUID.
Sep 4 17:11:19.705554 zram_generator::config[1488]: No configuration found.
Sep 4 17:11:19.705596 systemd[1]: Populated /etc with preset unit settings.
Sep 4 17:11:19.705626 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 4 17:11:19.705661 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 4 17:11:19.705698 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:11:19.705732 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 4 17:11:19.705772 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 4 17:11:19.705808 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 4 17:11:19.705919 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 4 17:11:19.705961 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 4 17:11:19.705995 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 4 17:11:19.706030 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 4 17:11:19.706065 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 4 17:11:19.706098 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 4 17:11:19.706143 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 4 17:11:19.706180 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 4 17:11:19.706214 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 4 17:11:19.706244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 4 17:11:19.706278 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 4 17:11:19.706312 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 4 17:11:19.706355 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 4 17:11:19.706386 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 4 17:11:19.706419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 4 17:11:19.706457 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 4 17:11:19.706490 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 4 17:11:19.706520 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 4 17:11:19.706551 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 4 17:11:19.706580 systemd[1]: Reached target slices.target - Slice Units.
Sep 4 17:11:19.706610 systemd[1]: Reached target swap.target - Swaps.
Sep 4 17:11:19.706639 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 4 17:11:19.706670 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 4 17:11:19.706705 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 4 17:11:19.706735 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 4 17:11:19.706765 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 4 17:11:19.706797 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 4 17:11:19.706869 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 4 17:11:19.706906 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 4 17:11:19.706939 systemd[1]: Mounting media.mount - External Media Directory...
Sep 4 17:11:19.706969 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 4 17:11:19.707001 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 4 17:11:19.707039 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 4 17:11:19.707078 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 4 17:11:19.707110 systemd[1]: Reached target machines.target - Containers.
Sep 4 17:11:19.707143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 4 17:11:19.707174 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:11:19.707207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 4 17:11:19.707238 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 4 17:11:19.707268 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:11:19.707319 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:11:19.707355 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:11:19.707386 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 4 17:11:19.707415 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:11:19.707446 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 4 17:11:19.707478 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 4 17:11:19.707517 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 4 17:11:19.707554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 4 17:11:19.707591 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 4 17:11:19.707632 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 4 17:11:19.707668 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 4 17:11:19.707698 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 4 17:11:19.707727 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 4 17:11:19.707759 kernel: fuse: init (API version 7.39)
Sep 4 17:11:19.707792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 4 17:11:19.707867 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 4 17:11:19.707911 systemd[1]: Stopped verity-setup.service.
Sep 4 17:11:19.707943 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 4 17:11:19.707980 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 4 17:11:19.708009 systemd[1]: Mounted media.mount - External Media Directory.
Sep 4 17:11:19.708039 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 4 17:11:19.708070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 4 17:11:19.708104 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 4 17:11:19.708173 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 4 17:11:19.708208 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 4 17:11:19.708243 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 4 17:11:19.708274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:11:19.708304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:11:19.708333 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:11:19.708363 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:11:19.708392 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 4 17:11:19.708428 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 4 17:11:19.708467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 4 17:11:19.708498 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 4 17:11:19.708528 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 4 17:11:19.708557 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 4 17:11:19.708591 kernel: loop: module loaded
Sep 4 17:11:19.708626 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 4 17:11:19.708657 kernel: ACPI: bus type drm_connector registered
Sep 4 17:11:19.708711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:11:19.708743 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:11:19.708772 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:11:19.708804 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:11:19.708869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 4 17:11:19.708907 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 4 17:11:19.708943 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 4 17:11:19.708974 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 4 17:11:19.709054 systemd-journald[1569]: Collecting audit messages is disabled.
Sep 4 17:11:19.709106 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 4 17:11:19.709138 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 4 17:11:19.709170 systemd-journald[1569]: Journal started
Sep 4 17:11:19.709222 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec269c156cad257356c44431ba3fd363) is 8.0M, max 75.3M, 67.3M free.
Sep 4 17:11:18.961366 systemd[1]: Queued start job for default target multi-user.target.
Sep 4 17:11:18.992411 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Sep 4 17:11:18.993484 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 4 17:11:19.734601 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 4 17:11:19.753649 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 4 17:11:19.756898 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:11:19.771870 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 4 17:11:19.771975 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:11:19.800435 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 4 17:11:19.800586 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:11:19.825481 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 4 17:11:19.841925 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 4 17:11:19.852728 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 4 17:11:19.861173 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 4 17:11:19.868957 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 4 17:11:19.875121 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 4 17:11:19.925417 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 4 17:11:19.937973 kernel: loop0: detected capacity change from 0 to 194512
Sep 4 17:11:19.945779 kernel: block loop0: the capability attribute has been deprecated.
Sep 4 17:11:19.946679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 4 17:11:19.958264 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 4 17:11:19.972233 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 4 17:11:19.986949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 4 17:11:20.008875 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 4 17:11:20.023582 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 4 17:11:20.035023 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec269c156cad257356c44431ba3fd363 is 131.807ms for 914 entries.
Sep 4 17:11:20.035023 systemd-journald[1569]: System Journal (/var/log/journal/ec269c156cad257356c44431ba3fd363) is 8.0M, max 195.6M, 187.6M free.
Sep 4 17:11:20.195478 systemd-journald[1569]: Received client request to flush runtime journal.
Sep 4 17:11:20.195590 kernel: loop1: detected capacity change from 0 to 59688
Sep 4 17:11:20.047310 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 4 17:11:20.114520 udevadm[1629]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 4 17:11:20.183569 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 4 17:11:20.205929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 4 17:11:20.241094 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 4 17:11:20.254199 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 4 17:11:20.257806 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 4 17:11:20.284900 kernel: loop2: detected capacity change from 0 to 113672
Sep 4 17:11:20.317325 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Sep 4 17:11:20.318035 systemd-tmpfiles[1634]: ACLs are not supported, ignoring.
Sep 4 17:11:20.328975 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 4 17:11:20.370871 kernel: loop3: detected capacity change from 0 to 51896
Sep 4 17:11:20.462886 kernel: loop4: detected capacity change from 0 to 194512
Sep 4 17:11:20.488751 kernel: loop5: detected capacity change from 0 to 59688
Sep 4 17:11:20.512607 kernel: loop6: detected capacity change from 0 to 113672
Sep 4 17:11:20.527249 kernel: loop7: detected capacity change from 0 to 51896
Sep 4 17:11:20.539909 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Sep 4 17:11:20.541860 (sd-merge)[1643]: Merged extensions into '/usr'.
Sep 4 17:11:20.554921 systemd[1]: Reloading requested from client PID 1598 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 4 17:11:20.555502 systemd[1]: Reloading...
Sep 4 17:11:20.730877 zram_generator::config[1665]: No configuration found.
Sep 4 17:11:21.108215 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:11:21.231988 systemd[1]: Reloading finished in 675 ms.
Sep 4 17:11:21.272120 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 4 17:11:21.275818 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 4 17:11:21.290159 systemd[1]: Starting ensure-sysext.service...
Sep 4 17:11:21.302221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Sep 4 17:11:21.310211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 4 17:11:21.333618 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit ensure-sysext.service)...
Sep 4 17:11:21.333663 systemd[1]: Reloading...
Sep 4 17:11:21.358315 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 4 17:11:21.361754 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 4 17:11:21.367341 systemd-tmpfiles[1720]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 4 17:11:21.370171 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Sep 4 17:11:21.370352 systemd-tmpfiles[1720]: ACLs are not supported, ignoring.
Sep 4 17:11:21.386733 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:11:21.386762 systemd-tmpfiles[1720]: Skipping /boot
Sep 4 17:11:21.418544 systemd-udevd[1721]: Using default interface naming scheme 'v255'.
Sep 4 17:11:21.447024 systemd-tmpfiles[1720]: Detected autofs mount point /boot during canonicalization of boot.
Sep 4 17:11:21.447045 systemd-tmpfiles[1720]: Skipping /boot
Sep 4 17:11:21.544821 ldconfig[1594]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 4 17:11:21.555906 zram_generator::config[1743]: No configuration found.
Sep 4 17:11:21.696551 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1758)
Sep 4 17:11:21.795554 (udev-worker)[1755]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:22.018523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 4 17:11:22.052888 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1756)
Sep 4 17:11:22.180500 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 4 17:11:22.182087 systemd[1]: Reloading finished in 847 ms.
Sep 4 17:11:22.208400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 4 17:11:22.211883 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 4 17:11:22.224138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Sep 4 17:11:22.314970 systemd[1]: Finished ensure-sysext.service.
Sep 4 17:11:22.336661 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 4 17:11:22.350144 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Sep 4 17:11:22.359286 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:22.376335 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 4 17:11:22.380341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 4 17:11:22.384050 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 4 17:11:22.393310 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 4 17:11:22.406587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 4 17:11:22.422545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 4 17:11:22.430275 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 4 17:11:22.433192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 4 17:11:22.441140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 4 17:11:22.451734 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 4 17:11:22.459772 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 4 17:11:22.468372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 4 17:11:22.471254 systemd[1]: Reached target time-set.target - System Time Set.
Sep 4 17:11:22.477447 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 4 17:11:22.484511 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 4 17:11:22.489155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 4 17:11:22.490998 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 4 17:11:22.494056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 4 17:11:22.494422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 4 17:11:22.500950 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 4 17:11:22.512677 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 4 17:11:22.514150 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 4 17:11:22.516326 lvm[1918]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:11:22.565523 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 4 17:11:22.566560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 4 17:11:22.570023 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 4 17:11:22.576317 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 4 17:11:22.627933 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 4 17:11:22.638107 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 4 17:11:22.680741 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 4 17:11:22.681696 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 4 17:11:22.697361 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 4 17:11:22.707787 augenrules[1956]: No rules
Sep 4 17:11:22.719339 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:22.727946 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 4 17:11:22.729120 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 4 17:11:22.743224 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 4 17:11:22.743668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 4 17:11:22.748909 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 4 17:11:22.780950 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 4 17:11:22.810976 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 4 17:11:22.818967 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 4 17:11:22.822697 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 4 17:11:22.915647 systemd-networkd[1931]: lo: Link UP
Sep 4 17:11:22.915674 systemd-networkd[1931]: lo: Gained carrier
Sep 4 17:11:22.918963 systemd-networkd[1931]: Enumeration completed
Sep 4 17:11:22.919188 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 4 17:11:22.922504 systemd-networkd[1931]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:22.922529 systemd-networkd[1931]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 4 17:11:22.927388 systemd-networkd[1931]: eth0: Link UP
Sep 4 17:11:22.927742 systemd-networkd[1931]: eth0: Gained carrier
Sep 4 17:11:22.927776 systemd-networkd[1931]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 4 17:11:22.934889 systemd-resolved[1932]: Positive Trust Anchors:
Sep 4 17:11:22.935297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 4 17:11:22.938112 systemd-resolved[1932]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 4 17:11:22.938188 systemd-resolved[1932]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Sep 4 17:11:22.945134 systemd-networkd[1931]: eth0: DHCPv4 address 172.31.31.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Sep 4 17:11:22.947467 systemd-resolved[1932]: Defaulting to hostname 'linux'.
Sep 4 17:11:22.951774 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 4 17:11:22.955067 systemd[1]: Reached target network.target - Network.
Sep 4 17:11:22.957005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 4 17:11:22.959807 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 4 17:11:22.962282 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 4 17:11:22.964914 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 4 17:11:22.967915 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 4 17:11:22.970482 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 4 17:11:22.973141 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 4 17:11:22.975724 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 4 17:11:22.975801 systemd[1]: Reached target paths.target - Path Units. Sep 4 17:11:22.977962 systemd[1]: Reached target timers.target - Timer Units. Sep 4 17:11:22.981994 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 4 17:11:22.987420 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 4 17:11:23.000271 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 4 17:11:23.004260 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 4 17:11:23.006717 systemd[1]: Reached target sockets.target - Socket Units. Sep 4 17:11:23.008788 systemd[1]: Reached target basic.target - Basic System. Sep 4 17:11:23.010983 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:23.011073 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 4 17:11:23.019220 systemd[1]: Starting containerd.service - containerd container runtime... Sep 4 17:11:23.026189 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 4 17:11:23.034277 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 4 17:11:23.049519 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Sep 4 17:11:23.057521 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 4 17:11:23.059735 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 4 17:11:23.073438 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 4 17:11:23.082271 systemd[1]: Started ntpd.service - Network Time Service. Sep 4 17:11:23.092181 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 4 17:11:23.101104 systemd[1]: Starting setup-oem.service - Setup OEM... Sep 4 17:11:23.109399 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 4 17:11:23.119514 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 4 17:11:23.123126 jq[1984]: false Sep 4 17:11:23.132030 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 4 17:11:23.136244 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 4 17:11:23.137340 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 4 17:11:23.141724 systemd[1]: Starting update-engine.service - Update Engine... Sep 4 17:11:23.151707 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 4 17:11:23.160622 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 4 17:11:23.162966 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 4 17:11:23.176977 dbus-daemon[1983]: [system] SELinux support is enabled Sep 4 17:11:23.186064 dbus-daemon[1983]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1931 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:23.209016 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 4 17:11:23.246088 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 4 17:11:23.246153 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 4 17:11:23.251558 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 4 17:11:23.251824 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 4 17:11:23.251909 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 4 17:11:23.273549 jq[1996]: true Sep 4 17:11:23.275457 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Sep 4 17:11:23.296935 extend-filesystems[1985]: Found loop4 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found loop5 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found loop6 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found loop7 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found nvme0n1 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found nvme0n1p1 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found nvme0n1p2 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found nvme0n1p3 Sep 4 17:11:23.296935 extend-filesystems[1985]: Found usr Sep 4 17:11:23.360886 extend-filesystems[1985]: Found nvme0n1p4 Sep 4 17:11:23.360886 extend-filesystems[1985]: Found nvme0n1p6 Sep 4 17:11:23.360886 extend-filesystems[1985]: Found nvme0n1p7 Sep 4 17:11:23.360886 extend-filesystems[1985]: Found nvme0n1p9 Sep 4 17:11:23.360886 extend-filesystems[1985]: Checking size of /dev/nvme0n1p9 Sep 4 17:11:23.318420 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 4 17:11:23.319963 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 4 17:11:23.344506 systemd[1]: motdgen.service: Deactivated successfully. Sep 4 17:11:23.347002 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 4 17:11:23.432725 (ntainerd)[2016]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: ---------------------------------------------------- Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Inc. 
(NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: corporation. Support and training for ntp-4 are Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: available at https://www.nwtime.org/support Sep 4 17:11:23.435210 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: ---------------------------------------------------- Sep 4 17:11:23.433575 ntpd[1987]: ntpd 4.2.8p17@1.4004-o Wed Sep 4 15:13:39 UTC 2024 (1): Starting Sep 4 17:11:23.433625 ntpd[1987]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Sep 4 17:11:23.453079 coreos-metadata[1982]: Sep 04 17:11:23.450 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:23.453516 tar[2002]: linux-arm64/helm Sep 4 17:11:23.453931 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 4 17:11:23.454012 update_engine[1994]: I0904 17:11:23.446018 1994 main.cc:92] Flatcar Update Engine starting Sep 4 17:11:23.433647 ntpd[1987]: ---------------------------------------------------- Sep 4 17:11:23.433666 ntpd[1987]: ntp-4 is maintained by Network Time Foundation, Sep 4 17:11:23.433684 ntpd[1987]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Sep 4 17:11:23.433703 ntpd[1987]: corporation. 
Support and training for ntp-4 are Sep 4 17:11:23.433721 ntpd[1987]: available at https://www.nwtime.org/support Sep 4 17:11:23.433740 ntpd[1987]: ---------------------------------------------------- Sep 4 17:11:23.447468 ntpd[1987]: proto: precision = 0.096 usec (-23) Sep 4 17:11:23.467186 jq[2015]: true Sep 4 17:11:23.487135 update_engine[1994]: I0904 17:11:23.461130 1994 update_check_scheduler.cc:74] Next update check in 8m59s Sep 4 17:11:23.487206 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: basedate set to 2024-08-23 Sep 4 17:11:23.487206 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:23.487206 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:23.487449 coreos-metadata[1982]: Sep 04 17:11:23.460 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Sep 4 17:11:23.487449 coreos-metadata[1982]: Sep 04 17:11:23.466 INFO Fetch successful Sep 4 17:11:23.487449 coreos-metadata[1982]: Sep 04 17:11:23.466 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Sep 4 17:11:23.455508 ntpd[1987]: basedate set to 2024-08-23 Sep 4 17:11:23.475507 systemd[1]: Started update-engine.service - Update Engine. Sep 4 17:11:23.489896 extend-filesystems[1985]: Resized partition /dev/nvme0n1p9 Sep 4 17:11:23.455552 ntpd[1987]: gps base set to 2024-08-25 (week 2329) Sep 4 17:11:23.488161 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 4 17:11:23.503352 coreos-metadata[1982]: Sep 04 17:11:23.488 INFO Fetch successful Sep 4 17:11:23.503352 coreos-metadata[1982]: Sep 04 17:11:23.489 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listen normally on 3 eth0 172.31.31.13:123 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: bind(21) AF_INET6 fe80::48f:caff:fe03:3d19%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: unable to create socket on eth0 (5) for fe80::48f:caff:fe03:3d19%2#123 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: failed to init interface for address fe80::48f:caff:fe03:3d19%2 Sep 4 17:11:23.503541 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:23.469751 ntpd[1987]: Listen and drop on 0 v6wildcard [::]:123 Sep 4 17:11:23.492919 ntpd[1987]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Sep 4 17:11:23.493249 ntpd[1987]: Listen normally on 2 lo 127.0.0.1:123 Sep 4 17:11:23.493318 ntpd[1987]: Listen normally on 3 eth0 172.31.31.13:123 Sep 4 17:11:23.493387 ntpd[1987]: Listen normally on 4 lo [::1]:123 Sep 4 17:11:23.493473 ntpd[1987]: bind(21) AF_INET6 fe80::48f:caff:fe03:3d19%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:23.493513 ntpd[1987]: unable to create socket on eth0 (5) for fe80::48f:caff:fe03:3d19%2#123 Sep 4 17:11:23.493540 ntpd[1987]: failed to init interface for address fe80::48f:caff:fe03:3d19%2 Sep 4 17:11:23.493594 ntpd[1987]: Listening on routing socket on fd #21 for interface updates Sep 4 17:11:23.520193 
coreos-metadata[1982]: Sep 04 17:11:23.512 INFO Fetch successful Sep 4 17:11:23.520193 coreos-metadata[1982]: Sep 04 17:11:23.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Sep 4 17:11:23.511041 systemd[1]: Finished setup-oem.service - Setup OEM. Sep 4 17:11:23.520440 extend-filesystems[2034]: resize2fs 1.47.0 (5-Feb-2023) Sep 4 17:11:23.533868 coreos-metadata[1982]: Sep 04 17:11:23.521 INFO Fetch successful Sep 4 17:11:23.533868 coreos-metadata[1982]: Sep 04 17:11:23.521 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Sep 4 17:11:23.533868 coreos-metadata[1982]: Sep 04 17:11:23.531 INFO Fetch failed with 404: resource not found Sep 4 17:11:23.533868 coreos-metadata[1982]: Sep 04 17:11:23.531 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Sep 4 17:11:23.539495 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Sep 4 17:11:23.539644 coreos-metadata[1982]: Sep 04 17:11:23.539 INFO Fetch successful Sep 4 17:11:23.539644 coreos-metadata[1982]: Sep 04 17:11:23.539 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Sep 4 17:11:23.541180 coreos-metadata[1982]: Sep 04 17:11:23.540 INFO Fetch successful Sep 4 17:11:23.541180 coreos-metadata[1982]: Sep 04 17:11:23.541 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Sep 4 17:11:23.541763 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:23.542304 coreos-metadata[1982]: Sep 04 17:11:23.542 INFO Fetch successful Sep 4 17:11:23.542304 coreos-metadata[1982]: Sep 04 17:11:23.542 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Sep 4 17:11:23.546202 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:23.546202 ntpd[1987]: 4 Sep 17:11:23 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock 
Unsynchronized Sep 4 17:11:23.544715 ntpd[1987]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Sep 4 17:11:23.552885 coreos-metadata[1982]: Sep 04 17:11:23.548 INFO Fetch successful Sep 4 17:11:23.552885 coreos-metadata[1982]: Sep 04 17:11:23.548 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Sep 4 17:11:23.559865 coreos-metadata[1982]: Sep 04 17:11:23.558 INFO Fetch successful Sep 4 17:11:23.617730 systemd-logind[1992]: Watching system buttons on /dev/input/event0 (Power Button) Sep 4 17:11:23.617799 systemd-logind[1992]: Watching system buttons on /dev/input/event1 (Sleep Button) Sep 4 17:11:23.618242 systemd-logind[1992]: New seat seat0. Sep 4 17:11:23.620171 systemd[1]: Started systemd-logind.service - User Login Management. Sep 4 17:11:23.671161 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Sep 4 17:11:23.724596 extend-filesystems[2034]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Sep 4 17:11:23.724596 extend-filesystems[2034]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 4 17:11:23.724596 extend-filesystems[2034]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Sep 4 17:11:23.737634 extend-filesystems[1985]: Resized filesystem in /dev/nvme0n1p9 Sep 4 17:11:23.752882 bash[2062]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:23.782345 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 4 17:11:23.784533 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 4 17:11:23.789601 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 4 17:11:23.794031 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 4 17:11:23.813417 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 4 17:11:23.824416 systemd[1]: Starting sshkeys.service... 
Sep 4 17:11:23.833184 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 4 17:11:23.833480 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 4 17:11:23.836146 dbus-daemon[1983]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2007 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 4 17:11:23.847379 systemd[1]: Starting polkit.service - Authorization Manager... Sep 4 17:11:23.919782 polkitd[2075]: Started polkitd version 121 Sep 4 17:11:23.933541 polkitd[2075]: Loading rules from directory /etc/polkit-1/rules.d Sep 4 17:11:23.933701 polkitd[2075]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 4 17:11:23.935429 polkitd[2075]: Finished loading, compiling and executing 2 rules Sep 4 17:11:23.965061 dbus-daemon[1983]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 4 17:11:23.967295 polkitd[2075]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 4 17:11:24.017030 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1788) Sep 4 17:11:24.025911 systemd[1]: Started polkit.service - Authorization Manager. Sep 4 17:11:24.034969 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 4 17:11:24.039439 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 4 17:11:24.042978 systemd-hostnamed[2007]: Hostname set to (transient) Sep 4 17:11:24.043936 systemd-resolved[1932]: System hostname changed to 'ip-172-31-31-13'. Sep 4 17:11:24.054237 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Sep 4 17:11:24.091046 locksmithd[2035]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 4 17:11:24.356547 coreos-metadata[2121]: Sep 04 17:11:24.356 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Sep 4 17:11:24.364530 coreos-metadata[2121]: Sep 04 17:11:24.361 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Sep 4 17:11:24.367122 coreos-metadata[2121]: Sep 04 17:11:24.366 INFO Fetch successful Sep 4 17:11:24.367122 coreos-metadata[2121]: Sep 04 17:11:24.367 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 4 17:11:24.367612 coreos-metadata[2121]: Sep 04 17:11:24.367 INFO Fetch successful Sep 4 17:11:24.371300 containerd[2016]: time="2024-09-04T17:11:24.371147830Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Sep 4 17:11:24.373188 unknown[2121]: wrote ssh authorized keys file for user: core Sep 4 17:11:24.437893 update-ssh-keys[2177]: Updated "/home/core/.ssh/authorized_keys" Sep 4 17:11:24.438298 ntpd[1987]: 4 Sep 17:11:24 ntpd[1987]: bind(24) AF_INET6 fe80::48f:caff:fe03:3d19%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:24.438298 ntpd[1987]: 4 Sep 17:11:24 ntpd[1987]: unable to create socket on eth0 (6) for fe80::48f:caff:fe03:3d19%2#123 Sep 4 17:11:24.438298 ntpd[1987]: 4 Sep 17:11:24 ntpd[1987]: failed to init interface for address fe80::48f:caff:fe03:3d19%2 Sep 4 17:11:24.434416 ntpd[1987]: bind(24) AF_INET6 fe80::48f:caff:fe03:3d19%2#123 flags 0x11 failed: Cannot assign requested address Sep 4 17:11:24.438955 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). 
Sep 4 17:11:24.434470 ntpd[1987]: unable to create socket on eth0 (6) for fe80::48f:caff:fe03:3d19%2#123 Sep 4 17:11:24.434499 ntpd[1987]: failed to init interface for address fe80::48f:caff:fe03:3d19%2 Sep 4 17:11:24.461563 systemd[1]: Finished sshkeys.service. Sep 4 17:11:24.511398 containerd[2016]: time="2024-09-04T17:11:24.511339379Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 4 17:11:24.511566 containerd[2016]: time="2024-09-04T17:11:24.511537727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.514252 containerd[2016]: time="2024-09-04T17:11:24.514192763Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.48-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:24.514405 containerd[2016]: time="2024-09-04T17:11:24.514376051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.514864 containerd[2016]: time="2024-09-04T17:11:24.514800203Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:24.514984 containerd[2016]: time="2024-09-04T17:11:24.514956167Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 4 17:11:24.515241 containerd[2016]: time="2024-09-04T17:11:24.515211203Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 4 17:11:24.515475 containerd[2016]: time="2024-09-04T17:11:24.515443043Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:24.515576 containerd[2016]: time="2024-09-04T17:11:24.515548847Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.515817 containerd[2016]: time="2024-09-04T17:11:24.515786387Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.516371 containerd[2016]: time="2024-09-04T17:11:24.516329939Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516476231Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516508199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516730211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516763007Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516902963Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Sep 4 17:11:24.517141 containerd[2016]: time="2024-09-04T17:11:24.516929975Z" level=info msg="metadata content store policy set" policy=shared Sep 4 17:11:24.524863 containerd[2016]: time="2024-09-04T17:11:24.523510307Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 4 17:11:24.524863 containerd[2016]: time="2024-09-04T17:11:24.523574783Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 4 17:11:24.524863 containerd[2016]: time="2024-09-04T17:11:24.523608431Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.525098879Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.526912031Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.526945127Z" level=info msg="NRI interface is disabled by configuration." Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.526981823Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527234435Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527268995Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527301143Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527333363Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527366003Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527401247Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527432555Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527465207Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527497055Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.527905 containerd[2016]: time="2024-09-04T17:11:24.527530475Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.528525 containerd[2016]: time="2024-09-04T17:11:24.527558939Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.528525 containerd[2016]: time="2024-09-04T17:11:24.527585579Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 4 17:11:24.528525 containerd[2016]: time="2024-09-04T17:11:24.527760647Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 4 17:11:24.529095 containerd[2016]: time="2024-09-04T17:11:24.529054271Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 4 17:11:24.529272 containerd[2016]: time="2024-09-04T17:11:24.529241927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.529398 containerd[2016]: time="2024-09-04T17:11:24.529369979Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 4 17:11:24.529541 containerd[2016]: time="2024-09-04T17:11:24.529512011Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 4 17:11:24.530673 containerd[2016]: time="2024-09-04T17:11:24.530625899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.530856107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.530897543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.530938595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.530969639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531001367Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531030503Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531059387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531092303Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531406619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531441731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531471167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531502871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531532103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531565631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.531854 containerd[2016]: time="2024-09-04T17:11:24.531594383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.532550 containerd[2016]: time="2024-09-04T17:11:24.531620579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 4 17:11:24.533058 containerd[2016]: time="2024-09-04T17:11:24.532937639Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 4 17:11:24.533376 containerd[2016]: time="2024-09-04T17:11:24.533347775Z" level=info msg="Connect containerd service"
Sep 4 17:11:24.533588 containerd[2016]: time="2024-09-04T17:11:24.533540447Z" level=info msg="using legacy CRI server"
Sep 4 17:11:24.533723 containerd[2016]: time="2024-09-04T17:11:24.533697971Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 4 17:11:24.534384 containerd[2016]: time="2024-09-04T17:11:24.534322763Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 4 17:11:24.550509 containerd[2016]: time="2024-09-04T17:11:24.550448663Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 4 17:11:24.551865 containerd[2016]: time="2024-09-04T17:11:24.551325335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 4 17:11:24.551865 containerd[2016]: time="2024-09-04T17:11:24.551504447Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 4 17:11:24.551865 containerd[2016]: time="2024-09-04T17:11:24.551533583Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 4 17:11:24.551865 containerd[2016]: time="2024-09-04T17:11:24.551563751Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.551445971Z" level=info msg="Start subscribing containerd event"
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.552224039Z" level=info msg="Start recovering state"
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.552344675Z" level=info msg="Start event monitor"
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.552368483Z" level=info msg="Start snapshots syncer"
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.552389087Z" level=info msg="Start cni network conf syncer for default"
Sep 4 17:11:24.552850 containerd[2016]: time="2024-09-04T17:11:24.552406691Z" level=info msg="Start streaming server"
Sep 4 17:11:24.553512 containerd[2016]: time="2024-09-04T17:11:24.553469243Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 4 17:11:24.553867 containerd[2016]: time="2024-09-04T17:11:24.553809947Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 4 17:11:24.554171 systemd[1]: Started containerd.service - containerd container runtime.
Sep 4 17:11:24.556098 containerd[2016]: time="2024-09-04T17:11:24.554542595Z" level=info msg="containerd successfully booted in 0.190865s"
Sep 4 17:11:24.869004 systemd-networkd[1931]: eth0: Gained IPv6LL
Sep 4 17:11:24.877388 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 4 17:11:24.884536 systemd[1]: Reached target network-online.target - Network is Online.
Sep 4 17:11:24.897303 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Sep 4 17:11:24.907023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:24.914316 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 4 17:11:25.056533 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 4 17:11:25.099284 amazon-ssm-agent[2189]: Initializing new seelog logger
Sep 4 17:11:25.102960 amazon-ssm-agent[2189]: New Seelog Logger Creation Complete
Sep 4 17:11:25.102960 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.102960 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.103255 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 processing appconfig overrides
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 processing appconfig overrides
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 processing appconfig overrides
Sep 4 17:11:25.106861 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO Proxy environment variables:
Sep 4 17:11:25.112317 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.112874 amazon-ssm-agent[2189]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Sep 4 17:11:25.113131 amazon-ssm-agent[2189]: 2024/09/04 17:11:25 processing appconfig overrides
Sep 4 17:11:25.208647 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO http_proxy:
Sep 4 17:11:25.213325 tar[2002]: linux-arm64/LICENSE
Sep 4 17:11:25.213895 tar[2002]: linux-arm64/README.md
Sep 4 17:11:25.260928 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 4 17:11:25.308852 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO no_proxy:
Sep 4 17:11:25.408088 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO https_proxy:
Sep 4 17:11:25.508274 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO Checking if agent identity type OnPrem can be assumed
Sep 4 17:11:25.605642 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO Checking if agent identity type EC2 can be assumed
Sep 4 17:11:25.704989 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO Agent will take identity from EC2
Sep 4 17:11:25.809075 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:25.845908 sshd_keygen[2018]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 4 17:11:25.908304 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:25.936486 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 4 17:11:25.953200 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 4 17:11:25.970791 systemd[1]: Started sshd@0-172.31.31.13:22-139.178.89.65:56456.service - OpenSSH per-connection server daemon (139.178.89.65:56456).
Sep 4 17:11:25.993105 systemd[1]: issuegen.service: Deactivated successfully.
Sep 4 17:11:25.996501 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 4 17:11:26.008376 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 4 17:11:26.017890 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Sep 4 17:11:26.049900 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 4 17:11:26.065293 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 4 17:11:26.073542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 4 17:11:26.076314 systemd[1]: Reached target getty.target - Login Prompts.
Sep 4 17:11:26.115922 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Sep 4 17:11:26.215912 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Sep 4 17:11:26.221883 sshd[2220]: Accepted publickey for core from 139.178.89.65 port 56456 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:26.228577 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:26.266151 systemd-logind[1992]: New session 1 of user core.
Sep 4 17:11:26.269347 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 4 17:11:26.276313 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 4 17:11:26.287576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:26.294095 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 4 17:11:26.306459 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:11:26.316312 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] Starting Core Agent
Sep 4 17:11:26.339969 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 4 17:11:26.357547 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 4 17:11:26.386150 (systemd)[2237]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:26.417223 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Sep 4 17:11:26.517567 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [Registrar] Starting registrar module
Sep 4 17:11:26.617928 amazon-ssm-agent[2189]: 2024-09-04 17:11:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Sep 4 17:11:26.712639 systemd[2237]: Queued start job for default target default.target.
Sep 4 17:11:26.721991 systemd[2237]: Created slice app.slice - User Application Slice.
Sep 4 17:11:26.722053 systemd[2237]: Reached target paths.target - Paths.
Sep 4 17:11:26.722086 systemd[2237]: Reached target timers.target - Timers.
Sep 4 17:11:26.726132 systemd[2237]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 4 17:11:26.771540 systemd[2237]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 4 17:11:26.771793 systemd[2237]: Reached target sockets.target - Sockets.
Sep 4 17:11:26.772889 systemd[2237]: Reached target basic.target - Basic System.
Sep 4 17:11:26.773027 systemd[2237]: Reached target default.target - Main User Target.
Sep 4 17:11:26.773096 systemd[2237]: Startup finished in 372ms.
Sep 4 17:11:26.773124 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 4 17:11:26.782617 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 4 17:11:26.785872 systemd[1]: Startup finished in 1.196s (kernel) + 9.035s (initrd) + 8.996s (userspace) = 19.227s.
Sep 4 17:11:26.960449 systemd[1]: Started sshd@1-172.31.31.13:22-139.178.89.65:56466.service - OpenSSH per-connection server daemon (139.178.89.65:56466).
Sep 4 17:11:27.201733 sshd[2256]: Accepted publickey for core from 139.178.89.65 port 56466 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:27.202785 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:27.216017 systemd-logind[1992]: New session 2 of user core.
Sep 4 17:11:27.223854 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 4 17:11:27.321361 kubelet[2234]: E0904 17:11:27.321127 2234 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:11:27.326575 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:11:27.328951 amazon-ssm-agent[2189]: 2024-09-04 17:11:27 INFO [EC2Identity] EC2 registration was successful.
Sep 4 17:11:27.331163 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:11:27.333049 systemd[1]: kubelet.service: Consumed 1.337s CPU time.
Sep 4 17:11:27.360082 amazon-ssm-agent[2189]: 2024-09-04 17:11:27 INFO [CredentialRefresher] credentialRefresher has started
Sep 4 17:11:27.360082 amazon-ssm-agent[2189]: 2024-09-04 17:11:27 INFO [CredentialRefresher] Starting credentials refresher loop
Sep 4 17:11:27.360082 amazon-ssm-agent[2189]: 2024-09-04 17:11:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Sep 4 17:11:27.362186 sshd[2256]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:27.369775 systemd[1]: sshd@1-172.31.31.13:22-139.178.89.65:56466.service: Deactivated successfully.
Sep 4 17:11:27.373608 systemd[1]: session-2.scope: Deactivated successfully.
Sep 4 17:11:27.375525 systemd-logind[1992]: Session 2 logged out. Waiting for processes to exit.
Sep 4 17:11:27.377763 systemd-logind[1992]: Removed session 2.
Sep 4 17:11:27.402408 systemd[1]: Started sshd@2-172.31.31.13:22-139.178.89.65:56470.service - OpenSSH per-connection server daemon (139.178.89.65:56470).
Sep 4 17:11:27.428983 amazon-ssm-agent[2189]: 2024-09-04 17:11:27 INFO [CredentialRefresher] Next credential rotation will be in 30.516658685066666 minutes
Sep 4 17:11:27.434298 ntpd[1987]: Listen normally on 7 eth0 [fe80::48f:caff:fe03:3d19%2]:123
Sep 4 17:11:27.434778 ntpd[1987]: 4 Sep 17:11:27 ntpd[1987]: Listen normally on 7 eth0 [fe80::48f:caff:fe03:3d19%2]:123
Sep 4 17:11:27.583514 sshd[2267]: Accepted publickey for core from 139.178.89.65 port 56470 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:27.586728 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:27.596051 systemd-logind[1992]: New session 3 of user core.
Sep 4 17:11:27.606208 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 4 17:11:27.725678 sshd[2267]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:27.731600 systemd[1]: sshd@2-172.31.31.13:22-139.178.89.65:56470.service: Deactivated successfully.
Sep 4 17:11:27.736084 systemd[1]: session-3.scope: Deactivated successfully.
Sep 4 17:11:27.737709 systemd-logind[1992]: Session 3 logged out. Waiting for processes to exit.
Sep 4 17:11:27.739356 systemd-logind[1992]: Removed session 3.
Sep 4 17:11:27.764349 systemd[1]: Started sshd@3-172.31.31.13:22-139.178.89.65:50340.service - OpenSSH per-connection server daemon (139.178.89.65:50340).
Sep 4 17:11:27.945114 sshd[2274]: Accepted publickey for core from 139.178.89.65 port 50340 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:27.947633 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:27.954964 systemd-logind[1992]: New session 4 of user core.
Sep 4 17:11:27.964088 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 4 17:11:28.090913 sshd[2274]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:28.095905 systemd[1]: sshd@3-172.31.31.13:22-139.178.89.65:50340.service: Deactivated successfully.
Sep 4 17:11:28.099409 systemd[1]: session-4.scope: Deactivated successfully.
Sep 4 17:11:28.102219 systemd-logind[1992]: Session 4 logged out. Waiting for processes to exit.
Sep 4 17:11:28.104105 systemd-logind[1992]: Removed session 4.
Sep 4 17:11:28.133324 systemd[1]: Started sshd@4-172.31.31.13:22-139.178.89.65:50342.service - OpenSSH per-connection server daemon (139.178.89.65:50342).
Sep 4 17:11:28.300721 sshd[2281]: Accepted publickey for core from 139.178.89.65 port 50342 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:28.302646 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:28.310030 systemd-logind[1992]: New session 5 of user core.
Sep 4 17:11:28.321123 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 4 17:11:28.394997 amazon-ssm-agent[2189]: 2024-09-04 17:11:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Sep 4 17:11:28.461501 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 4 17:11:28.463003 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:28.485881 sudo[2286]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:28.496490 amazon-ssm-agent[2189]: 2024-09-04 17:11:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2285) started
Sep 4 17:11:28.510173 sshd[2281]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:28.522156 systemd[1]: sshd@4-172.31.31.13:22-139.178.89.65:50342.service: Deactivated successfully.
Sep 4 17:11:28.527843 systemd[1]: session-5.scope: Deactivated successfully.
Sep 4 17:11:28.529963 systemd-logind[1992]: Session 5 logged out. Waiting for processes to exit.
Sep 4 17:11:28.555033 systemd[1]: Started sshd@5-172.31.31.13:22-139.178.89.65:50354.service - OpenSSH per-connection server daemon (139.178.89.65:50354).
Sep 4 17:11:28.556780 systemd-logind[1992]: Removed session 5.
Sep 4 17:11:28.596775 amazon-ssm-agent[2189]: 2024-09-04 17:11:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Sep 4 17:11:28.743669 sshd[2296]: Accepted publickey for core from 139.178.89.65 port 50354 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:28.745280 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:28.754617 systemd-logind[1992]: New session 6 of user core.
Sep 4 17:11:28.766138 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 4 17:11:28.873241 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 4 17:11:28.873790 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:28.879948 sudo[2304]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:28.891236 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 4 17:11:28.892009 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:28.915150 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:28.928920 auditctl[2307]: No rules
Sep 4 17:11:28.929710 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 4 17:11:28.930182 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:28.943754 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 4 17:11:28.986954 augenrules[2325]: No rules
Sep 4 17:11:28.989973 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 4 17:11:28.991864 sudo[2303]: pam_unix(sudo:session): session closed for user root
Sep 4 17:11:29.015873 sshd[2296]: pam_unix(sshd:session): session closed for user core
Sep 4 17:11:29.022732 systemd[1]: sshd@5-172.31.31.13:22-139.178.89.65:50354.service: Deactivated successfully.
Sep 4 17:11:29.026717 systemd[1]: session-6.scope: Deactivated successfully.
Sep 4 17:11:29.028389 systemd-logind[1992]: Session 6 logged out. Waiting for processes to exit.
Sep 4 17:11:29.030530 systemd-logind[1992]: Removed session 6.
Sep 4 17:11:29.059426 systemd[1]: Started sshd@6-172.31.31.13:22-139.178.89.65:50358.service - OpenSSH per-connection server daemon (139.178.89.65:50358).
Sep 4 17:11:29.240330 sshd[2333]: Accepted publickey for core from 139.178.89.65 port 50358 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:11:29.242023 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:11:29.250098 systemd-logind[1992]: New session 7 of user core.
Sep 4 17:11:29.265163 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 4 17:11:29.369692 sudo[2336]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 4 17:11:29.370260 sudo[2336]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Sep 4 17:11:29.568341 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 4 17:11:29.577458 (dockerd)[2345]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 4 17:11:29.996951 dockerd[2345]: time="2024-09-04T17:11:29.996784890Z" level=info msg="Starting up"
Sep 4 17:11:30.587214 dockerd[2345]: time="2024-09-04T17:11:30.587136101Z" level=info msg="Loading containers: start."
Sep 4 17:11:30.777950 kernel: Initializing XFRM netlink socket
Sep 4 17:11:30.834638 (udev-worker)[2405]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:11:30.926638 systemd-networkd[1931]: docker0: Link UP
Sep 4 17:11:30.954295 dockerd[2345]: time="2024-09-04T17:11:30.954207859Z" level=info msg="Loading containers: done."
Sep 4 17:11:31.093484 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2604731099-merged.mount: Deactivated successfully.
Sep 4 17:11:31.110351 dockerd[2345]: time="2024-09-04T17:11:31.109917007Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 4 17:11:31.110351 dockerd[2345]: time="2024-09-04T17:11:31.110289397Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Sep 4 17:11:31.111655 dockerd[2345]: time="2024-09-04T17:11:31.111222350Z" level=info msg="Daemon has completed initialization"
Sep 4 17:11:31.194059 dockerd[2345]: time="2024-09-04T17:11:31.193856944Z" level=info msg="API listen on /run/docker.sock"
Sep 4 17:11:31.196309 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 4 17:11:32.304922 containerd[2016]: time="2024-09-04T17:11:32.304385569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\""
Sep 4 17:11:33.091067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount163010896.mount: Deactivated successfully.
Sep 4 17:11:34.974216 containerd[2016]: time="2024-09-04T17:11:34.974135914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.976068 containerd[2016]: time="2024-09-04T17:11:34.976010660Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.8: active requests=0, bytes read=32283562"
Sep 4 17:11:34.977509 containerd[2016]: time="2024-09-04T17:11:34.977413407Z" level=info msg="ImageCreate event name:\"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.983894 containerd[2016]: time="2024-09-04T17:11:34.983816787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:34.986477 containerd[2016]: time="2024-09-04T17:11:34.986134724Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.8\" with image id \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6f72fa926c9b05e10629fe1a092fd28dcd65b4fdfd0cc7bd55f85a57a6ba1fa5\", size \"32280362\" in 2.681678426s"
Sep 4 17:11:34.986477 containerd[2016]: time="2024-09-04T17:11:34.986199863Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.8\" returns image reference \"sha256:6b88c4d45de58e9ed0353538f5b2ae206a8582fcb53e67d0505abbe3a567fbae\""
Sep 4 17:11:35.025866 containerd[2016]: time="2024-09-04T17:11:35.025511350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\""
Sep 4 17:11:36.931908 containerd[2016]: time="2024-09-04T17:11:36.931558831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:36.934085 containerd[2016]: time="2024-09-04T17:11:36.933964875Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.8: active requests=0, bytes read=29368210"
Sep 4 17:11:36.935300 containerd[2016]: time="2024-09-04T17:11:36.935210272Z" level=info msg="ImageCreate event name:\"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:36.941463 containerd[2016]: time="2024-09-04T17:11:36.941400844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:36.944102 containerd[2016]: time="2024-09-04T17:11:36.943884921Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.8\" with image id \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6f27d63ded20614c68554b477cd7a78eda78a498a92bfe8935cf964ca5b74d0b\", size \"30855477\" in 1.918311418s"
Sep 4 17:11:36.944102 containerd[2016]: time="2024-09-04T17:11:36.943943620Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.8\" returns image reference \"sha256:bddc5fa0c49f499b7ec60c114671fcbb0436c22300448964f77acb6c13f0ffed\""
Sep 4 17:11:36.987191 containerd[2016]: time="2024-09-04T17:11:36.987133078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\""
Sep 4 17:11:37.345898 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 4 17:11:37.354214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 4 17:11:38.058261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 4 17:11:38.072419 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 4 17:11:38.236442 kubelet[2557]: E0904 17:11:38.236284 2557 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 4 17:11:38.248728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 4 17:11:38.249498 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 4 17:11:38.567026 containerd[2016]: time="2024-09-04T17:11:38.566940376Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:38.569065 containerd[2016]: time="2024-09-04T17:11:38.568916568Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.8: active requests=0, bytes read=15751073"
Sep 4 17:11:38.570739 containerd[2016]: time="2024-09-04T17:11:38.570623446Z" level=info msg="ImageCreate event name:\"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:38.580886 containerd[2016]: time="2024-09-04T17:11:38.579198688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:38.583418 containerd[2016]: time="2024-09-04T17:11:38.583329182Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.8\" with image id \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:da74a66675d95e39ec25da5e70729da746d0fa0b15ee0da872ac980519bc28bd\", size \"17238358\" in 1.596135462s"
Sep 4 17:11:38.583418 containerd[2016]: time="2024-09-04T17:11:38.583403616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.8\" returns image reference \"sha256:db329f69447ed4eb4b489d7c357c7723493b3a72946edb35a6c16973d5f257d4\""
Sep 4 17:11:38.633047 containerd[2016]: time="2024-09-04T17:11:38.632970573Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\""
Sep 4 17:11:40.054777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4163282176.mount: Deactivated successfully.
Sep 4 17:11:40.526297 containerd[2016]: time="2024-09-04T17:11:40.526197727Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:40.527782 containerd[2016]: time="2024-09-04T17:11:40.527630638Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.8: active requests=0, bytes read=25251883"
Sep 4 17:11:40.530866 containerd[2016]: time="2024-09-04T17:11:40.529257625Z" level=info msg="ImageCreate event name:\"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:40.533951 containerd[2016]: time="2024-09-04T17:11:40.533875662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:40.535138 containerd[2016]: time="2024-09-04T17:11:40.535082739Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.8\" with image id \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\", repo tag \"registry.k8s.io/kube-proxy:v1.29.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:559a093080f70ca863922f5e4bb90d6926d52653a91edb5b72c685ebb65f1858\", size \"25250902\" in 1.902031998s"
Sep 4 17:11:40.535264 containerd[2016]: time="2024-09-04T17:11:40.535135572Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.8\" returns image reference \"sha256:61223b17dfa4bd3d116a0b714c4f2cc2e3d83853942dfb8578f50cc8e91eb399\""
Sep 4 17:11:40.575021 containerd[2016]: time="2024-09-04T17:11:40.574915676Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Sep 4 17:11:41.181621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3359116097.mount: Deactivated successfully.
Sep 4 17:11:42.379568 containerd[2016]: time="2024-09-04T17:11:42.379483758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:42.384887 containerd[2016]: time="2024-09-04T17:11:42.384803409Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Sep 4 17:11:42.400043 containerd[2016]: time="2024-09-04T17:11:42.399948557Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:42.417647 containerd[2016]: time="2024-09-04T17:11:42.417542447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:42.420354 containerd[2016]: time="2024-09-04T17:11:42.420078399Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.845010579s"
Sep 4 17:11:42.420354 containerd[2016]: time="2024-09-04T17:11:42.420156060Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Sep 4 17:11:42.460651 containerd[2016]: time="2024-09-04T17:11:42.460587630Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Sep 4 17:11:43.034227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount347027261.mount: Deactivated successfully.
Sep 4 17:11:43.043604 containerd[2016]: time="2024-09-04T17:11:43.043105028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:43.044849 containerd[2016]: time="2024-09-04T17:11:43.044760044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Sep 4 17:11:43.046369 containerd[2016]: time="2024-09-04T17:11:43.046281363Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:43.051021 containerd[2016]: time="2024-09-04T17:11:43.050937397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:11:43.052741 containerd[2016]: time="2024-09-04T17:11:43.052570321Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 591.924064ms"
Sep 4 17:11:43.052741
containerd[2016]: time="2024-09-04T17:11:43.052620827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Sep 4 17:11:43.094481 containerd[2016]: time="2024-09-04T17:11:43.094384859Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Sep 4 17:11:43.682612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3419295765.mount: Deactivated successfully. Sep 4 17:11:46.384382 containerd[2016]: time="2024-09-04T17:11:46.384298337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.406686 containerd[2016]: time="2024-09-04T17:11:46.406611231Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Sep 4 17:11:46.433510 containerd[2016]: time="2024-09-04T17:11:46.433223578Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.456783 containerd[2016]: time="2024-09-04T17:11:46.456257588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:11:46.458236 containerd[2016]: time="2024-09-04T17:11:46.457946883Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.363502773s" Sep 4 17:11:46.458236 containerd[2016]: time="2024-09-04T17:11:46.458007128Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference 
\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Sep 4 17:11:48.346821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 4 17:11:48.356434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:48.885341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:48.900923 (kubelet)[2752]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 4 17:11:49.004868 kubelet[2752]: E0904 17:11:49.003287 2752 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 4 17:11:49.010150 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 4 17:11:49.010681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 4 17:11:54.083483 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 4 17:11:55.345950 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:55.358392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:55.397168 systemd[1]: Reloading requested from client PID 2769 ('systemctl') (unit session-7.scope)... Sep 4 17:11:55.397195 systemd[1]: Reloading... Sep 4 17:11:55.642895 zram_generator::config[2810]: No configuration found. Sep 4 17:11:55.912068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:11:56.090658 systemd[1]: Reloading finished in 692 ms. 
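[Annotation] The kubelet failure at 17:11:49 above is the expected pre-bootstrap state: kubeadm only writes /var/lib/kubelet/config.yaml during `kubeadm init`/`kubeadm join`, so until then the kubelet exits with status 1 and systemd keeps scheduling restarts (the "restart counter is at 2" line). A minimal sketch of that failure mode, using a scratch directory rather than the real /var/lib/kubelet (the helper name is illustrative, not a real kubelet interface):

```shell
# Sketch: reproduce the "no such file or directory" state the log shows,
# against a scratch directory instead of the real /var/lib/kubelet.
kubelet_config_state() {
  # Prints "present" or "missing" for the config path it is given.
  if [ -f "$1" ]; then echo present; else echo missing; fi
}

scratch=$(mktemp -d)
kubelet_config_state "$scratch/config.yaml"   # kubelet exits 1 while this is "missing"
rm -rf "$scratch"
```

Once kubeadm has written the file, the same unit starts cleanly, which is what the later 17:11:56 restart shows.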
Sep 4 17:11:56.199279 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:56.207367 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:11:56.207882 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:56.222365 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:11:56.668166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:11:56.679474 (kubelet)[2872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:11:56.770621 kubelet[2872]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:11:56.770621 kubelet[2872]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:11:56.771202 kubelet[2872]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
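[Annotation] The three deprecation warnings above point at the kubelet config file: `--container-runtime-endpoint` and `--volume-plugin-dir` have direct KubeletConfiguration (v1beta1) fields, while the `--pod-infra-container-image` sandbox image is meant to come from the CRI runtime (containerd's `sandbox_image`). A sketch of the equivalent config fragment; the socket path is an assumption for a containerd host, and the plugin dir is taken from the flexvolume path that appears later in this log:

```shell
# Sketch: KubeletConfiguration fields replacing the deprecated flags the log warns about.
print_kubelet_config() {
  cat <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
}
print_kubelet_config
```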
Sep 4 17:11:56.772505 kubelet[2872]: I0904 17:11:56.772373 2872 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:11:57.532945 kubelet[2872]: I0904 17:11:57.532624 2872 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:11:57.532945 kubelet[2872]: I0904 17:11:57.532693 2872 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:11:57.533456 kubelet[2872]: I0904 17:11:57.533225 2872 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:11:57.574970 kubelet[2872]: E0904 17:11:57.574628 2872 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.574970 kubelet[2872]: I0904 17:11:57.574722 2872 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:11:57.591904 kubelet[2872]: I0904 17:11:57.591227 2872 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 4 17:11:57.591904 kubelet[2872]: I0904 17:11:57.591661 2872 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:11:57.592332 kubelet[2872]: I0904 17:11:57.592292 2872 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Sep 4 17:11:57.592596 kubelet[2872]: I0904 17:11:57.592569 2872 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:11:57.592722 kubelet[2872]: I0904 17:11:57.592701 2872 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:11:57.597077 kubelet[2872]: I0904 
17:11:57.597023 2872 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:57.601883 kubelet[2872]: I0904 17:11:57.601720 2872 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:11:57.601883 kubelet[2872]: I0904 17:11:57.601783 2872 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:11:57.601883 kubelet[2872]: I0904 17:11:57.601850 2872 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:11:57.602903 kubelet[2872]: I0904 17:11:57.602177 2872 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:11:57.605671 kubelet[2872]: W0904 17:11:57.605591 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-13&limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.605971 kubelet[2872]: E0904 17:11:57.605942 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-13&limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.607942 kubelet[2872]: W0904 17:11:57.607820 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.608164 kubelet[2872]: E0904 17:11:57.608135 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.610880 kubelet[2872]: I0904 17:11:57.608489 2872 kuberuntime_manager.go:258] "Container 
runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:11:57.610880 kubelet[2872]: I0904 17:11:57.609221 2872 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:11:57.610880 kubelet[2872]: W0904 17:11:57.609352 2872 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 4 17:11:57.613956 kubelet[2872]: I0904 17:11:57.613891 2872 server.go:1256] "Started kubelet" Sep 4 17:11:57.623996 kubelet[2872]: I0904 17:11:57.623934 2872 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:11:57.629772 kubelet[2872]: E0904 17:11:57.629717 2872 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.31.13:6443/api/v1/namespaces/default/events\": dial tcp 172.31.31.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-31-13.17f219bf85fe6067 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-13,UID:ip-172-31-31-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-13,},FirstTimestamp:2024-09-04 17:11:57.613809767 +0000 UTC m=+0.926139498,LastTimestamp:2024-09-04 17:11:57.613809767 +0000 UTC m=+0.926139498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-13,}" Sep 4 17:11:57.634452 kubelet[2872]: I0904 17:11:57.634406 2872 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:11:57.636996 kubelet[2872]: I0904 17:11:57.635266 2872 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:11:57.638389 kubelet[2872]: I0904 17:11:57.638333 2872 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:11:57.639131 
kubelet[2872]: I0904 17:11:57.639064 2872 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:11:57.639131 kubelet[2872]: E0904 17:11:57.636535 2872 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:11:57.640587 kubelet[2872]: I0904 17:11:57.640514 2872 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:11:57.640787 kubelet[2872]: I0904 17:11:57.640731 2872 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:11:57.641277 kubelet[2872]: W0904 17:11:57.641199 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.641527 kubelet[2872]: E0904 17:11:57.641493 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.641811 kubelet[2872]: E0904 17:11:57.641777 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-13?timeout=10s\": dial tcp 172.31.31.13:6443: connect: connection refused" interval="200ms" Sep 4 17:11:57.642477 kubelet[2872]: I0904 17:11:57.642420 2872 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:11:57.646627 kubelet[2872]: I0904 17:11:57.646555 2872 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:11:57.648433 
kubelet[2872]: I0904 17:11:57.648360 2872 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:11:57.650977 kubelet[2872]: I0904 17:11:57.650296 2872 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:11:57.681797 kubelet[2872]: I0904 17:11:57.681682 2872 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:11:57.682435 kubelet[2872]: I0904 17:11:57.682023 2872 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:11:57.682435 kubelet[2872]: I0904 17:11:57.682068 2872 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:11:57.687818 kubelet[2872]: I0904 17:11:57.687770 2872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:11:57.690534 kubelet[2872]: I0904 17:11:57.690489 2872 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:11:57.691333 kubelet[2872]: I0904 17:11:57.690724 2872 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:11:57.691333 kubelet[2872]: I0904 17:11:57.690774 2872 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:11:57.691333 kubelet[2872]: E0904 17:11:57.690939 2872 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:11:57.696462 kubelet[2872]: I0904 17:11:57.696410 2872 policy_none.go:49] "None policy: Start" Sep 4 17:11:57.697696 kubelet[2872]: W0904 17:11:57.697545 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.697696 kubelet[2872]: E0904 17:11:57.697609 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:57.700758 kubelet[2872]: I0904 17:11:57.700157 2872 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:11:57.700758 kubelet[2872]: I0904 17:11:57.700238 2872 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:11:57.711362 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 4 17:11:57.729211 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 4 17:11:57.737684 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 4 17:11:57.740797 kubelet[2872]: I0904 17:11:57.740756 2872 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:11:57.741784 kubelet[2872]: E0904 17:11:57.741720 2872 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.13:6443/api/v1/nodes\": dial tcp 172.31.31.13:6443: connect: connection refused" node="ip-172-31-31-13" Sep 4 17:11:57.748179 kubelet[2872]: I0904 17:11:57.747983 2872 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:11:57.750553 kubelet[2872]: I0904 17:11:57.750513 2872 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:11:57.754534 kubelet[2872]: E0904 17:11:57.754332 2872 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-13\" not found" Sep 4 17:11:57.793999 kubelet[2872]: I0904 17:11:57.791873 2872 topology_manager.go:215] "Topology Admit Handler" podUID="3c77709120fffb8e37db74e3da7513fe" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-13" Sep 4 17:11:57.795661 kubelet[2872]: I0904 
17:11:57.795203 2872 topology_manager.go:215] "Topology Admit Handler" podUID="ce3e8061e99f721d78a2f4bb16ad404b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.798911 kubelet[2872]: I0904 17:11:57.798812 2872 topology_manager.go:215] "Topology Admit Handler" podUID="ad525826676285d852dc32a910977102" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-13" Sep 4 17:11:57.815426 systemd[1]: Created slice kubepods-burstable-pod3c77709120fffb8e37db74e3da7513fe.slice - libcontainer container kubepods-burstable-pod3c77709120fffb8e37db74e3da7513fe.slice. Sep 4 17:11:57.842685 kubelet[2872]: E0904 17:11:57.842593 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-13?timeout=10s\": dial tcp 172.31.31.13:6443: connect: connection refused" interval="400ms" Sep 4 17:11:57.848925 systemd[1]: Created slice kubepods-burstable-podce3e8061e99f721d78a2f4bb16ad404b.slice - libcontainer container kubepods-burstable-podce3e8061e99f721d78a2f4bb16ad404b.slice. Sep 4 17:11:57.872995 systemd[1]: Created slice kubepods-burstable-podad525826676285d852dc32a910977102.slice - libcontainer container kubepods-burstable-podad525826676285d852dc32a910977102.slice. 
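[Annotation] The `kubepods-burstable-pod<UID>.slice` units created above follow the systemd cgroup driver's convention of embedding the pod's QoS class and UID in the slice name. A sketch of that mapping (the helper is illustrative; UIDs containing dots would additionally be escaped, but these static-pod UIDs have none):

```shell
# Sketch: derive the systemd slice name for a pod from its QoS class and UID.
slice_for_pod() {
  # $1 = QoS class segment (e.g. burstable), $2 = pod UID
  echo "kubepods-${1}-pod${2}.slice"
}
slice_for_pod burstable 3c77709120fffb8e37db74e3da7513fe   # kube-apiserver pod UID from the log
```

The output matches the slice systemd reports creating for the kube-apiserver static pod at 17:11:57.815426.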
Sep 4 17:11:57.939940 kubelet[2872]: I0904 17:11:57.939790 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: \"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:11:57.939940 kubelet[2872]: I0904 17:11:57.939955 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.940227 kubelet[2872]: I0904 17:11:57.940010 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.940227 kubelet[2872]: I0904 17:11:57.940061 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.940227 kubelet[2872]: I0904 17:11:57.940113 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-13\" 
(UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.940227 kubelet[2872]: I0904 17:11:57.940161 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad525826676285d852dc32a910977102-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-13\" (UID: \"ad525826676285d852dc32a910977102\") " pod="kube-system/kube-scheduler-ip-172-31-31-13" Sep 4 17:11:57.940227 kubelet[2872]: I0904 17:11:57.940207 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: \"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:11:57.940733 kubelet[2872]: I0904 17:11:57.940276 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:11:57.940733 kubelet[2872]: I0904 17:11:57.940324 2872 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-ca-certs\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: \"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:11:57.945871 kubelet[2872]: I0904 17:11:57.945763 2872 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:11:57.946357 kubelet[2872]: E0904 17:11:57.946317 2872 kubelet_node_status.go:96] "Unable to register node with API server" err="Post 
\"https://172.31.31.13:6443/api/v1/nodes\": dial tcp 172.31.31.13:6443: connect: connection refused" node="ip-172-31-31-13" Sep 4 17:11:58.142191 containerd[2016]: time="2024-09-04T17:11:58.142124973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-13,Uid:3c77709120fffb8e37db74e3da7513fe,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:58.177357 containerd[2016]: time="2024-09-04T17:11:58.177087609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-13,Uid:ce3e8061e99f721d78a2f4bb16ad404b,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:58.180586 containerd[2016]: time="2024-09-04T17:11:58.180136941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-13,Uid:ad525826676285d852dc32a910977102,Namespace:kube-system,Attempt:0,}" Sep 4 17:11:58.244147 kubelet[2872]: E0904 17:11:58.244082 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-13?timeout=10s\": dial tcp 172.31.31.13:6443: connect: connection refused" interval="800ms" Sep 4 17:11:58.349622 kubelet[2872]: I0904 17:11:58.349569 2872 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:11:58.350368 kubelet[2872]: E0904 17:11:58.350110 2872 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.13:6443/api/v1/nodes\": dial tcp 172.31.31.13:6443: connect: connection refused" node="ip-172-31-31-13" Sep 4 17:11:58.515508 kubelet[2872]: W0904 17:11:58.515061 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.31.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-13&limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.515508 kubelet[2872]: E0904 17:11:58.515253 2872 
reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-13&limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.676715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479872527.mount: Deactivated successfully. Sep 4 17:11:58.690363 containerd[2016]: time="2024-09-04T17:11:58.690267408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:58.692626 containerd[2016]: time="2024-09-04T17:11:58.692518464Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:58.694815 containerd[2016]: time="2024-09-04T17:11:58.694721376Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:11:58.695465 containerd[2016]: time="2024-09-04T17:11:58.695394156Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Sep 4 17:11:58.697909 containerd[2016]: time="2024-09-04T17:11:58.697769988Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:58.700443 containerd[2016]: time="2024-09-04T17:11:58.700199340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 4 17:11:58.702011 containerd[2016]: time="2024-09-04T17:11:58.701240088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:58.707019 kubelet[2872]: W0904 17:11:58.706953 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.31.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.707533 kubelet[2872]: E0904 17:11:58.707441 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.710174 containerd[2016]: time="2024-09-04T17:11:58.710065872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 4 17:11:58.712653 containerd[2016]: time="2024-09-04T17:11:58.712294308Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 535.060371ms" Sep 4 17:11:58.717258 containerd[2016]: time="2024-09-04T17:11:58.717163896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 536.881059ms" Sep 4 17:11:58.729488 containerd[2016]: time="2024-09-04T17:11:58.729392124Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.105667ms" Sep 4 17:11:58.756865 kubelet[2872]: W0904 17:11:58.753452 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.31.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.756865 kubelet[2872]: E0904 17:11:58.753569 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.771101 kubelet[2872]: W0904 17:11:58.770907 2872 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.31.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.771101 kubelet[2872]: E0904 17:11:58.771012 2872 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.13:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.13:6443: connect: connection refused Sep 4 17:11:58.934704 containerd[2016]: time="2024-09-04T17:11:58.934528393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:58.936296 containerd[2016]: time="2024-09-04T17:11:58.935730757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.936296 containerd[2016]: time="2024-09-04T17:11:58.935796277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:58.936296 containerd[2016]: time="2024-09-04T17:11:58.935866261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.942341 containerd[2016]: time="2024-09-04T17:11:58.940368769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:58.942341 containerd[2016]: time="2024-09-04T17:11:58.941426053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.942341 containerd[2016]: time="2024-09-04T17:11:58.941488525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:58.942341 containerd[2016]: time="2024-09-04T17:11:58.941524897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.942777 containerd[2016]: time="2024-09-04T17:11:58.940644325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:11:58.942777 containerd[2016]: time="2024-09-04T17:11:58.940774765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.942777 containerd[2016]: time="2024-09-04T17:11:58.940863697Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:11:58.942777 containerd[2016]: time="2024-09-04T17:11:58.940903285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:11:58.995508 systemd[1]: Started cri-containerd-d346f31134239a8c9130186bb038524d1e8710dc0ca7c8397c6912d61a0bb253.scope - libcontainer container d346f31134239a8c9130186bb038524d1e8710dc0ca7c8397c6912d61a0bb253. Sep 4 17:11:59.014702 systemd[1]: Started cri-containerd-b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29.scope - libcontainer container b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29. Sep 4 17:11:59.027124 systemd[1]: Started cri-containerd-e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864.scope - libcontainer container e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864. Sep 4 17:11:59.047110 kubelet[2872]: E0904 17:11:59.046759 2872 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-13?timeout=10s\": dial tcp 172.31.31.13:6443: connect: connection refused" interval="1.6s" Sep 4 17:11:59.147240 containerd[2016]: time="2024-09-04T17:11:59.145041466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-13,Uid:3c77709120fffb8e37db74e3da7513fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"d346f31134239a8c9130186bb038524d1e8710dc0ca7c8397c6912d61a0bb253\"" Sep 4 17:11:59.159034 kubelet[2872]: I0904 17:11:59.157584 2872 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:11:59.160452 kubelet[2872]: E0904 17:11:59.160414 2872 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.31.13:6443/api/v1/nodes\": dial tcp 172.31.31.13:6443: connect: connection refused" node="ip-172-31-31-13" Sep 4 
17:11:59.165973 containerd[2016]: time="2024-09-04T17:11:59.165683902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-13,Uid:ad525826676285d852dc32a910977102,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29\"" Sep 4 17:11:59.168880 containerd[2016]: time="2024-09-04T17:11:59.168716170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-13,Uid:ce3e8061e99f721d78a2f4bb16ad404b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864\"" Sep 4 17:11:59.172316 containerd[2016]: time="2024-09-04T17:11:59.171995350Z" level=info msg="CreateContainer within sandbox \"d346f31134239a8c9130186bb038524d1e8710dc0ca7c8397c6912d61a0bb253\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 4 17:11:59.179095 containerd[2016]: time="2024-09-04T17:11:59.178866274Z" level=info msg="CreateContainer within sandbox \"b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 4 17:11:59.182327 containerd[2016]: time="2024-09-04T17:11:59.182090206Z" level=info msg="CreateContainer within sandbox \"e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 4 17:11:59.217879 containerd[2016]: time="2024-09-04T17:11:59.217506383Z" level=info msg="CreateContainer within sandbox \"d346f31134239a8c9130186bb038524d1e8710dc0ca7c8397c6912d61a0bb253\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"482d5770b19b495c58e10f79652233178aff907b273cc91d2f0026078fbab893\"" Sep 4 17:11:59.219701 containerd[2016]: time="2024-09-04T17:11:59.219621899Z" level=info msg="StartContainer for \"482d5770b19b495c58e10f79652233178aff907b273cc91d2f0026078fbab893\"" Sep 4 17:11:59.225580 
containerd[2016]: time="2024-09-04T17:11:59.225466631Z" level=info msg="CreateContainer within sandbox \"b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12\"" Sep 4 17:11:59.227702 containerd[2016]: time="2024-09-04T17:11:59.226772303Z" level=info msg="StartContainer for \"dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12\"" Sep 4 17:11:59.235179 containerd[2016]: time="2024-09-04T17:11:59.235094819Z" level=info msg="CreateContainer within sandbox \"e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7\"" Sep 4 17:11:59.235900 containerd[2016]: time="2024-09-04T17:11:59.235793735Z" level=info msg="StartContainer for \"19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7\"" Sep 4 17:11:59.304374 systemd[1]: Started cri-containerd-482d5770b19b495c58e10f79652233178aff907b273cc91d2f0026078fbab893.scope - libcontainer container 482d5770b19b495c58e10f79652233178aff907b273cc91d2f0026078fbab893. Sep 4 17:11:59.315218 systemd[1]: Started cri-containerd-19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7.scope - libcontainer container 19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7. Sep 4 17:11:59.348152 systemd[1]: Started cri-containerd-dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12.scope - libcontainer container dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12. 
Sep 4 17:11:59.452702 containerd[2016]: time="2024-09-04T17:11:59.452355240Z" level=info msg="StartContainer for \"482d5770b19b495c58e10f79652233178aff907b273cc91d2f0026078fbab893\" returns successfully" Sep 4 17:11:59.507550 containerd[2016]: time="2024-09-04T17:11:59.507062496Z" level=info msg="StartContainer for \"19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7\" returns successfully" Sep 4 17:11:59.507550 containerd[2016]: time="2024-09-04T17:11:59.507339288Z" level=info msg="StartContainer for \"dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12\" returns successfully" Sep 4 17:12:00.765148 kubelet[2872]: I0904 17:12:00.765099 2872 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:12:03.307747 kubelet[2872]: E0904 17:12:03.307669 2872 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-13\" not found" node="ip-172-31-31-13" Sep 4 17:12:03.308394 kubelet[2872]: I0904 17:12:03.308263 2872 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-13" Sep 4 17:12:03.356433 kubelet[2872]: E0904 17:12:03.356383 2872 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-13.17f219bf85fe6067 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-13,UID:ip-172-31-31-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-31-13,},FirstTimestamp:2024-09-04 17:11:57.613809767 +0000 UTC m=+0.926139498,LastTimestamp:2024-09-04 17:11:57.613809767 +0000 UTC m=+0.926139498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-13,}" Sep 4 17:12:03.512556 kubelet[2872]: E0904 17:12:03.512499 2872 
event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-31-13.17f219bf87587653 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-31-13,UID:ip-172-31-31-13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ip-172-31-31-13,},FirstTimestamp:2024-09-04 17:11:57.636490835 +0000 UTC m=+0.948820602,LastTimestamp:2024-09-04 17:11:57.636490835 +0000 UTC m=+0.948820602,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-31-13,}" Sep 4 17:12:03.608326 kubelet[2872]: I0904 17:12:03.608255 2872 apiserver.go:52] "Watching apiserver" Sep 4 17:12:03.639750 kubelet[2872]: I0904 17:12:03.639694 2872 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:12:06.363685 systemd[1]: Reloading requested from client PID 3151 ('systemctl') (unit session-7.scope)... Sep 4 17:12:06.363718 systemd[1]: Reloading... Sep 4 17:12:06.534923 zram_generator::config[3189]: No configuration found. Sep 4 17:12:06.778974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 4 17:12:06.979538 systemd[1]: Reloading finished in 614 ms. Sep 4 17:12:07.056785 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:12:07.072656 systemd[1]: kubelet.service: Deactivated successfully. Sep 4 17:12:07.073174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:12:07.073270 systemd[1]: kubelet.service: Consumed 1.713s CPU time, 115.1M memory peak, 0B memory swap peak. 
Sep 4 17:12:07.081589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 4 17:12:07.485696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 4 17:12:07.496475 (kubelet)[3249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 4 17:12:07.650308 kubelet[3249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:12:07.650308 kubelet[3249]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 4 17:12:07.650308 kubelet[3249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 4 17:12:07.650817 kubelet[3249]: I0904 17:12:07.650415 3249 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 4 17:12:07.665027 kubelet[3249]: I0904 17:12:07.664962 3249 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Sep 4 17:12:07.665027 kubelet[3249]: I0904 17:12:07.665017 3249 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 4 17:12:07.665880 kubelet[3249]: I0904 17:12:07.665411 3249 server.go:919] "Client rotation is on, will bootstrap in background" Sep 4 17:12:07.674109 kubelet[3249]: I0904 17:12:07.674036 3249 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 4 17:12:07.678889 kubelet[3249]: I0904 17:12:07.678264 3249 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 4 17:12:07.703810 kubelet[3249]: I0904 17:12:07.702739 3249 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 4 17:12:07.703810 kubelet[3249]: I0904 17:12:07.703321 3249 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 4 17:12:07.703810 kubelet[3249]: I0904 17:12:07.703594 3249 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null
} Sep 4 17:12:07.703810 kubelet[3249]: I0904 17:12:07.703634 3249 topology_manager.go:138] "Creating topology manager with none policy" Sep 4 17:12:07.703810 kubelet[3249]: I0904 17:12:07.703655 3249 container_manager_linux.go:301] "Creating device plugin manager" Sep 4 17:12:07.704337 kubelet[3249]: I0904 17:12:07.704174 3249 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:12:07.704983 kubelet[3249]: I0904 17:12:07.704427 3249 kubelet.go:396] "Attempting to sync node with API server" Sep 4 17:12:07.705968 kubelet[3249]: I0904 17:12:07.705281 3249 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 4 17:12:07.705968 kubelet[3249]: I0904 17:12:07.705365 3249 kubelet.go:312] "Adding apiserver pod source" Sep 4 17:12:07.705968 kubelet[3249]: I0904 17:12:07.705391 3249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 4 17:12:07.712123 kubelet[3249]: I0904 17:12:07.712068 3249 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Sep 4 17:12:07.712922 kubelet[3249]: I0904 17:12:07.712427 3249 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 4 17:12:07.713199 kubelet[3249]: I0904 17:12:07.713136 3249 server.go:1256] "Started kubelet" Sep 4 17:12:07.716485 kubelet[3249]: I0904 17:12:07.716429 3249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 4 17:12:07.736778 kubelet[3249]: I0904 17:12:07.735191 3249 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Sep 4 17:12:07.736778 kubelet[3249]: I0904 17:12:07.736570 3249 server.go:461] "Adding debug handlers to kubelet server" Sep 4 17:12:07.749053 kubelet[3249]: I0904 17:12:07.748899 3249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 4 17:12:07.749786 kubelet[3249]: I0904 17:12:07.749250 3249 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 4 17:12:07.763126 kubelet[3249]: I0904 17:12:07.762882 3249 volume_manager.go:291] "Starting Kubelet Volume Manager" Sep 4 17:12:07.763860 kubelet[3249]: I0904 17:12:07.763775 3249 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Sep 4 17:12:07.764130 kubelet[3249]: I0904 17:12:07.764090 3249 reconciler_new.go:29] "Reconciler: start to sync state" Sep 4 17:12:07.770385 kubelet[3249]: I0904 17:12:07.767749 3249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 4 17:12:07.776058 kubelet[3249]: I0904 17:12:07.776005 3249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 4 17:12:07.776058 kubelet[3249]: I0904 17:12:07.776068 3249 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 4 17:12:07.776269 kubelet[3249]: I0904 17:12:07.776101 3249 kubelet.go:2329] "Starting kubelet main sync loop" Sep 4 17:12:07.776269 kubelet[3249]: E0904 17:12:07.776202 3249 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 4 17:12:07.832502 kubelet[3249]: E0904 17:12:07.832342 3249 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 4 17:12:07.840884 kubelet[3249]: I0904 17:12:07.840537 3249 factory.go:221] Registration of the containerd container factory successfully Sep 4 17:12:07.840884 kubelet[3249]: I0904 17:12:07.840571 3249 factory.go:221] Registration of the systemd container factory successfully Sep 4 17:12:07.840884 kubelet[3249]: I0904 17:12:07.840863 3249 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 4 17:12:07.880551 kubelet[3249]: E0904 17:12:07.876924 3249 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 4 17:12:07.880551 kubelet[3249]: I0904 17:12:07.877521 3249 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-31-13" Sep 4 17:12:07.896967 kubelet[3249]: I0904 17:12:07.896891 3249 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-31-13" Sep 4 17:12:07.897221 kubelet[3249]: I0904 17:12:07.897025 3249 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-31-13" Sep 4 17:12:07.969491 kubelet[3249]: I0904 17:12:07.969448 3249 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 4 17:12:07.969491 kubelet[3249]: I0904 17:12:07.969487 3249 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 4 17:12:07.969792 kubelet[3249]: I0904 17:12:07.969521 3249 state_mem.go:36] "Initialized new in-memory state store" Sep 4 17:12:07.969792 kubelet[3249]: I0904 17:12:07.969755 3249 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 4 17:12:07.969792 kubelet[3249]: I0904 17:12:07.969816 3249 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 4 17:12:07.969792 kubelet[3249]: I0904 17:12:07.969897 3249 policy_none.go:49] "None policy: Start" Sep 4 17:12:07.972412 kubelet[3249]: I0904 
17:12:07.972367 3249 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 4 17:12:07.972544 kubelet[3249]: I0904 17:12:07.972421 3249 state_mem.go:35] "Initializing new in-memory state store" Sep 4 17:12:07.972942 kubelet[3249]: I0904 17:12:07.972908 3249 state_mem.go:75] "Updated machine memory state" Sep 4 17:12:07.986072 kubelet[3249]: I0904 17:12:07.986021 3249 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 4 17:12:07.987258 kubelet[3249]: I0904 17:12:07.986679 3249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 4 17:12:08.077956 kubelet[3249]: I0904 17:12:08.077686 3249 topology_manager.go:215] "Topology Admit Handler" podUID="3c77709120fffb8e37db74e3da7513fe" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-13" Sep 4 17:12:08.080199 kubelet[3249]: I0904 17:12:08.079723 3249 topology_manager.go:215] "Topology Admit Handler" podUID="ce3e8061e99f721d78a2f4bb16ad404b" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.080199 kubelet[3249]: I0904 17:12:08.079933 3249 topology_manager.go:215] "Topology Admit Handler" podUID="ad525826676285d852dc32a910977102" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-13" Sep 4 17:12:08.099625 kubelet[3249]: E0904 17:12:08.099186 3249 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-13\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:12:08.104760 kubelet[3249]: E0904 17:12:08.104642 3249 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-13\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.104760 kubelet[3249]: E0904 17:12:08.104663 3249 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-31-13\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-31-13" Sep 4 17:12:08.168336 kubelet[3249]: I0904 17:12:08.168220 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.168336 kubelet[3249]: I0904 17:12:08.168335 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.168778 kubelet[3249]: I0904 17:12:08.168384 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-ca-certs\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: \"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:12:08.168778 kubelet[3249]: I0904 17:12:08.168431 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: \"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:12:08.168778 kubelet[3249]: I0904 17:12:08.168479 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c77709120fffb8e37db74e3da7513fe-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-13\" (UID: 
\"3c77709120fffb8e37db74e3da7513fe\") " pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:12:08.168778 kubelet[3249]: I0904 17:12:08.168522 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.168778 kubelet[3249]: I0904 17:12:08.168566 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.169095 kubelet[3249]: I0904 17:12:08.168631 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce3e8061e99f721d78a2f4bb16ad404b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-13\" (UID: \"ce3e8061e99f721d78a2f4bb16ad404b\") " pod="kube-system/kube-controller-manager-ip-172-31-31-13" Sep 4 17:12:08.169095 kubelet[3249]: I0904 17:12:08.168678 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad525826676285d852dc32a910977102-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-13\" (UID: \"ad525826676285d852dc32a910977102\") " pod="kube-system/kube-scheduler-ip-172-31-31-13" Sep 4 17:12:08.558124 update_engine[1994]: I0904 17:12:08.558049 1994 update_attempter.cc:509] Updating boot flags... 
Sep 4 17:12:08.706873 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3302) Sep 4 17:12:08.723497 kubelet[3249]: I0904 17:12:08.722925 3249 apiserver.go:52] "Watching apiserver" Sep 4 17:12:08.764703 kubelet[3249]: I0904 17:12:08.764640 3249 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Sep 4 17:12:08.992863 kubelet[3249]: E0904 17:12:08.991409 3249 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-13\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-13" Sep 4 17:12:09.084283 kubelet[3249]: I0904 17:12:09.083944 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-13" podStartSLOduration=3.083815676 podStartE2EDuration="3.083815676s" podCreationTimestamp="2024-09-04 17:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:08.995294255 +0000 UTC m=+1.486585256" watchObservedRunningTime="2024-09-04 17:12:09.083815676 +0000 UTC m=+1.575106665" Sep 4 17:12:09.084283 kubelet[3249]: I0904 17:12:09.084144 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-13" podStartSLOduration=3.084107204 podStartE2EDuration="3.084107204s" podCreationTimestamp="2024-09-04 17:12:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:09.083371016 +0000 UTC m=+1.574662017" watchObservedRunningTime="2024-09-04 17:12:09.084107204 +0000 UTC m=+1.575398193" Sep 4 17:12:09.306295 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3306) Sep 4 17:12:09.368906 kubelet[3249]: I0904 17:12:09.368589 3249 pod_startup_latency_tracker.go:102] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-ip-172-31-31-13" podStartSLOduration=4.368506305 podStartE2EDuration="4.368506305s" podCreationTimestamp="2024-09-04 17:12:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:09.170098148 +0000 UTC m=+1.661389161" watchObservedRunningTime="2024-09-04 17:12:09.368506305 +0000 UTC m=+1.859797306" Sep 4 17:12:15.014032 sudo[2336]: pam_unix(sudo:session): session closed for user root Sep 4 17:12:15.040272 sshd[2333]: pam_unix(sshd:session): session closed for user core Sep 4 17:12:15.046243 systemd[1]: sshd@6-172.31.31.13:22-139.178.89.65:50358.service: Deactivated successfully. Sep 4 17:12:15.049548 systemd[1]: session-7.scope: Deactivated successfully. Sep 4 17:12:15.049981 systemd[1]: session-7.scope: Consumed 12.264s CPU time, 131.9M memory peak, 0B memory swap peak. Sep 4 17:12:15.052929 systemd-logind[1992]: Session 7 logged out. Waiting for processes to exit. Sep 4 17:12:15.055749 systemd-logind[1992]: Removed session 7. Sep 4 17:12:20.728783 kubelet[3249]: I0904 17:12:20.728719 3249 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 4 17:12:20.729550 containerd[2016]: time="2024-09-04T17:12:20.729450801Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 4 17:12:20.730854 kubelet[3249]: I0904 17:12:20.729887 3249 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 4 17:12:21.530448 kubelet[3249]: I0904 17:12:21.530375 3249 topology_manager.go:215] "Topology Admit Handler" podUID="447f696f-ee69-409b-88a8-a505e20fcd00" podNamespace="kube-system" podName="kube-proxy-9xhtf"
Sep 4 17:12:21.554217 systemd[1]: Created slice kubepods-besteffort-pod447f696f_ee69_409b_88a8_a505e20fcd00.slice - libcontainer container kubepods-besteffort-pod447f696f_ee69_409b_88a8_a505e20fcd00.slice.
Sep 4 17:12:21.563686 kubelet[3249]: I0904 17:12:21.563629 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/447f696f-ee69-409b-88a8-a505e20fcd00-kube-proxy\") pod \"kube-proxy-9xhtf\" (UID: \"447f696f-ee69-409b-88a8-a505e20fcd00\") " pod="kube-system/kube-proxy-9xhtf"
Sep 4 17:12:21.565451 kubelet[3249]: I0904 17:12:21.563710 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/447f696f-ee69-409b-88a8-a505e20fcd00-lib-modules\") pod \"kube-proxy-9xhtf\" (UID: \"447f696f-ee69-409b-88a8-a505e20fcd00\") " pod="kube-system/kube-proxy-9xhtf"
Sep 4 17:12:21.565451 kubelet[3249]: I0904 17:12:21.563758 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl8f9\" (UniqueName: \"kubernetes.io/projected/447f696f-ee69-409b-88a8-a505e20fcd00-kube-api-access-hl8f9\") pod \"kube-proxy-9xhtf\" (UID: \"447f696f-ee69-409b-88a8-a505e20fcd00\") " pod="kube-system/kube-proxy-9xhtf"
Sep 4 17:12:21.565451 kubelet[3249]: I0904 17:12:21.563806 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/447f696f-ee69-409b-88a8-a505e20fcd00-xtables-lock\") pod \"kube-proxy-9xhtf\" (UID: \"447f696f-ee69-409b-88a8-a505e20fcd00\") " pod="kube-system/kube-proxy-9xhtf"
Sep 4 17:12:21.830049 kubelet[3249]: I0904 17:12:21.829259 3249 topology_manager.go:215] "Topology Admit Handler" podUID="a479223a-7a78-4d22-abec-bcf13f1f9ed5" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-7xp8d"
Sep 4 17:12:21.850885 systemd[1]: Created slice kubepods-besteffort-poda479223a_7a78_4d22_abec_bcf13f1f9ed5.slice - libcontainer container kubepods-besteffort-poda479223a_7a78_4d22_abec_bcf13f1f9ed5.slice.
Sep 4 17:12:21.858600 kubelet[3249]: W0904 17:12:21.858219 3249 reflector.go:539] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:21.858600 kubelet[3249]: E0904 17:12:21.858334 3249 reflector.go:147] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:21.859312 kubelet[3249]: W0904 17:12:21.859105 3249 reflector.go:539] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:21.859583 kubelet[3249]: E0904 17:12:21.859401 3249 reflector.go:147] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:21.867956 kubelet[3249]: I0904 17:12:21.867713 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a479223a-7a78-4d22-abec-bcf13f1f9ed5-var-lib-calico\") pod \"tigera-operator-5d56685c77-7xp8d\" (UID: \"a479223a-7a78-4d22-abec-bcf13f1f9ed5\") " pod="tigera-operator/tigera-operator-5d56685c77-7xp8d"
Sep 4 17:12:21.867956 kubelet[3249]: I0904 17:12:21.867792 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdxv5\" (UniqueName: \"kubernetes.io/projected/a479223a-7a78-4d22-abec-bcf13f1f9ed5-kube-api-access-jdxv5\") pod \"tigera-operator-5d56685c77-7xp8d\" (UID: \"a479223a-7a78-4d22-abec-bcf13f1f9ed5\") " pod="tigera-operator/tigera-operator-5d56685c77-7xp8d"
Sep 4 17:12:21.868633 containerd[2016]: time="2024-09-04T17:12:21.868577771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xhtf,Uid:447f696f-ee69-409b-88a8-a505e20fcd00,Namespace:kube-system,Attempt:0,}"
Sep 4 17:12:21.942865 containerd[2016]: time="2024-09-04T17:12:21.941551631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:12:21.942865 containerd[2016]: time="2024-09-04T17:12:21.941660699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:21.942865 containerd[2016]: time="2024-09-04T17:12:21.941722883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:12:21.942865 containerd[2016]: time="2024-09-04T17:12:21.941770979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:22.012430 systemd[1]: Started cri-containerd-a5ae62e99315d17c37728bef934e1082c50d281831e63ff4ce47a197a781c28b.scope - libcontainer container a5ae62e99315d17c37728bef934e1082c50d281831e63ff4ce47a197a781c28b.
Sep 4 17:12:22.059532 containerd[2016]: time="2024-09-04T17:12:22.059039804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xhtf,Uid:447f696f-ee69-409b-88a8-a505e20fcd00,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5ae62e99315d17c37728bef934e1082c50d281831e63ff4ce47a197a781c28b\""
Sep 4 17:12:22.068514 containerd[2016]: time="2024-09-04T17:12:22.068436488Z" level=info msg="CreateContainer within sandbox \"a5ae62e99315d17c37728bef934e1082c50d281831e63ff4ce47a197a781c28b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 4 17:12:22.096596 containerd[2016]: time="2024-09-04T17:12:22.096415496Z" level=info msg="CreateContainer within sandbox \"a5ae62e99315d17c37728bef934e1082c50d281831e63ff4ce47a197a781c28b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0865bf1df4baed81fd84ade92f2694e86bbf7bfac81872054eb6e620b05408c7\""
Sep 4 17:12:22.098130 containerd[2016]: time="2024-09-04T17:12:22.098081456Z" level=info msg="StartContainer for \"0865bf1df4baed81fd84ade92f2694e86bbf7bfac81872054eb6e620b05408c7\""
Sep 4 17:12:22.151172 systemd[1]: Started cri-containerd-0865bf1df4baed81fd84ade92f2694e86bbf7bfac81872054eb6e620b05408c7.scope - libcontainer container 0865bf1df4baed81fd84ade92f2694e86bbf7bfac81872054eb6e620b05408c7.
Sep 4 17:12:22.217387 containerd[2016]: time="2024-09-04T17:12:22.216990189Z" level=info msg="StartContainer for \"0865bf1df4baed81fd84ade92f2694e86bbf7bfac81872054eb6e620b05408c7\" returns successfully"
Sep 4 17:12:22.688684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3337374990.mount: Deactivated successfully.
Sep 4 17:12:22.931647 kubelet[3249]: I0904 17:12:22.931233 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9xhtf" podStartSLOduration=1.9311577 podStartE2EDuration="1.9311577s" podCreationTimestamp="2024-09-04 17:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:22.929635596 +0000 UTC m=+15.420926597" watchObservedRunningTime="2024-09-04 17:12:22.9311577 +0000 UTC m=+15.422448713"
Sep 4 17:12:23.062266 containerd[2016]: time="2024-09-04T17:12:23.061858497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7xp8d,Uid:a479223a-7a78-4d22-abec-bcf13f1f9ed5,Namespace:tigera-operator,Attempt:0,}"
Sep 4 17:12:23.109484 containerd[2016]: time="2024-09-04T17:12:23.109089309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:12:23.109484 containerd[2016]: time="2024-09-04T17:12:23.109191753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:23.109484 containerd[2016]: time="2024-09-04T17:12:23.109223301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:12:23.109484 containerd[2016]: time="2024-09-04T17:12:23.109246833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:23.151221 systemd[1]: Started cri-containerd-0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e.scope - libcontainer container 0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e.
Sep 4 17:12:23.215280 containerd[2016]: time="2024-09-04T17:12:23.215211598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-7xp8d,Uid:a479223a-7a78-4d22-abec-bcf13f1f9ed5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e\""
Sep 4 17:12:23.219772 containerd[2016]: time="2024-09-04T17:12:23.219703414Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\""
Sep 4 17:12:24.599010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347611065.mount: Deactivated successfully.
Sep 4 17:12:25.301563 containerd[2016]: time="2024-09-04T17:12:25.301482660Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:25.303214 containerd[2016]: time="2024-09-04T17:12:25.303156144Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485895"
Sep 4 17:12:25.304762 containerd[2016]: time="2024-09-04T17:12:25.304643172Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:25.310775 containerd[2016]: time="2024-09-04T17:12:25.310691868Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:25.312330 containerd[2016]: time="2024-09-04T17:12:25.312266868Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.092486306s"
Sep 4 17:12:25.312432 containerd[2016]: time="2024-09-04T17:12:25.312327300Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\""
Sep 4 17:12:25.316077 containerd[2016]: time="2024-09-04T17:12:25.315805236Z" level=info msg="CreateContainer within sandbox \"0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 4 17:12:25.338012 containerd[2016]: time="2024-09-04T17:12:25.337596792Z" level=info msg="CreateContainer within sandbox \"0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4\""
Sep 4 17:12:25.338677 containerd[2016]: time="2024-09-04T17:12:25.338558004Z" level=info msg="StartContainer for \"01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4\""
Sep 4 17:12:25.404169 systemd[1]: Started cri-containerd-01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4.scope - libcontainer container 01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4.
Sep 4 17:12:25.450517 containerd[2016]: time="2024-09-04T17:12:25.450328273Z" level=info msg="StartContainer for \"01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4\" returns successfully"
Sep 4 17:12:27.805756 kubelet[3249]: I0904 17:12:27.805071 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-7xp8d" podStartSLOduration=4.709572351 podStartE2EDuration="6.805001081s" podCreationTimestamp="2024-09-04 17:12:21 +0000 UTC" firstStartedPulling="2024-09-04 17:12:23.217315702 +0000 UTC m=+15.708606691" lastFinishedPulling="2024-09-04 17:12:25.312744432 +0000 UTC m=+17.804035421" observedRunningTime="2024-09-04 17:12:25.939594075 +0000 UTC m=+18.430885088" watchObservedRunningTime="2024-09-04 17:12:27.805001081 +0000 UTC m=+20.296292070"
Sep 4 17:12:30.487940 kubelet[3249]: I0904 17:12:30.487876 3249 topology_manager.go:215] "Topology Admit Handler" podUID="4006e3cb-a929-4e8e-9fcf-b76036905aa5" podNamespace="calico-system" podName="calico-typha-55b78c5549-hnzj5"
Sep 4 17:12:30.505935 systemd[1]: Created slice kubepods-besteffort-pod4006e3cb_a929_4e8e_9fcf_b76036905aa5.slice - libcontainer container kubepods-besteffort-pod4006e3cb_a929_4e8e_9fcf_b76036905aa5.slice.
Sep 4 17:12:30.507314 kubelet[3249]: W0904 17:12:30.506921 3249 reflector.go:539] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.509872 kubelet[3249]: E0904 17:12:30.507023 3249 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.509872 kubelet[3249]: W0904 17:12:30.507759 3249 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.509872 kubelet[3249]: E0904 17:12:30.507850 3249 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.509872 kubelet[3249]: W0904 17:12:30.507963 3249 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.509872 kubelet[3249]: E0904 17:12:30.508019 3249 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-31-13" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-31-13' and this object
Sep 4 17:12:30.524511 kubelet[3249]: I0904 17:12:30.524441 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4006e3cb-a929-4e8e-9fcf-b76036905aa5-tigera-ca-bundle\") pod \"calico-typha-55b78c5549-hnzj5\" (UID: \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " pod="calico-system/calico-typha-55b78c5549-hnzj5"
Sep 4 17:12:30.524679 kubelet[3249]: I0904 17:12:30.524530 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/4006e3cb-a929-4e8e-9fcf-b76036905aa5-kube-api-access-l5n4b\") pod \"calico-typha-55b78c5549-hnzj5\" (UID: \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " pod="calico-system/calico-typha-55b78c5549-hnzj5"
Sep 4 17:12:30.524679 kubelet[3249]: I0904 17:12:30.524585 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs\") pod \"calico-typha-55b78c5549-hnzj5\" (UID: \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " pod="calico-system/calico-typha-55b78c5549-hnzj5"
Sep 4 17:12:30.713296 kubelet[3249]: I0904 17:12:30.713222 3249 topology_manager.go:215] "Topology Admit Handler" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" podNamespace="calico-system" podName="calico-node-f4csl"
Sep 4 17:12:30.726868 kubelet[3249]: I0904 17:12:30.726142 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxqnv\" (UniqueName: \"kubernetes.io/projected/196c564f-0d10-4fe5-b950-6b201e0a3638-kube-api-access-cxqnv\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728183 kubelet[3249]: I0904 17:12:30.727169 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196c564f-0d10-4fe5-b950-6b201e0a3638-tigera-ca-bundle\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728183 kubelet[3249]: I0904 17:12:30.727238 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-flexvol-driver-host\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728183 kubelet[3249]: I0904 17:12:30.727285 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-lib-calico\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728183 kubelet[3249]: I0904 17:12:30.727364 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-lib-modules\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728183 kubelet[3249]: I0904 17:12:30.727440 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-xtables-lock\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728570 kubelet[3249]: I0904 17:12:30.727488 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/196c564f-0d10-4fe5-b950-6b201e0a3638-node-certs\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728570 kubelet[3249]: I0904 17:12:30.727536 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-run-calico\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728570 kubelet[3249]: I0904 17:12:30.727579 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-bin-dir\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728570 kubelet[3249]: I0904 17:12:30.727658 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-log-dir\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.728570 kubelet[3249]: I0904 17:12:30.727707 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-net-dir\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.729969 kubelet[3249]: I0904 17:12:30.727752 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-policysync\") pod \"calico-node-f4csl\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") " pod="calico-system/calico-node-f4csl"
Sep 4 17:12:30.735166 systemd[1]: Created slice kubepods-besteffort-pod196c564f_0d10_4fe5_b950_6b201e0a3638.slice - libcontainer container kubepods-besteffort-pod196c564f_0d10_4fe5_b950_6b201e0a3638.slice.
Sep 4 17:12:30.817073 kubelet[3249]: I0904 17:12:30.814143 3249 topology_manager.go:215] "Topology Admit Handler" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" podNamespace="calico-system" podName="csi-node-driver-s6dqv"
Sep 4 17:12:30.818189 kubelet[3249]: E0904 17:12:30.817710 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b"
Sep 4 17:12:30.828556 kubelet[3249]: I0904 17:12:30.828490 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30c214dc-77a9-494e-bbbc-1b760a49564b-varrun\") pod \"csi-node-driver-s6dqv\" (UID: \"30c214dc-77a9-494e-bbbc-1b760a49564b\") " pod="calico-system/csi-node-driver-s6dqv"
Sep 4 17:12:30.828556 kubelet[3249]: I0904 17:12:30.828578 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30c214dc-77a9-494e-bbbc-1b760a49564b-registration-dir\") pod \"csi-node-driver-s6dqv\" (UID: \"30c214dc-77a9-494e-bbbc-1b760a49564b\") " pod="calico-system/csi-node-driver-s6dqv"
Sep 4 17:12:30.828804 kubelet[3249]: I0904 17:12:30.828670 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30c214dc-77a9-494e-bbbc-1b760a49564b-kubelet-dir\") pod \"csi-node-driver-s6dqv\" (UID: \"30c214dc-77a9-494e-bbbc-1b760a49564b\") " pod="calico-system/csi-node-driver-s6dqv"
Sep 4 17:12:30.828804 kubelet[3249]: I0904 17:12:30.828716 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/30c214dc-77a9-494e-bbbc-1b760a49564b-socket-dir\") pod \"csi-node-driver-s6dqv\" (UID: \"30c214dc-77a9-494e-bbbc-1b760a49564b\") " pod="calico-system/csi-node-driver-s6dqv"
Sep 4 17:12:30.828804 kubelet[3249]: I0904 17:12:30.828785 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4f9v\" (UniqueName: \"kubernetes.io/projected/30c214dc-77a9-494e-bbbc-1b760a49564b-kube-api-access-n4f9v\") pod \"csi-node-driver-s6dqv\" (UID: \"30c214dc-77a9-494e-bbbc-1b760a49564b\") " pod="calico-system/csi-node-driver-s6dqv"
Sep 4 17:12:30.831861 kubelet[3249]: E0904 17:12:30.830667 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.831861 kubelet[3249]: W0904 17:12:30.830709 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.831861 kubelet[3249]: E0904 17:12:30.830747 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.831861 kubelet[3249]: E0904 17:12:30.831854 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.832208 kubelet[3249]: W0904 17:12:30.831883 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.832208 kubelet[3249]: E0904 17:12:30.831922 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.832334 kubelet[3249]: E0904 17:12:30.832250 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.832334 kubelet[3249]: W0904 17:12:30.832267 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.832334 kubelet[3249]: E0904 17:12:30.832291 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.834239 kubelet[3249]: E0904 17:12:30.833139 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.834239 kubelet[3249]: W0904 17:12:30.833175 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.834239 kubelet[3249]: E0904 17:12:30.833210 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.845236 kubelet[3249]: E0904 17:12:30.845183 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.845236 kubelet[3249]: W0904 17:12:30.845225 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.845446 kubelet[3249]: E0904 17:12:30.845265 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.930875 kubelet[3249]: E0904 17:12:30.929902 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.930875 kubelet[3249]: W0904 17:12:30.929936 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.930875 kubelet[3249]: E0904 17:12:30.929970 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.931518 kubelet[3249]: E0904 17:12:30.931484 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.931967 kubelet[3249]: W0904 17:12:30.931648 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.931967 kubelet[3249]: E0904 17:12:30.931692 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.932222 kubelet[3249]: E0904 17:12:30.932198 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.934958 kubelet[3249]: W0904 17:12:30.934906 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.935436 kubelet[3249]: E0904 17:12:30.935168 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.935933 kubelet[3249]: E0904 17:12:30.935709 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.935933 kubelet[3249]: W0904 17:12:30.935736 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.935933 kubelet[3249]: E0904 17:12:30.935770 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.936497 kubelet[3249]: E0904 17:12:30.936452 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.936497 kubelet[3249]: W0904 17:12:30.936489 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.936651 kubelet[3249]: E0904 17:12:30.936540 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.937039 kubelet[3249]: E0904 17:12:30.937002 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.937039 kubelet[3249]: W0904 17:12:30.937032 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.937373 kubelet[3249]: E0904 17:12:30.937212 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.937473 kubelet[3249]: E0904 17:12:30.937440 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.937473 kubelet[3249]: W0904 17:12:30.937467 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.937599 kubelet[3249]: E0904 17:12:30.937506 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.938061 kubelet[3249]: E0904 17:12:30.938026 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.938061 kubelet[3249]: W0904 17:12:30.938057 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.938248 kubelet[3249]: E0904 17:12:30.938098 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.939177 kubelet[3249]: E0904 17:12:30.939145 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.939491 kubelet[3249]: W0904 17:12:30.939290 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.939881 kubelet[3249]: E0904 17:12:30.939817 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.941877 kubelet[3249]: E0904 17:12:30.940237 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.941877 kubelet[3249]: W0904 17:12:30.940402 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.943672 kubelet[3249]: E0904 17:12:30.943352 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.943672 kubelet[3249]: E0904 17:12:30.943473 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.943672 kubelet[3249]: W0904 17:12:30.943578 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.943672 kubelet[3249]: E0904 17:12:30.943638 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.944596 kubelet[3249]: E0904 17:12:30.944457 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.944596 kubelet[3249]: W0904 17:12:30.944487 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.944596 kubelet[3249]: E0904 17:12:30.944569 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.945322 kubelet[3249]: E0904 17:12:30.945124 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.945322 kubelet[3249]: W0904 17:12:30.945148 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.945322 kubelet[3249]: E0904 17:12:30.945212 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.945902 kubelet[3249]: E0904 17:12:30.945674 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.945902 kubelet[3249]: W0904 17:12:30.945696 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.945902 kubelet[3249]: E0904 17:12:30.945758 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.946656 kubelet[3249]: E0904 17:12:30.946447 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.946656 kubelet[3249]: W0904 17:12:30.946474 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.946656 kubelet[3249]: E0904 17:12:30.946571 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:30.947632 kubelet[3249]: E0904 17:12:30.947427 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:30.947632 kubelet[3249]: W0904 17:12:30.947455 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:30.947632 kubelet[3249]: E0904 17:12:30.947525 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 4 17:12:30.948187 kubelet[3249]: E0904 17:12:30.948047 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.948187 kubelet[3249]: W0904 17:12:30.948070 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.948187 kubelet[3249]: E0904 17:12:30.948152 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.948805 kubelet[3249]: E0904 17:12:30.948678 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.948805 kubelet[3249]: W0904 17:12:30.948702 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.948805 kubelet[3249]: E0904 17:12:30.948765 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.949588 kubelet[3249]: E0904 17:12:30.949348 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.949588 kubelet[3249]: W0904 17:12:30.949373 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.949588 kubelet[3249]: E0904 17:12:30.949547 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.950104 kubelet[3249]: E0904 17:12:30.949988 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.950104 kubelet[3249]: W0904 17:12:30.950012 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.950104 kubelet[3249]: E0904 17:12:30.950074 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.950967 kubelet[3249]: E0904 17:12:30.950616 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.950967 kubelet[3249]: W0904 17:12:30.950640 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.950967 kubelet[3249]: E0904 17:12:30.950715 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.951557 kubelet[3249]: E0904 17:12:30.951348 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.951557 kubelet[3249]: W0904 17:12:30.951375 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.951875 kubelet[3249]: E0904 17:12:30.951766 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.952650 kubelet[3249]: E0904 17:12:30.952161 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.952785 kubelet[3249]: W0904 17:12:30.952761 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.953142 kubelet[3249]: E0904 17:12:30.952986 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.953851 kubelet[3249]: E0904 17:12:30.953783 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.953956 kubelet[3249]: W0904 17:12:30.953869 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.954078 kubelet[3249]: E0904 17:12:30.954042 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.954678 kubelet[3249]: E0904 17:12:30.954613 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.954678 kubelet[3249]: W0904 17:12:30.954669 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.955050 kubelet[3249]: E0904 17:12:30.954989 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.955354 kubelet[3249]: E0904 17:12:30.955320 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.955431 kubelet[3249]: W0904 17:12:30.955348 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.955515 kubelet[3249]: E0904 17:12:30.955483 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.956202 kubelet[3249]: E0904 17:12:30.956107 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.956202 kubelet[3249]: W0904 17:12:30.956191 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.957032 kubelet[3249]: E0904 17:12:30.956996 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.957429 kubelet[3249]: E0904 17:12:30.957391 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.957429 kubelet[3249]: W0904 17:12:30.957420 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.957692 kubelet[3249]: E0904 17:12:30.957463 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:30.958150 kubelet[3249]: E0904 17:12:30.958113 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.958150 kubelet[3249]: W0904 17:12:30.958145 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.958334 kubelet[3249]: E0904 17:12:30.958185 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:30.959176 kubelet[3249]: E0904 17:12:30.959133 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:30.959176 kubelet[3249]: W0904 17:12:30.959164 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:30.959333 kubelet[3249]: E0904 17:12:30.959197 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.052865 kubelet[3249]: E0904 17:12:31.052790 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.052865 kubelet[3249]: W0904 17:12:31.052866 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.053122 kubelet[3249]: E0904 17:12:31.052909 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.053345 kubelet[3249]: E0904 17:12:31.053317 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.053422 kubelet[3249]: W0904 17:12:31.053345 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.053422 kubelet[3249]: E0904 17:12:31.053374 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.053720 kubelet[3249]: E0904 17:12:31.053692 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.053796 kubelet[3249]: W0904 17:12:31.053722 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.053796 kubelet[3249]: E0904 17:12:31.053749 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.054112 kubelet[3249]: E0904 17:12:31.054086 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.054178 kubelet[3249]: W0904 17:12:31.054111 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.054178 kubelet[3249]: E0904 17:12:31.054138 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.054449 kubelet[3249]: E0904 17:12:31.054424 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.054511 kubelet[3249]: W0904 17:12:31.054448 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.054511 kubelet[3249]: E0904 17:12:31.054476 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.054780 kubelet[3249]: E0904 17:12:31.054756 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.054882 kubelet[3249]: W0904 17:12:31.054779 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.054882 kubelet[3249]: E0904 17:12:31.054804 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.157138 kubelet[3249]: E0904 17:12:31.156962 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.157138 kubelet[3249]: W0904 17:12:31.156998 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.157138 kubelet[3249]: E0904 17:12:31.157032 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.158707 kubelet[3249]: E0904 17:12:31.157817 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.158707 kubelet[3249]: W0904 17:12:31.157867 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.158707 kubelet[3249]: E0904 17:12:31.157899 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.159521 kubelet[3249]: E0904 17:12:31.159281 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.159521 kubelet[3249]: W0904 17:12:31.159305 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.159521 kubelet[3249]: E0904 17:12:31.159332 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.160576 kubelet[3249]: E0904 17:12:31.160537 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.160576 kubelet[3249]: W0904 17:12:31.160569 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.160761 kubelet[3249]: E0904 17:12:31.160605 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.161042 kubelet[3249]: E0904 17:12:31.161008 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.161110 kubelet[3249]: W0904 17:12:31.161050 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.161110 kubelet[3249]: E0904 17:12:31.161080 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.161906 kubelet[3249]: E0904 17:12:31.161428 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.161906 kubelet[3249]: W0904 17:12:31.161452 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.161906 kubelet[3249]: E0904 17:12:31.161484 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.263325 kubelet[3249]: E0904 17:12:31.263278 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.263325 kubelet[3249]: W0904 17:12:31.263314 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.263594 kubelet[3249]: E0904 17:12:31.263350 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.263759 kubelet[3249]: E0904 17:12:31.263728 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.263759 kubelet[3249]: W0904 17:12:31.263755 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.263919 kubelet[3249]: E0904 17:12:31.263783 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.264254 kubelet[3249]: E0904 17:12:31.264220 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.264254 kubelet[3249]: W0904 17:12:31.264249 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.264400 kubelet[3249]: E0904 17:12:31.264277 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.264673 kubelet[3249]: E0904 17:12:31.264642 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.264673 kubelet[3249]: W0904 17:12:31.264667 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.264804 kubelet[3249]: E0904 17:12:31.264694 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.265060 kubelet[3249]: E0904 17:12:31.265033 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.265060 kubelet[3249]: W0904 17:12:31.265049 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.265216 kubelet[3249]: E0904 17:12:31.265074 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.265410 kubelet[3249]: E0904 17:12:31.265381 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.265471 kubelet[3249]: W0904 17:12:31.265408 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.265471 kubelet[3249]: E0904 17:12:31.265436 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.366877 kubelet[3249]: E0904 17:12:31.366615 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.366877 kubelet[3249]: W0904 17:12:31.366645 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.366877 kubelet[3249]: E0904 17:12:31.366677 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.367654 kubelet[3249]: E0904 17:12:31.367447 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.367654 kubelet[3249]: W0904 17:12:31.367471 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.367654 kubelet[3249]: E0904 17:12:31.367500 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.368223 kubelet[3249]: E0904 17:12:31.368201 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.368430 kubelet[3249]: W0904 17:12:31.368306 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.368430 kubelet[3249]: E0904 17:12:31.368339 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.369034 kubelet[3249]: E0904 17:12:31.368901 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.369034 kubelet[3249]: W0904 17:12:31.368923 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.369034 kubelet[3249]: E0904 17:12:31.368948 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.369736 kubelet[3249]: E0904 17:12:31.369467 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.369736 kubelet[3249]: W0904 17:12:31.369487 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.369736 kubelet[3249]: E0904 17:12:31.369512 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.370199 kubelet[3249]: E0904 17:12:31.370109 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.370199 kubelet[3249]: W0904 17:12:31.370128 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.370199 kubelet[3249]: E0904 17:12:31.370153 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 4 17:12:31.472378 kubelet[3249]: E0904 17:12:31.472253 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.473056 kubelet[3249]: W0904 17:12:31.472522 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.473056 kubelet[3249]: E0904 17:12:31.472566 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 4 17:12:31.475261 kubelet[3249]: E0904 17:12:31.475102 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 4 17:12:31.475261 kubelet[3249]: W0904 17:12:31.475133 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 4 17:12:31.475261 kubelet[3249]: E0904 17:12:31.475173 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 4 17:12:31.475748 kubelet[3249]: E0904 17:12:31.475720 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.475815 kubelet[3249]: W0904 17:12:31.475746 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.475815 kubelet[3249]: E0904 17:12:31.475774 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.476194 kubelet[3249]: E0904 17:12:31.476165 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.476274 kubelet[3249]: W0904 17:12:31.476192 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.476274 kubelet[3249]: E0904 17:12:31.476221 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.476650 kubelet[3249]: E0904 17:12:31.476625 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.476737 kubelet[3249]: W0904 17:12:31.476649 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.476737 kubelet[3249]: E0904 17:12:31.476675 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.477053 kubelet[3249]: E0904 17:12:31.477027 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.477163 kubelet[3249]: W0904 17:12:31.477052 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.477163 kubelet[3249]: E0904 17:12:31.477079 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.569763 kubelet[3249]: E0904 17:12:31.569714 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.570338 kubelet[3249]: W0904 17:12:31.569751 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.570338 kubelet[3249]: E0904 17:12:31.569808 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.573170 kubelet[3249]: E0904 17:12:31.573126 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.573170 kubelet[3249]: W0904 17:12:31.573161 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.573359 kubelet[3249]: E0904 17:12:31.573195 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.577797 kubelet[3249]: E0904 17:12:31.577758 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.577797 kubelet[3249]: W0904 17:12:31.577789 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.578027 kubelet[3249]: E0904 17:12:31.577820 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.578254 kubelet[3249]: E0904 17:12:31.578221 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.578254 kubelet[3249]: W0904 17:12:31.578249 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.578371 kubelet[3249]: E0904 17:12:31.578277 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.578739 kubelet[3249]: E0904 17:12:31.578707 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.578739 kubelet[3249]: W0904 17:12:31.578732 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.578888 kubelet[3249]: E0904 17:12:31.578762 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.579197 kubelet[3249]: E0904 17:12:31.579161 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.579197 kubelet[3249]: W0904 17:12:31.579189 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.579311 kubelet[3249]: E0904 17:12:31.579217 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.603718 kubelet[3249]: E0904 17:12:31.602683 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.603718 kubelet[3249]: W0904 17:12:31.602716 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.603718 kubelet[3249]: E0904 17:12:31.602748 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.617856 kubelet[3249]: E0904 17:12:31.616194 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.618058 kubelet[3249]: W0904 17:12:31.618007 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.618220 kubelet[3249]: E0904 17:12:31.618198 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.623148 kubelet[3249]: E0904 17:12:31.623102 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.623148 kubelet[3249]: W0904 17:12:31.623138 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.623355 kubelet[3249]: E0904 17:12:31.623175 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.630328 kubelet[3249]: E0904 17:12:31.626719 3249 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Sep 4 17:12:31.631773 kubelet[3249]: E0904 17:12:31.630569 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs podName:4006e3cb-a929-4e8e-9fcf-b76036905aa5 nodeName:}" failed. No retries permitted until 2024-09-04 17:12:32.126821268 +0000 UTC m=+24.618112257 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs") pod "calico-typha-55b78c5549-hnzj5" (UID: "4006e3cb-a929-4e8e-9fcf-b76036905aa5") : failed to sync secret cache: timed out waiting for the condition
Sep 4 17:12:31.646243 containerd[2016]: time="2024-09-04T17:12:31.646082372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4csl,Uid:196c564f-0d10-4fe5-b950-6b201e0a3638,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:31.681617 kubelet[3249]: E0904 17:12:31.681570 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.681617 kubelet[3249]: W0904 17:12:31.681607 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.681883 kubelet[3249]: E0904 17:12:31.681642 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.700707 containerd[2016]: time="2024-09-04T17:12:31.700416224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:12:31.700707 containerd[2016]: time="2024-09-04T17:12:31.700595264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:31.700707 containerd[2016]: time="2024-09-04T17:12:31.700646156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:12:31.700707 containerd[2016]: time="2024-09-04T17:12:31.700705352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:31.737359 systemd[1]: Started cri-containerd-fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5.scope - libcontainer container fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5.
Sep 4 17:12:31.784011 kubelet[3249]: E0904 17:12:31.783193 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.784011 kubelet[3249]: W0904 17:12:31.783227 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.784011 kubelet[3249]: E0904 17:12:31.783262 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.810558 containerd[2016]: time="2024-09-04T17:12:31.808180088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4csl,Uid:196c564f-0d10-4fe5-b950-6b201e0a3638,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\""
Sep 4 17:12:31.816262 containerd[2016]: time="2024-09-04T17:12:31.815882036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Sep 4 17:12:31.884402 kubelet[3249]: E0904 17:12:31.884367 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.884642 kubelet[3249]: W0904 17:12:31.884614 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.884793 kubelet[3249]: E0904 17:12:31.884772 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:31.986464 kubelet[3249]: E0904 17:12:31.986399 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:31.986895 kubelet[3249]: W0904 17:12:31.986434 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:31.986895 kubelet[3249]: E0904 17:12:31.986664 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.088888 kubelet[3249]: E0904 17:12:32.088606 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.088888 kubelet[3249]: W0904 17:12:32.088639 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.088888 kubelet[3249]: E0904 17:12:32.088693 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.189669 kubelet[3249]: E0904 17:12:32.189628 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.190194 kubelet[3249]: W0904 17:12:32.189853 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.190194 kubelet[3249]: E0904 17:12:32.189902 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.190902 kubelet[3249]: E0904 17:12:32.190650 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.190902 kubelet[3249]: W0904 17:12:32.190678 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.190902 kubelet[3249]: E0904 17:12:32.190710 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.191333 kubelet[3249]: E0904 17:12:32.191305 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.191680 kubelet[3249]: W0904 17:12:32.191470 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.191680 kubelet[3249]: E0904 17:12:32.191513 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.192428 kubelet[3249]: E0904 17:12:32.192168 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.192428 kubelet[3249]: W0904 17:12:32.192197 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.192428 kubelet[3249]: E0904 17:12:32.192230 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.193523 kubelet[3249]: E0904 17:12:32.192935 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.193523 kubelet[3249]: W0904 17:12:32.192968 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.193523 kubelet[3249]: E0904 17:12:32.193002 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.205970 kubelet[3249]: E0904 17:12:32.205783 3249 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 4 17:12:32.205970 kubelet[3249]: W0904 17:12:32.205817 3249 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 4 17:12:32.205970 kubelet[3249]: E0904 17:12:32.205885 3249 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 4 17:12:32.313644 containerd[2016]: time="2024-09-04T17:12:32.313580239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b78c5549-hnzj5,Uid:4006e3cb-a929-4e8e-9fcf-b76036905aa5,Namespace:calico-system,Attempt:0,}"
Sep 4 17:12:32.360961 containerd[2016]: time="2024-09-04T17:12:32.359689807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:12:32.360961 containerd[2016]: time="2024-09-04T17:12:32.359801731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:32.360961 containerd[2016]: time="2024-09-04T17:12:32.359915443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:12:32.360961 containerd[2016]: time="2024-09-04T17:12:32.359952751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:12:32.422137 systemd[1]: Started cri-containerd-b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09.scope - libcontainer container b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09.
Sep 4 17:12:32.497993 containerd[2016]: time="2024-09-04T17:12:32.497477000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55b78c5549-hnzj5,Uid:4006e3cb-a929-4e8e-9fcf-b76036905aa5,Namespace:calico-system,Attempt:0,} returns sandbox id \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\""
Sep 4 17:12:32.779247 kubelet[3249]: E0904 17:12:32.777244 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b"
Sep 4 17:12:33.072111 containerd[2016]: time="2024-09-04T17:12:33.071429347Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:33.075584 containerd[2016]: time="2024-09-04T17:12:33.075416875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Sep 4 17:12:33.078455 containerd[2016]: time="2024-09-04T17:12:33.077331235Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:33.082614 containerd[2016]: time="2024-09-04T17:12:33.082486735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:33.084638 containerd[2016]: time="2024-09-04T17:12:33.084328723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.268365507s"
Sep 4 17:12:33.084638 containerd[2016]: time="2024-09-04T17:12:33.084398947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Sep 4 17:12:33.086615 containerd[2016]: time="2024-09-04T17:12:33.086502367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Sep 4 17:12:33.091896 containerd[2016]: time="2024-09-04T17:12:33.091665847Z" level=info msg="CreateContainer within sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 4 17:12:33.131297 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1690522638.mount: Deactivated successfully.
Sep 4 17:12:33.144476 containerd[2016]: time="2024-09-04T17:12:33.143198383Z" level=info msg="CreateContainer within sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\""
Sep 4 17:12:33.145203 containerd[2016]: time="2024-09-04T17:12:33.144975847Z" level=info msg="StartContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\""
Sep 4 17:12:33.219613 systemd[1]: run-containerd-runc-k8s.io-0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900-runc.zODzpc.mount: Deactivated successfully.
Sep 4 17:12:33.241698 systemd[1]: Started cri-containerd-0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900.scope - libcontainer container 0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900.
Sep 4 17:12:33.404386 containerd[2016]: time="2024-09-04T17:12:33.404231420Z" level=info msg="StartContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" returns successfully"
Sep 4 17:12:33.481808 systemd[1]: cri-containerd-0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900.scope: Deactivated successfully.
Sep 4 17:12:33.964419 containerd[2016]: time="2024-09-04T17:12:33.964318871Z" level=info msg="StopContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" with timeout 5 (s)"
Sep 4 17:12:34.003007 containerd[2016]: time="2024-09-04T17:12:34.002552995Z" level=info msg="Stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" with signal terminated"
Sep 4 17:12:34.004713 containerd[2016]: time="2024-09-04T17:12:34.004015975Z" level=info msg="shim disconnected" id=0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900 namespace=k8s.io
Sep 4 17:12:34.004713 containerd[2016]: time="2024-09-04T17:12:34.004110811Z" level=warning msg="cleaning up after shim disconnected" id=0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900 namespace=k8s.io
Sep 4 17:12:34.004713 containerd[2016]: time="2024-09-04T17:12:34.004151875Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:12:34.006679 containerd[2016]: time="2024-09-04T17:12:34.004819399Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/2caacd458adeb29bcbba1b34dcba7d3ba1f1467c51f02769e89a157ea6e918fd->@: write: broken pipe"
Sep 4 17:12:34.021356 containerd[2016]: time="2024-09-04T17:12:34.020605063Z" level=error msg="StopContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" failed" error="failed to stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\": ttrpc: closed: unknown"
Sep 4 17:12:34.021543 kubelet[3249]: E0904 17:12:34.020975 3249 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\": ttrpc: closed: unknown" containerID="0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900"
Sep 4 17:12:34.021543 kubelet[3249]: E0904 17:12:34.021069 3249 kuberuntime_container.go:775] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = failed to stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\": ttrpc: closed: unknown" pod="calico-system/calico-node-f4csl" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" containerName="flexvol-driver" containerID="containerd://0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900" gracePeriod=5
Sep 4 17:12:34.021543 kubelet[3249]: E0904 17:12:34.021139 3249 kuberuntime_container.go:813] "Kill container failed" err="rpc error: code = Unknown desc = failed to stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\": ttrpc: closed: unknown" pod="calico-system/calico-node-f4csl" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" containerName="flexvol-driver" containerID={"Type":"containerd","ID":"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900"}
Sep 4 17:12:34.027901 containerd[2016]: time="2024-09-04T17:12:34.026918791Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\""
Sep 4 17:12:34.049490 systemd[1]: cri-containerd-fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5.scope: Deactivated successfully.
Sep 4 17:12:34.107707 containerd[2016]: time="2024-09-04T17:12:34.107610992Z" level=info msg="shim disconnected" id=fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5 namespace=k8s.io
Sep 4 17:12:34.107707 containerd[2016]: time="2024-09-04T17:12:34.107698388Z" level=warning msg="cleaning up after shim disconnected" id=fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5 namespace=k8s.io
Sep 4 17:12:34.108452 containerd[2016]: time="2024-09-04T17:12:34.107721812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:12:34.122161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900-rootfs.mount: Deactivated successfully.
Sep 4 17:12:34.122373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5-rootfs.mount: Deactivated successfully.
Sep 4 17:12:34.122506 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5-shm.mount: Deactivated successfully.
Sep 4 17:12:34.158529 containerd[2016]: time="2024-09-04T17:12:34.158414060Z" level=info msg="TearDown network for sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" successfully"
Sep 4 17:12:34.158529 containerd[2016]: time="2024-09-04T17:12:34.158496332Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" returns successfully"
Sep 4 17:12:34.160793 kubelet[3249]: E0904 17:12:34.160744 3249 kubelet.go:2032] failed to "KillContainer" for "flexvol-driver" with KillContainerError: "rpc error: code = Unknown desc = failed to stop container \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\": ttrpc: closed: unknown"
Sep 4 17:12:34.161762 kubelet[3249]: E0904 17:12:34.160875 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"flexvol-driver\" with KillContainerError: \"rpc error: code = Unknown desc = failed to stop container \\\"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\\\": ttrpc: closed: unknown\"" pod="calico-system/calico-node-f4csl" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638"
Sep 4 17:12:34.776873 kubelet[3249]: E0904 17:12:34.776779 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b"
Sep 4 17:12:34.977339 kubelet[3249]: I0904 17:12:34.976794 3249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5"
Sep 4 17:12:34.979969 containerd[2016]: time="2024-09-04T17:12:34.979495956Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\""
Sep 4 17:12:34.981182 containerd[2016]: time="2024-09-04T17:12:34.979702416Z" level=info msg="Container to stop \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 4 17:12:34.981182 containerd[2016]: time="2024-09-04T17:12:34.980874900Z" level=info msg="TearDown network for sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" successfully"
Sep 4 17:12:34.981182 containerd[2016]: time="2024-09-04T17:12:34.981036972Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" returns successfully"
Sep 4 17:12:35.121595 kubelet[3249]: I0904 17:12:35.120943 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196c564f-0d10-4fe5-b950-6b201e0a3638-tigera-ca-bundle\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125492 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-lib-modules\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125595 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-run-calico\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125647 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-lib-calico\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125706 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/196c564f-0d10-4fe5-b950-6b201e0a3638-node-certs\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125748 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-log-dir\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.127469 kubelet[3249]: I0904 17:12:35.125791 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-policysync\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.128367 kubelet[3249]: I0904 17:12:35.125814 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/196c564f-0d10-4fe5-b950-6b201e0a3638-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 4 17:12:35.128367 kubelet[3249]: I0904 17:12:35.125870 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxqnv\" (UniqueName: \"kubernetes.io/projected/196c564f-0d10-4fe5-b950-6b201e0a3638-kube-api-access-cxqnv\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.128367 kubelet[3249]: I0904 17:12:35.125917 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-xtables-lock\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.128367 kubelet[3249]: I0904 17:12:35.125920 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:12:35.128367 kubelet[3249]: I0904 17:12:35.125970 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.125983 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-flexvol-driver-host\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.126010 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.126030 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-bin-dir\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.126057 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-policysync" (OuterVolumeSpecName: "policysync") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.126073 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-net-dir\") pod \"196c564f-0d10-4fe5-b950-6b201e0a3638\" (UID: \"196c564f-0d10-4fe5-b950-6b201e0a3638\") "
Sep 4 17:12:35.132621 kubelet[3249]: I0904 17:12:35.126149 3249 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/196c564f-0d10-4fe5-b950-6b201e0a3638-tigera-ca-bundle\") on node \"ip-172-31-31-13\" DevicePath \"\""
Sep 4 17:12:35.133636 kubelet[3249]: I0904 17:12:35.126175 3249 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-lib-modules\") on node \"ip-172-31-31-13\" DevicePath \"\""
Sep 4 17:12:35.133636 kubelet[3249]: I0904 17:12:35.126200 3249 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-run-calico\") on node \"ip-172-31-31-13\" DevicePath \"\""
Sep 4 17:12:35.133636 kubelet[3249]: I0904 17:12:35.126224 3249 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-var-lib-calico\") on node \"ip-172-31-31-13\" DevicePath \"\""
Sep 4 17:12:35.133636 kubelet[3249]: I0904 17:12:35.126249 3249 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-policysync\") on node \"ip-172-31-31-13\" DevicePath \"\""
Sep 4 17:12:35.133636 kubelet[3249]: I0904 17:12:35.126291 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-net-dir"
(OuterVolumeSpecName: "cni-net-dir") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:12:35.145051 kubelet[3249]: I0904 17:12:35.144997 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:12:35.145406 kubelet[3249]: I0904 17:12:35.145321 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:12:35.146945 kubelet[3249]: I0904 17:12:35.146872 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:12:35.149068 systemd[1]: var-lib-kubelet-pods-196c564f\x2d0d10\x2d4fe5\x2db950\x2d6b201e0a3638-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
Sep 4 17:12:35.153442 kubelet[3249]: I0904 17:12:35.153091 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 4 17:12:35.160814 kubelet[3249]: I0904 17:12:35.160216 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/196c564f-0d10-4fe5-b950-6b201e0a3638-kube-api-access-cxqnv" (OuterVolumeSpecName: "kube-api-access-cxqnv") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "kube-api-access-cxqnv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:12:35.162125 kubelet[3249]: I0904 17:12:35.162049 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/196c564f-0d10-4fe5-b950-6b201e0a3638-node-certs" (OuterVolumeSpecName: "node-certs") pod "196c564f-0d10-4fe5-b950-6b201e0a3638" (UID: "196c564f-0d10-4fe5-b950-6b201e0a3638"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:12:35.162939 systemd[1]: var-lib-kubelet-pods-196c564f\x2d0d10\x2d4fe5\x2db950\x2d6b201e0a3638-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcxqnv.mount: Deactivated successfully. 
Sep 4 17:12:35.227486 kubelet[3249]: I0904 17:12:35.226667 3249 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/196c564f-0d10-4fe5-b950-6b201e0a3638-node-certs\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.227486 kubelet[3249]: I0904 17:12:35.226716 3249 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-log-dir\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.227486 kubelet[3249]: I0904 17:12:35.226743 3249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cxqnv\" (UniqueName: \"kubernetes.io/projected/196c564f-0d10-4fe5-b950-6b201e0a3638-kube-api-access-cxqnv\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.227486 kubelet[3249]: I0904 17:12:35.226771 3249 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-xtables-lock\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.229150 kubelet[3249]: I0904 17:12:35.228978 3249 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-flexvol-driver-host\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.229150 kubelet[3249]: I0904 17:12:35.229074 3249 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-bin-dir\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.229150 kubelet[3249]: I0904 17:12:35.229102 3249 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/196c564f-0d10-4fe5-b950-6b201e0a3638-cni-net-dir\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:35.819919 systemd[1]: Removed slice 
kubepods-besteffort-pod196c564f_0d10_4fe5_b950_6b201e0a3638.slice - libcontainer container kubepods-besteffort-pod196c564f_0d10_4fe5_b950_6b201e0a3638.slice. Sep 4 17:12:35.906650 containerd[2016]: time="2024-09-04T17:12:35.906536089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:35.909027 containerd[2016]: time="2024-09-04T17:12:35.908656909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Sep 4 17:12:35.910984 containerd[2016]: time="2024-09-04T17:12:35.910782913Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:35.920508 containerd[2016]: time="2024-09-04T17:12:35.919803565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:35.922377 containerd[2016]: time="2024-09-04T17:12:35.922284553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.835715442s" Sep 4 17:12:35.922377 containerd[2016]: time="2024-09-04T17:12:35.922350385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Sep 4 17:12:35.957462 containerd[2016]: time="2024-09-04T17:12:35.957403717Z" level=info msg="CreateContainer within sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:12:35.996925 containerd[2016]: time="2024-09-04T17:12:35.996237961Z" level=info msg="CreateContainer within sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\"" Sep 4 17:12:35.999257 containerd[2016]: time="2024-09-04T17:12:35.998097349Z" level=info msg="StartContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\"" Sep 4 17:12:36.079559 systemd[1]: Started cri-containerd-4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4.scope - libcontainer container 4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4. Sep 4 17:12:36.158468 kubelet[3249]: I0904 17:12:36.153760 3249 topology_manager.go:215] "Topology Admit Handler" podUID="80793598-1f33-4c5c-be4f-58e44aa19706" podNamespace="calico-system" podName="calico-node-9hwxs" Sep 4 17:12:36.169266 kubelet[3249]: E0904 17:12:36.166065 3249 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" containerName="flexvol-driver" Sep 4 17:12:36.169266 kubelet[3249]: I0904 17:12:36.166185 3249 memory_manager.go:354] "RemoveStaleState removing state" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" containerName="flexvol-driver" Sep 4 17:12:36.193517 systemd[1]: Created slice kubepods-besteffort-pod80793598_1f33_4c5c_be4f_58e44aa19706.slice - libcontainer container kubepods-besteffort-pod80793598_1f33_4c5c_be4f_58e44aa19706.slice. 
Sep 4 17:12:36.204814 containerd[2016]: time="2024-09-04T17:12:36.204739198Z" level=info msg="StartContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" returns successfully" Sep 4 17:12:36.340391 kubelet[3249]: I0904 17:12:36.340163 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-policysync\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.341847 kubelet[3249]: I0904 17:12:36.341793 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-xtables-lock\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.342203 kubelet[3249]: I0904 17:12:36.342168 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80793598-1f33-4c5c-be4f-58e44aa19706-tigera-ca-bundle\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.342727 kubelet[3249]: I0904 17:12:36.342596 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-var-run-calico\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.342948 kubelet[3249]: I0904 17:12:36.342927 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-cni-bin-dir\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.343183 kubelet[3249]: I0904 17:12:36.343108 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-var-lib-calico\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345133 kubelet[3249]: I0904 17:12:36.345048 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-lib-modules\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345791 kubelet[3249]: I0904 17:12:36.345337 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/80793598-1f33-4c5c-be4f-58e44aa19706-node-certs\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345791 kubelet[3249]: I0904 17:12:36.345458 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcfrp\" (UniqueName: \"kubernetes.io/projected/80793598-1f33-4c5c-be4f-58e44aa19706-kube-api-access-gcfrp\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345791 kubelet[3249]: I0904 17:12:36.345514 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-flexvol-driver-host\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345791 kubelet[3249]: I0904 17:12:36.345570 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-cni-net-dir\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.345791 kubelet[3249]: I0904 17:12:36.345616 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/80793598-1f33-4c5c-be4f-58e44aa19706-cni-log-dir\") pod \"calico-node-9hwxs\" (UID: \"80793598-1f33-4c5c-be4f-58e44aa19706\") " pod="calico-system/calico-node-9hwxs" Sep 4 17:12:36.506348 containerd[2016]: time="2024-09-04T17:12:36.506145384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9hwxs,Uid:80793598-1f33-4c5c-be4f-58e44aa19706,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:36.570871 containerd[2016]: time="2024-09-04T17:12:36.569594832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:36.570871 containerd[2016]: time="2024-09-04T17:12:36.569689236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.573946 containerd[2016]: time="2024-09-04T17:12:36.573582828Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:36.573946 containerd[2016]: time="2024-09-04T17:12:36.573666216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:36.629193 systemd[1]: Started cri-containerd-89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59.scope - libcontainer container 89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59. Sep 4 17:12:36.699509 containerd[2016]: time="2024-09-04T17:12:36.699425281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9hwxs,Uid:80793598-1f33-4c5c-be4f-58e44aa19706,Namespace:calico-system,Attempt:0,} returns sandbox id \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\"" Sep 4 17:12:36.713875 containerd[2016]: time="2024-09-04T17:12:36.713400889Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 4 17:12:36.754681 containerd[2016]: time="2024-09-04T17:12:36.754576429Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e\"" Sep 4 17:12:36.757420 containerd[2016]: time="2024-09-04T17:12:36.756725713Z" level=info msg="StartContainer for \"0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e\"" Sep 4 17:12:36.777638 kubelet[3249]: E0904 17:12:36.777548 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" Sep 4 17:12:36.825560 systemd[1]: Started cri-containerd-0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e.scope - libcontainer container 0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e. 
Sep 4 17:12:36.921537 containerd[2016]: time="2024-09-04T17:12:36.920993438Z" level=info msg="StartContainer for \"0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e\" returns successfully" Sep 4 17:12:36.989695 containerd[2016]: time="2024-09-04T17:12:36.989513762Z" level=info msg="StopContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" with timeout 300 (s)" Sep 4 17:12:36.990981 systemd[1]: cri-containerd-0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e.scope: Deactivated successfully. Sep 4 17:12:36.997535 containerd[2016]: time="2024-09-04T17:12:36.995505350Z" level=info msg="Stop container \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" with signal terminated" Sep 4 17:12:37.081421 systemd[1]: cri-containerd-4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4.scope: Deactivated successfully. Sep 4 17:12:37.096961 kubelet[3249]: I0904 17:12:37.096900 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-55b78c5549-hnzj5" podStartSLOduration=3.670239178 podStartE2EDuration="7.092801291s" podCreationTimestamp="2024-09-04 17:12:30 +0000 UTC" firstStartedPulling="2024-09-04 17:12:32.500245652 +0000 UTC m=+24.991536641" lastFinishedPulling="2024-09-04 17:12:35.922807777 +0000 UTC m=+28.414098754" observedRunningTime="2024-09-04 17:12:37.042075298 +0000 UTC m=+29.533366323" watchObservedRunningTime="2024-09-04 17:12:37.092801291 +0000 UTC m=+29.584092304" Sep 4 17:12:37.172600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4-rootfs.mount: Deactivated successfully. 
Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.382714956Z" level=info msg="shim disconnected" id=4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4 namespace=k8s.io Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.382805712Z" level=warning msg="cleaning up after shim disconnected" id=4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4 namespace=k8s.io Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.382847628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.383294568Z" level=info msg="shim disconnected" id=0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e namespace=k8s.io Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.383353032Z" level=warning msg="cleaning up after shim disconnected" id=0e26cb49ae5863a12570cb56cdf1f30f4b40eda1802dca6c5c01eae5df63401e namespace=k8s.io Sep 4 17:12:37.383692 containerd[2016]: time="2024-09-04T17:12:37.383373348Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:37.432966 containerd[2016]: time="2024-09-04T17:12:37.431108508Z" level=info msg="StopContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" returns successfully" Sep 4 17:12:37.432966 containerd[2016]: time="2024-09-04T17:12:37.432741516Z" level=info msg="StopPodSandbox for \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\"" Sep 4 17:12:37.432966 containerd[2016]: time="2024-09-04T17:12:37.432803820Z" level=info msg="Container to stop \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 4 17:12:37.437674 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09-shm.mount: Deactivated successfully. 
Sep 4 17:12:37.456935 systemd[1]: cri-containerd-b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09.scope: Deactivated successfully. Sep 4 17:12:37.526054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09-rootfs.mount: Deactivated successfully. Sep 4 17:12:37.536247 containerd[2016]: time="2024-09-04T17:12:37.536125345Z" level=info msg="shim disconnected" id=b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09 namespace=k8s.io Sep 4 17:12:37.536247 containerd[2016]: time="2024-09-04T17:12:37.536222257Z" level=warning msg="cleaning up after shim disconnected" id=b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09 namespace=k8s.io Sep 4 17:12:37.536247 containerd[2016]: time="2024-09-04T17:12:37.536245093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:37.578646 containerd[2016]: time="2024-09-04T17:12:37.578447173Z" level=info msg="TearDown network for sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" successfully" Sep 4 17:12:37.578646 containerd[2016]: time="2024-09-04T17:12:37.578514913Z" level=info msg="StopPodSandbox for \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" returns successfully" Sep 4 17:12:37.756820 kubelet[3249]: I0904 17:12:37.755937 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4006e3cb-a929-4e8e-9fcf-b76036905aa5-tigera-ca-bundle\") pod \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\" (UID: \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " Sep 4 17:12:37.758356 kubelet[3249]: I0904 17:12:37.757019 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/4006e3cb-a929-4e8e-9fcf-b76036905aa5-kube-api-access-l5n4b\") pod \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\" (UID: 
\"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " Sep 4 17:12:37.758356 kubelet[3249]: I0904 17:12:37.757100 3249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs\") pod \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\" (UID: \"4006e3cb-a929-4e8e-9fcf-b76036905aa5\") " Sep 4 17:12:37.776466 kubelet[3249]: I0904 17:12:37.774386 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4006e3cb-a929-4e8e-9fcf-b76036905aa5-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "4006e3cb-a929-4e8e-9fcf-b76036905aa5" (UID: "4006e3cb-a929-4e8e-9fcf-b76036905aa5"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 4 17:12:37.775846 systemd[1]: var-lib-kubelet-pods-4006e3cb\x2da929\x2d4e8e\x2d9fcf\x2db76036905aa5-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Sep 4 17:12:37.776091 systemd[1]: var-lib-kubelet-pods-4006e3cb\x2da929\x2d4e8e\x2d9fcf\x2db76036905aa5-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Sep 4 17:12:37.776239 systemd[1]: var-lib-kubelet-pods-4006e3cb\x2da929\x2d4e8e\x2d9fcf\x2db76036905aa5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl5n4b.mount: Deactivated successfully. Sep 4 17:12:37.781341 kubelet[3249]: I0904 17:12:37.780501 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4006e3cb-a929-4e8e-9fcf-b76036905aa5-kube-api-access-l5n4b" (OuterVolumeSpecName: "kube-api-access-l5n4b") pod "4006e3cb-a929-4e8e-9fcf-b76036905aa5" (UID: "4006e3cb-a929-4e8e-9fcf-b76036905aa5"). InnerVolumeSpecName "kube-api-access-l5n4b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 4 17:12:37.784104 kubelet[3249]: I0904 17:12:37.783313 3249 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "4006e3cb-a929-4e8e-9fcf-b76036905aa5" (UID: "4006e3cb-a929-4e8e-9fcf-b76036905aa5"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 4 17:12:37.788617 kubelet[3249]: I0904 17:12:37.788553 3249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="196c564f-0d10-4fe5-b950-6b201e0a3638" path="/var/lib/kubelet/pods/196c564f-0d10-4fe5-b950-6b201e0a3638/volumes" Sep 4 17:12:37.806349 systemd[1]: Removed slice kubepods-besteffort-pod4006e3cb_a929_4e8e_9fcf_b76036905aa5.slice - libcontainer container kubepods-besteffort-pod4006e3cb_a929_4e8e_9fcf_b76036905aa5.slice. Sep 4 17:12:37.857510 kubelet[3249]: I0904 17:12:37.857424 3249 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4006e3cb-a929-4e8e-9fcf-b76036905aa5-typha-certs\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:37.857510 kubelet[3249]: I0904 17:12:37.857495 3249 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4006e3cb-a929-4e8e-9fcf-b76036905aa5-tigera-ca-bundle\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:37.857804 kubelet[3249]: I0904 17:12:37.857524 3249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-l5n4b\" (UniqueName: \"kubernetes.io/projected/4006e3cb-a929-4e8e-9fcf-b76036905aa5-kube-api-access-l5n4b\") on node \"ip-172-31-31-13\" DevicePath \"\"" Sep 4 17:12:38.014295 kubelet[3249]: I0904 17:12:38.012761 3249 scope.go:117] "RemoveContainer" containerID="4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4" Sep 4 17:12:38.026140 containerd[2016]: 
time="2024-09-04T17:12:38.024017747Z" level=info msg="RemoveContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\"" Sep 4 17:12:38.026919 containerd[2016]: time="2024-09-04T17:12:38.026796155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\"" Sep 4 17:12:38.037395 containerd[2016]: time="2024-09-04T17:12:38.037320647Z" level=info msg="RemoveContainer for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" returns successfully" Sep 4 17:12:38.038444 kubelet[3249]: I0904 17:12:38.037862 3249 scope.go:117] "RemoveContainer" containerID="4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4" Sep 4 17:12:38.038714 containerd[2016]: time="2024-09-04T17:12:38.038527943Z" level=error msg="ContainerStatus for \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\": not found" Sep 4 17:12:38.039535 kubelet[3249]: E0904 17:12:38.038955 3249 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\": not found" containerID="4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4" Sep 4 17:12:38.039535 kubelet[3249]: I0904 17:12:38.039154 3249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4"} err="failed to get container status \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a7334f2a5aed6af799ae4d659fc54e39138447228ab8faa1a0f81977bc00be4\": not found" Sep 4 17:12:38.758760 kubelet[3249]: I0904 17:12:38.758697 3249 topology_manager.go:215] 
"Topology Admit Handler" podUID="28cfbda5-6475-45cc-b512-a83939f80322" podNamespace="calico-system" podName="calico-typha-7f4ddb74cb-2jgvt" Sep 4 17:12:38.759380 kubelet[3249]: E0904 17:12:38.758794 3249 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4006e3cb-a929-4e8e-9fcf-b76036905aa5" containerName="calico-typha" Sep 4 17:12:38.759380 kubelet[3249]: I0904 17:12:38.758871 3249 memory_manager.go:354] "RemoveStaleState removing state" podUID="4006e3cb-a929-4e8e-9fcf-b76036905aa5" containerName="calico-typha" Sep 4 17:12:38.777197 kubelet[3249]: E0904 17:12:38.777140 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" Sep 4 17:12:38.782652 systemd[1]: Created slice kubepods-besteffort-pod28cfbda5_6475_45cc_b512_a83939f80322.slice - libcontainer container kubepods-besteffort-pod28cfbda5_6475_45cc_b512_a83939f80322.slice. 
Sep 4 17:12:38.867357 kubelet[3249]: I0904 17:12:38.867069 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hws2d\" (UniqueName: \"kubernetes.io/projected/28cfbda5-6475-45cc-b512-a83939f80322-kube-api-access-hws2d\") pod \"calico-typha-7f4ddb74cb-2jgvt\" (UID: \"28cfbda5-6475-45cc-b512-a83939f80322\") " pod="calico-system/calico-typha-7f4ddb74cb-2jgvt" Sep 4 17:12:38.867357 kubelet[3249]: I0904 17:12:38.867175 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28cfbda5-6475-45cc-b512-a83939f80322-tigera-ca-bundle\") pod \"calico-typha-7f4ddb74cb-2jgvt\" (UID: \"28cfbda5-6475-45cc-b512-a83939f80322\") " pod="calico-system/calico-typha-7f4ddb74cb-2jgvt" Sep 4 17:12:38.867357 kubelet[3249]: I0904 17:12:38.867230 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/28cfbda5-6475-45cc-b512-a83939f80322-typha-certs\") pod \"calico-typha-7f4ddb74cb-2jgvt\" (UID: \"28cfbda5-6475-45cc-b512-a83939f80322\") " pod="calico-system/calico-typha-7f4ddb74cb-2jgvt" Sep 4 17:12:39.094183 containerd[2016]: time="2024-09-04T17:12:39.093793813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4ddb74cb-2jgvt,Uid:28cfbda5-6475-45cc-b512-a83939f80322,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:39.170649 containerd[2016]: time="2024-09-04T17:12:39.169274845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:39.170649 containerd[2016]: time="2024-09-04T17:12:39.169384693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:39.170649 containerd[2016]: time="2024-09-04T17:12:39.169429921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:39.170649 containerd[2016]: time="2024-09-04T17:12:39.169464481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:39.248221 systemd[1]: Started cri-containerd-ac4eaedafbef361faa12df82f2c9947d357a44082968272b89345f3af6c4073a.scope - libcontainer container ac4eaedafbef361faa12df82f2c9947d357a44082968272b89345f3af6c4073a. Sep 4 17:12:39.479590 containerd[2016]: time="2024-09-04T17:12:39.479464311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7f4ddb74cb-2jgvt,Uid:28cfbda5-6475-45cc-b512-a83939f80322,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac4eaedafbef361faa12df82f2c9947d357a44082968272b89345f3af6c4073a\"" Sep 4 17:12:39.525027 containerd[2016]: time="2024-09-04T17:12:39.524947503Z" level=info msg="CreateContainer within sandbox \"ac4eaedafbef361faa12df82f2c9947d357a44082968272b89345f3af6c4073a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 4 17:12:39.555159 containerd[2016]: time="2024-09-04T17:12:39.554006463Z" level=info msg="CreateContainer within sandbox \"ac4eaedafbef361faa12df82f2c9947d357a44082968272b89345f3af6c4073a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4e060ec50fc3037036c47b3435ffcdaa9efc90ad245edb402b9506107bc3c55b\"" Sep 4 17:12:39.557882 containerd[2016]: time="2024-09-04T17:12:39.557788899Z" level=info msg="StartContainer for \"4e060ec50fc3037036c47b3435ffcdaa9efc90ad245edb402b9506107bc3c55b\"" Sep 4 17:12:39.658209 systemd[1]: Started cri-containerd-4e060ec50fc3037036c47b3435ffcdaa9efc90ad245edb402b9506107bc3c55b.scope - libcontainer container 
4e060ec50fc3037036c47b3435ffcdaa9efc90ad245edb402b9506107bc3c55b. Sep 4 17:12:39.786961 kubelet[3249]: I0904 17:12:39.786251 3249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4006e3cb-a929-4e8e-9fcf-b76036905aa5" path="/var/lib/kubelet/pods/4006e3cb-a929-4e8e-9fcf-b76036905aa5/volumes" Sep 4 17:12:39.802037 containerd[2016]: time="2024-09-04T17:12:39.801870064Z" level=info msg="StartContainer for \"4e060ec50fc3037036c47b3435ffcdaa9efc90ad245edb402b9506107bc3c55b\" returns successfully" Sep 4 17:12:40.139688 kubelet[3249]: I0904 17:12:40.139368 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7f4ddb74cb-2jgvt" podStartSLOduration=7.13928063 podStartE2EDuration="7.13928063s" podCreationTimestamp="2024-09-04 17:12:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:40.056695105 +0000 UTC m=+32.547986106" watchObservedRunningTime="2024-09-04 17:12:40.13928063 +0000 UTC m=+32.630571619" Sep 4 17:12:40.777202 kubelet[3249]: E0904 17:12:40.776959 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" Sep 4 17:12:42.103377 containerd[2016]: time="2024-09-04T17:12:42.103294756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:42.105032 containerd[2016]: time="2024-09-04T17:12:42.104887144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887" Sep 4 17:12:42.106775 containerd[2016]: time="2024-09-04T17:12:42.106691884Z" level=info msg="ImageCreate event 
name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:42.112539 containerd[2016]: time="2024-09-04T17:12:42.112420432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:12:42.114296 containerd[2016]: time="2024-09-04T17:12:42.114036484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.087148829s" Sep 4 17:12:42.114296 containerd[2016]: time="2024-09-04T17:12:42.114100204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\"" Sep 4 17:12:42.118171 containerd[2016]: time="2024-09-04T17:12:42.118085800Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 4 17:12:42.153284 containerd[2016]: time="2024-09-04T17:12:42.153215920Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469\"" Sep 4 17:12:42.154403 containerd[2016]: time="2024-09-04T17:12:42.154348576Z" level=info msg="StartContainer for \"dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469\"" Sep 4 17:12:42.221274 systemd[1]: Started 
cri-containerd-dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469.scope - libcontainer container dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469. Sep 4 17:12:42.286937 containerd[2016]: time="2024-09-04T17:12:42.285791464Z" level=info msg="StartContainer for \"dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469\" returns successfully" Sep 4 17:12:42.776520 kubelet[3249]: E0904 17:12:42.776454 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" Sep 4 17:12:43.629172 systemd[1]: cri-containerd-dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469.scope: Deactivated successfully. Sep 4 17:12:43.674168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469-rootfs.mount: Deactivated successfully. 
Sep 4 17:12:43.716906 kubelet[3249]: I0904 17:12:43.716624 3249 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Sep 4 17:12:43.756455 kubelet[3249]: I0904 17:12:43.755701 3249 topology_manager.go:215] "Topology Admit Handler" podUID="a4ac919e-9835-42b3-bf95-eb1b1e4767ec" podNamespace="kube-system" podName="coredns-76f75df574-smljl" Sep 4 17:12:43.772244 kubelet[3249]: I0904 17:12:43.769695 3249 topology_manager.go:215] "Topology Admit Handler" podUID="d2594d7c-f8ef-43c5-8979-10060070c099" podNamespace="calico-system" podName="calico-kube-controllers-5d57759979-d9ffj" Sep 4 17:12:43.773728 kubelet[3249]: I0904 17:12:43.773091 3249 topology_manager.go:215] "Topology Admit Handler" podUID="fa829edc-8daa-47fd-bdeb-0a042a3e6b58" podNamespace="kube-system" podName="coredns-76f75df574-qsgk9" Sep 4 17:12:43.788069 systemd[1]: Created slice kubepods-burstable-poda4ac919e_9835_42b3_bf95_eb1b1e4767ec.slice - libcontainer container kubepods-burstable-poda4ac919e_9835_42b3_bf95_eb1b1e4767ec.slice. Sep 4 17:12:43.825081 systemd[1]: Created slice kubepods-besteffort-podd2594d7c_f8ef_43c5_8979_10060070c099.slice - libcontainer container kubepods-besteffort-podd2594d7c_f8ef_43c5_8979_10060070c099.slice. Sep 4 17:12:43.840142 systemd[1]: Created slice kubepods-burstable-podfa829edc_8daa_47fd_bdeb_0a042a3e6b58.slice - libcontainer container kubepods-burstable-podfa829edc_8daa_47fd_bdeb_0a042a3e6b58.slice. 
Sep 4 17:12:43.912951 kubelet[3249]: I0904 17:12:43.912726 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4ac919e-9835-42b3-bf95-eb1b1e4767ec-config-volume\") pod \"coredns-76f75df574-smljl\" (UID: \"a4ac919e-9835-42b3-bf95-eb1b1e4767ec\") " pod="kube-system/coredns-76f75df574-smljl" Sep 4 17:12:43.912951 kubelet[3249]: I0904 17:12:43.912891 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fa829edc-8daa-47fd-bdeb-0a042a3e6b58-config-volume\") pod \"coredns-76f75df574-qsgk9\" (UID: \"fa829edc-8daa-47fd-bdeb-0a042a3e6b58\") " pod="kube-system/coredns-76f75df574-qsgk9" Sep 4 17:12:43.913680 kubelet[3249]: I0904 17:12:43.912971 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr2cd\" (UniqueName: \"kubernetes.io/projected/d2594d7c-f8ef-43c5-8979-10060070c099-kube-api-access-cr2cd\") pod \"calico-kube-controllers-5d57759979-d9ffj\" (UID: \"d2594d7c-f8ef-43c5-8979-10060070c099\") " pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" Sep 4 17:12:43.913680 kubelet[3249]: I0904 17:12:43.913031 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8csz\" (UniqueName: \"kubernetes.io/projected/fa829edc-8daa-47fd-bdeb-0a042a3e6b58-kube-api-access-v8csz\") pod \"coredns-76f75df574-qsgk9\" (UID: \"fa829edc-8daa-47fd-bdeb-0a042a3e6b58\") " pod="kube-system/coredns-76f75df574-qsgk9" Sep 4 17:12:43.913680 kubelet[3249]: I0904 17:12:43.913104 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9xb\" (UniqueName: \"kubernetes.io/projected/a4ac919e-9835-42b3-bf95-eb1b1e4767ec-kube-api-access-zq9xb\") pod \"coredns-76f75df574-smljl\" (UID: 
\"a4ac919e-9835-42b3-bf95-eb1b1e4767ec\") " pod="kube-system/coredns-76f75df574-smljl" Sep 4 17:12:43.913680 kubelet[3249]: I0904 17:12:43.913157 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2594d7c-f8ef-43c5-8979-10060070c099-tigera-ca-bundle\") pod \"calico-kube-controllers-5d57759979-d9ffj\" (UID: \"d2594d7c-f8ef-43c5-8979-10060070c099\") " pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" Sep 4 17:12:44.104181 containerd[2016]: time="2024-09-04T17:12:44.104117142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smljl,Uid:a4ac919e-9835-42b3-bf95-eb1b1e4767ec,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:44.108724 containerd[2016]: time="2024-09-04T17:12:44.108270846Z" level=info msg="shim disconnected" id=dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469 namespace=k8s.io Sep 4 17:12:44.108724 containerd[2016]: time="2024-09-04T17:12:44.108359430Z" level=warning msg="cleaning up after shim disconnected" id=dd800d2b86b53b73fb4a7b480d12366090e902040ac86348ebf6c821ebbdf469 namespace=k8s.io Sep 4 17:12:44.108724 containerd[2016]: time="2024-09-04T17:12:44.108381906Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 4 17:12:44.138537 containerd[2016]: time="2024-09-04T17:12:44.136977474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57759979-d9ffj,Uid:d2594d7c-f8ef-43c5-8979-10060070c099,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:44.153605 containerd[2016]: time="2024-09-04T17:12:44.153520782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsgk9,Uid:fa829edc-8daa-47fd-bdeb-0a042a3e6b58,Namespace:kube-system,Attempt:0,}" Sep 4 17:12:44.342046 containerd[2016]: time="2024-09-04T17:12:44.341785843Z" level=error msg="Failed to destroy network for sandbox 
\"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.343263 containerd[2016]: time="2024-09-04T17:12:44.343200055Z" level=error msg="encountered an error cleaning up failed sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.346312 containerd[2016]: time="2024-09-04T17:12:44.345989119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57759979-d9ffj,Uid:d2594d7c-f8ef-43c5-8979-10060070c099,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.349553 kubelet[3249]: E0904 17:12:44.349511 3249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.350214 kubelet[3249]: E0904 17:12:44.349975 3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" Sep 4 17:12:44.350214 kubelet[3249]: E0904 17:12:44.350029 3249 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" Sep 4 17:12:44.351890 kubelet[3249]: E0904 17:12:44.351041 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d57759979-d9ffj_calico-system(d2594d7c-f8ef-43c5-8979-10060070c099)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d57759979-d9ffj_calico-system(d2594d7c-f8ef-43c5-8979-10060070c099)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" podUID="d2594d7c-f8ef-43c5-8979-10060070c099" Sep 4 17:12:44.352457 containerd[2016]: time="2024-09-04T17:12:44.352280251Z" level=error msg="Failed to destroy network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 
17:12:44.356456 containerd[2016]: time="2024-09-04T17:12:44.356194195Z" level=error msg="encountered an error cleaning up failed sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.356456 containerd[2016]: time="2024-09-04T17:12:44.356308831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsgk9,Uid:fa829edc-8daa-47fd-bdeb-0a042a3e6b58,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.360883 kubelet[3249]: E0904 17:12:44.359049 3249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.360883 kubelet[3249]: E0904 17:12:44.359170 3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qsgk9" Sep 4 17:12:44.360883 kubelet[3249]: E0904 17:12:44.359218 3249 
kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qsgk9" Sep 4 17:12:44.361174 kubelet[3249]: E0904 17:12:44.359304 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-qsgk9_kube-system(fa829edc-8daa-47fd-bdeb-0a042a3e6b58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qsgk9_kube-system(fa829edc-8daa-47fd-bdeb-0a042a3e6b58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qsgk9" podUID="fa829edc-8daa-47fd-bdeb-0a042a3e6b58" Sep 4 17:12:44.369035 containerd[2016]: time="2024-09-04T17:12:44.368971711Z" level=error msg="Failed to destroy network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.370134 containerd[2016]: time="2024-09-04T17:12:44.369919123Z" level=error msg="encountered an error cleaning up failed sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.370134 containerd[2016]: time="2024-09-04T17:12:44.370027855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smljl,Uid:a4ac919e-9835-42b3-bf95-eb1b1e4767ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.370879 kubelet[3249]: E0904 17:12:44.370570 3249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.370879 kubelet[3249]: E0904 17:12:44.370670 3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-smljl" Sep 4 17:12:44.370879 kubelet[3249]: E0904 17:12:44.370708 3249 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-smljl" Sep 4 17:12:44.371088 kubelet[3249]: E0904 17:12:44.370788 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-smljl_kube-system(a4ac919e-9835-42b3-bf95-eb1b1e4767ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-smljl_kube-system(a4ac919e-9835-42b3-bf95-eb1b1e4767ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-smljl" podUID="a4ac919e-9835-42b3-bf95-eb1b1e4767ec" Sep 4 17:12:44.673868 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d-shm.mount: Deactivated successfully. Sep 4 17:12:44.787636 systemd[1]: Created slice kubepods-besteffort-pod30c214dc_77a9_494e_bbbc_1b760a49564b.slice - libcontainer container kubepods-besteffort-pod30c214dc_77a9_494e_bbbc_1b760a49564b.slice. 
Sep 4 17:12:44.792558 containerd[2016]: time="2024-09-04T17:12:44.792462657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6dqv,Uid:30c214dc-77a9-494e-bbbc-1b760a49564b,Namespace:calico-system,Attempt:0,}" Sep 4 17:12:44.909586 containerd[2016]: time="2024-09-04T17:12:44.909500314Z" level=error msg="Failed to destroy network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.910646 containerd[2016]: time="2024-09-04T17:12:44.910300762Z" level=error msg="encountered an error cleaning up failed sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.910646 containerd[2016]: time="2024-09-04T17:12:44.910434562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6dqv,Uid:30c214dc-77a9-494e-bbbc-1b760a49564b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.910923 kubelet[3249]: E0904 17:12:44.910768 3249 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 4 17:12:44.910923 kubelet[3249]: E0904 17:12:44.910883 3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6dqv" Sep 4 17:12:44.911088 kubelet[3249]: E0904 17:12:44.910928 3249 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s6dqv" Sep 4 17:12:44.911088 kubelet[3249]: E0904 17:12:44.911023 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s6dqv_calico-system(30c214dc-77a9-494e-bbbc-1b760a49564b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s6dqv_calico-system(30c214dc-77a9-494e-bbbc-1b760a49564b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b" Sep 4 17:12:44.917010 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7-shm.mount: 
Deactivated successfully.
Sep 4 17:12:45.065347 kubelet[3249]: I0904 17:12:45.065192 3249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:45.067868 containerd[2016]: time="2024-09-04T17:12:45.067124466Z" level=info msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\""
Sep 4 17:12:45.067868 containerd[2016]: time="2024-09-04T17:12:45.067593966Z" level=info msg="Ensure that sandbox 1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea in task-service has been cleanup successfully"
Sep 4 17:12:45.079668 containerd[2016]: time="2024-09-04T17:12:45.078772578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Sep 4 17:12:45.085157 kubelet[3249]: I0904 17:12:45.085072 3249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:45.086788 containerd[2016]: time="2024-09-04T17:12:45.085981290Z" level=info msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\""
Sep 4 17:12:45.086788 containerd[2016]: time="2024-09-04T17:12:45.086369358Z" level=info msg="Ensure that sandbox 9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d in task-service has been cleanup successfully"
Sep 4 17:12:45.091321 kubelet[3249]: I0904 17:12:45.091003 3249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7"
Sep 4 17:12:45.094901 containerd[2016]: time="2024-09-04T17:12:45.094748178Z" level=info msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\""
Sep 4 17:12:45.097015 containerd[2016]: time="2024-09-04T17:12:45.096376614Z" level=info msg="Ensure that sandbox d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7 in task-service has been cleanup successfully"
Sep 4 17:12:45.103482 kubelet[3249]: I0904 17:12:45.103222 3249 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1"
Sep 4 17:12:45.108823 containerd[2016]: time="2024-09-04T17:12:45.108732919Z" level=info msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\""
Sep 4 17:12:45.114775 containerd[2016]: time="2024-09-04T17:12:45.114401707Z" level=info msg="Ensure that sandbox 383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1 in task-service has been cleanup successfully"
Sep 4 17:12:45.232981 containerd[2016]: time="2024-09-04T17:12:45.231779515Z" level=error msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" failed" error="failed to destroy network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:45.233557 kubelet[3249]: E0904 17:12:45.233315 3249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:45.233557 kubelet[3249]: E0904 17:12:45.233386 3249 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"}
Sep 4 17:12:45.233557 kubelet[3249]: E0904 17:12:45.233449 3249 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4ac919e-9835-42b3-bf95-eb1b1e4767ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:45.233557 kubelet[3249]: E0904 17:12:45.233501 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4ac919e-9835-42b3-bf95-eb1b1e4767ec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-smljl" podUID="a4ac919e-9835-42b3-bf95-eb1b1e4767ec"
Sep 4 17:12:45.240655 containerd[2016]: time="2024-09-04T17:12:45.239810071Z" level=error msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" failed" error="failed to destroy network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:45.242795 kubelet[3249]: E0904 17:12:45.242230 3249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1"
Sep 4 17:12:45.242795 kubelet[3249]: E0904 17:12:45.242470 3249 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1"}
Sep 4 17:12:45.242795 kubelet[3249]: E0904 17:12:45.242722 3249 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fa829edc-8daa-47fd-bdeb-0a042a3e6b58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:45.243883 kubelet[3249]: E0904 17:12:45.243308 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fa829edc-8daa-47fd-bdeb-0a042a3e6b58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qsgk9" podUID="fa829edc-8daa-47fd-bdeb-0a042a3e6b58"
Sep 4 17:12:45.248490 containerd[2016]: time="2024-09-04T17:12:45.248422675Z" level=error msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" failed" error="failed to destroy network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:45.249887 kubelet[3249]: E0904 17:12:45.249101 3249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:45.249887 kubelet[3249]: E0904 17:12:45.249169 3249 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"}
Sep 4 17:12:45.249887 kubelet[3249]: E0904 17:12:45.249251 3249 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d2594d7c-f8ef-43c5-8979-10060070c099\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:45.249887 kubelet[3249]: E0904 17:12:45.249302 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d2594d7c-f8ef-43c5-8979-10060070c099\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" podUID="d2594d7c-f8ef-43c5-8979-10060070c099"
Sep 4 17:12:45.254877 containerd[2016]: time="2024-09-04T17:12:45.254705035Z" level=error msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" failed" error="failed to destroy network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 4 17:12:45.255170 kubelet[3249]: E0904 17:12:45.255111 3249 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7"
Sep 4 17:12:45.255274 kubelet[3249]: E0904 17:12:45.255171 3249 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7"}
Sep 4 17:12:45.255274 kubelet[3249]: E0904 17:12:45.255239 3249 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30c214dc-77a9-494e-bbbc-1b760a49564b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Sep 4 17:12:45.255430 kubelet[3249]: E0904 17:12:45.255295 3249 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30c214dc-77a9-494e-bbbc-1b760a49564b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s6dqv" podUID="30c214dc-77a9-494e-bbbc-1b760a49564b"
Sep 4 17:12:50.277583 systemd[1]: Started sshd@7-172.31.31.13:22-139.178.89.65:39614.service - OpenSSH per-connection server daemon (139.178.89.65:39614).
Sep 4 17:12:50.465345 sshd[4687]: Accepted publickey for core from 139.178.89.65 port 39614 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:12:50.470516 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:12:50.487602 systemd-logind[1992]: New session 8 of user core.
Sep 4 17:12:50.494181 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 4 17:12:50.820548 sshd[4687]: pam_unix(sshd:session): session closed for user core
Sep 4 17:12:50.830098 systemd[1]: sshd@7-172.31.31.13:22-139.178.89.65:39614.service: Deactivated successfully.
Sep 4 17:12:50.837389 systemd[1]: session-8.scope: Deactivated successfully.
Sep 4 17:12:50.839870 systemd-logind[1992]: Session 8 logged out. Waiting for processes to exit.
Sep 4 17:12:50.843575 systemd-logind[1992]: Removed session 8.
Sep 4 17:12:51.243173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515513832.mount: Deactivated successfully.
Sep 4 17:12:51.312304 containerd[2016]: time="2024-09-04T17:12:51.312145489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:51.314880 containerd[2016]: time="2024-09-04T17:12:51.313846777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300"
Sep 4 17:12:51.316443 containerd[2016]: time="2024-09-04T17:12:51.316347025Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:51.326668 containerd[2016]: time="2024-09-04T17:12:51.326588617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:12:51.329526 containerd[2016]: time="2024-09-04T17:12:51.329449201Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.249831859s"
Sep 4 17:12:51.329526 containerd[2016]: time="2024-09-04T17:12:51.329521717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\""
Sep 4 17:12:51.355294 containerd[2016]: time="2024-09-04T17:12:51.355223378Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 4 17:12:51.406768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1379167600.mount: Deactivated successfully.
Sep 4 17:12:51.413007 containerd[2016]: time="2024-09-04T17:12:51.412316474Z" level=info msg="CreateContainer within sandbox \"89421789f9bcb94d0c35b467339b61a8512c7ce84f56fd421a7b2f3d05578a59\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a\""
Sep 4 17:12:51.419107 containerd[2016]: time="2024-09-04T17:12:51.417777194Z" level=info msg="StartContainer for \"5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a\""
Sep 4 17:12:51.473204 systemd[1]: Started cri-containerd-5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a.scope - libcontainer container 5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a.
Sep 4 17:12:51.547887 containerd[2016]: time="2024-09-04T17:12:51.544642598Z" level=info msg="StartContainer for \"5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a\" returns successfully"
Sep 4 17:12:51.676049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 4 17:12:51.676188 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 4 17:12:52.271710 kubelet[3249]: I0904 17:12:52.271649 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9hwxs" podStartSLOduration=2.967691764 podStartE2EDuration="16.27158681s" podCreationTimestamp="2024-09-04 17:12:36 +0000 UTC" firstStartedPulling="2024-09-04 17:12:38.026016995 +0000 UTC m=+30.517307972" lastFinishedPulling="2024-09-04 17:12:51.329912017 +0000 UTC m=+43.821203018" observedRunningTime="2024-09-04 17:12:52.27043187 +0000 UTC m=+44.761722883" watchObservedRunningTime="2024-09-04 17:12:52.27158681 +0000 UTC m=+44.762877811"
Sep 4 17:12:53.182526 systemd[1]: run-containerd-runc-k8s.io-5b4f4356f3ed071c984f61fed9f3f35181b146bb17f38188073fc84bd5494b1a-runc.jQqKFw.mount: Deactivated successfully.
Sep 4 17:12:53.976891 kernel: bpftool[4928]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 4 17:12:54.316514 systemd-networkd[1931]: vxlan.calico: Link UP
Sep 4 17:12:54.316530 systemd-networkd[1931]: vxlan.calico: Gained carrier
Sep 4 17:12:54.319375 (udev-worker)[4739]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:54.373566 (udev-worker)[4959]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:55.862391 systemd[1]: Started sshd@8-172.31.31.13:22-139.178.89.65:39626.service - OpenSSH per-connection server daemon (139.178.89.65:39626).
Sep 4 17:12:56.047105 sshd[4999]: Accepted publickey for core from 139.178.89.65 port 39626 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:12:56.050355 sshd[4999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:12:56.062740 systemd-logind[1992]: New session 9 of user core.
Sep 4 17:12:56.068176 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 4 17:12:56.325077 systemd-networkd[1931]: vxlan.calico: Gained IPv6LL
Sep 4 17:12:56.326214 sshd[4999]: pam_unix(sshd:session): session closed for user core
Sep 4 17:12:56.334688 systemd[1]: sshd@8-172.31.31.13:22-139.178.89.65:39626.service: Deactivated successfully.
Sep 4 17:12:56.339318 systemd[1]: session-9.scope: Deactivated successfully.
Sep 4 17:12:56.342612 systemd-logind[1992]: Session 9 logged out. Waiting for processes to exit.
Sep 4 17:12:56.345348 systemd-logind[1992]: Removed session 9.
Sep 4 17:12:57.779189 containerd[2016]: time="2024-09-04T17:12:57.778942605Z" level=info msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\""
Sep 4 17:12:57.783215 containerd[2016]: time="2024-09-04T17:12:57.780886173Z" level=info msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\""
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.924 [INFO][5042] k8s.go 608: Cleaning up netns ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.926 [INFO][5042] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" iface="eth0" netns="/var/run/netns/cni-d96983e4-b395-3d85-5322-fc4639b7c3e7"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.929 [INFO][5042] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" iface="eth0" netns="/var/run/netns/cni-d96983e4-b395-3d85-5322-fc4639b7c3e7"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.930 [INFO][5042] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" iface="eth0" netns="/var/run/netns/cni-d96983e4-b395-3d85-5322-fc4639b7c3e7"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.930 [INFO][5042] k8s.go 615: Releasing IP address(es) ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.930 [INFO][5042] utils.go 188: Calico CNI releasing IP address ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.982 [INFO][5051] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.983 [INFO][5051] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.983 [INFO][5051] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.995 [WARNING][5051] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.995 [INFO][5051] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:57.998 [INFO][5051] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:58.015573 containerd[2016]: 2024-09-04 17:12:58.012 [INFO][5042] k8s.go 621: Teardown processing complete. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea"
Sep 4 17:12:58.020215 containerd[2016]: time="2024-09-04T17:12:58.018898951Z" level=info msg="TearDown network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" successfully"
Sep 4 17:12:58.020215 containerd[2016]: time="2024-09-04T17:12:58.019878763Z" level=info msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" returns successfully"
Sep 4 17:12:58.026298 containerd[2016]: time="2024-09-04T17:12:58.025959811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57759979-d9ffj,Uid:d2594d7c-f8ef-43c5-8979-10060070c099,Namespace:calico-system,Attempt:1,}"
Sep 4 17:12:58.027148 systemd[1]: run-netns-cni\x2dd96983e4\x2db395\x2d3d85\x2d5322\x2dfc4639b7c3e7.mount: Deactivated successfully.
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.916 [INFO][5034] k8s.go 608: Cleaning up netns ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.918 [INFO][5034] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" iface="eth0" netns="/var/run/netns/cni-711bba3b-b29b-5da1-3718-78fa90262818"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.920 [INFO][5034] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" iface="eth0" netns="/var/run/netns/cni-711bba3b-b29b-5da1-3718-78fa90262818"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.920 [INFO][5034] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" iface="eth0" netns="/var/run/netns/cni-711bba3b-b29b-5da1-3718-78fa90262818"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.920 [INFO][5034] k8s.go 615: Releasing IP address(es) ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.920 [INFO][5034] utils.go 188: Calico CNI releasing IP address ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.984 [INFO][5050] ipam_plugin.go 417: Releasing address using handleID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.984 [INFO][5050] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:57.998 [INFO][5050] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:58.022 [WARNING][5050] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:58.022 [INFO][5050] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0"
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:58.028 [INFO][5050] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:58.044190 containerd[2016]: 2024-09-04 17:12:58.037 [INFO][5034] k8s.go 621: Teardown processing complete. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d"
Sep 4 17:12:58.048074 containerd[2016]: time="2024-09-04T17:12:58.044950759Z" level=info msg="TearDown network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" successfully"
Sep 4 17:12:58.048074 containerd[2016]: time="2024-09-04T17:12:58.045000031Z" level=info msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" returns successfully"
Sep 4 17:12:58.049432 containerd[2016]: time="2024-09-04T17:12:58.049113415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smljl,Uid:a4ac919e-9835-42b3-bf95-eb1b1e4767ec,Namespace:kube-system,Attempt:1,}"
Sep 4 17:12:58.052527 systemd[1]: run-netns-cni\x2d711bba3b\x2db29b\x2d5da1\x2d3718\x2d78fa90262818.mount: Deactivated successfully.
Sep 4 17:12:58.434422 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.72.0:123
Sep 4 17:12:58.436405 ntpd[1987]: 4 Sep 17:12:58 ntpd[1987]: Listen normally on 8 vxlan.calico 192.168.72.0:123
Sep 4 17:12:58.436405 ntpd[1987]: 4 Sep 17:12:58 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::6433:60ff:fe1a:3a9a%4]:123
Sep 4 17:12:58.434565 ntpd[1987]: Listen normally on 9 vxlan.calico [fe80::6433:60ff:fe1a:3a9a%4]:123
Sep 4 17:12:58.506394 systemd-networkd[1931]: caliab2c1ec2c06: Link UP
Sep 4 17:12:58.509120 systemd-networkd[1931]: caliab2c1ec2c06: Gained carrier
Sep 4 17:12:58.517740 (udev-worker)[5100]: Network interface NamePolicy= disabled on kernel command line.
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.193 [INFO][5063] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0 calico-kube-controllers-5d57759979- calico-system d2594d7c-f8ef-43c5-8979-10060070c099 871 0 2024-09-04 17:12:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d57759979 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-13 calico-kube-controllers-5d57759979-d9ffj eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliab2c1ec2c06 [] []}} ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.194 [INFO][5063] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.398 [INFO][5085] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" HandleID="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.425 [INFO][5085] ipam_plugin.go 270: Auto assigning IP ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" HandleID="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003059a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-13", "pod":"calico-kube-controllers-5d57759979-d9ffj", "timestamp":"2024-09-04 17:12:58.398548113 +0000 UTC"}, Hostname:"ip-172-31-31-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.425 [INFO][5085] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.425 [INFO][5085] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.425 [INFO][5085] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-13'
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.429 [INFO][5085] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.442 [INFO][5085] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.465 [INFO][5085] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.469 [INFO][5085] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.473 [INFO][5085] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.473 [INFO][5085] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.476 [INFO][5085] ipam.go 1685: Creating new handle: k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.481 [INFO][5085] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.490 [INFO][5085] ipam.go 1216: Successfully claimed IPs: [192.168.72.1/26] block=192.168.72.0/26 handle="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.490 [INFO][5085] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.1/26] handle="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" host="ip-172-31-31-13"
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.490 [INFO][5085] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:12:58.541248 containerd[2016]: 2024-09-04 17:12:58.490 [INFO][5085] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.72.1/26] IPv6=[] ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" HandleID="k8s-pod-network.6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.499 [INFO][5063] k8s.go 386: Populated endpoint ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0", GenerateName:"calico-kube-controllers-5d57759979-", Namespace:"calico-system", SelfLink:"", UID:"d2594d7c-f8ef-43c5-8979-10060070c099", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57759979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"", Pod:"calico-kube-controllers-5d57759979-d9ffj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2c1ec2c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.500 [INFO][5063] k8s.go 387: Calico CNI using IPs: [192.168.72.1/32] ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.500 [INFO][5063] dataplane_linux.go 68: Setting the host side veth name to caliab2c1ec2c06 ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.508 [INFO][5063] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0"
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.510 [INFO][5063] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0", GenerateName:"calico-kube-controllers-5d57759979-", Namespace:"calico-system", SelfLink:"", UID:"d2594d7c-f8ef-43c5-8979-10060070c099", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57759979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e", Pod:"calico-kube-controllers-5d57759979-d9ffj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2c1ec2c06", MAC:"42:2b:37:46:d6:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:12:58.542568 containerd[2016]: 2024-09-04 17:12:58.530 [INFO][5063] k8s.go 500: Wrote updated endpoint
to datastore ContainerID="6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e" Namespace="calico-system" Pod="calico-kube-controllers-5d57759979-d9ffj" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:12:58.607050 (udev-worker)[5107]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:12:58.612594 systemd-networkd[1931]: cali9551586dae3: Link UP Sep 4 17:12:58.618610 systemd-networkd[1931]: cali9551586dae3: Gained carrier Sep 4 17:12:58.643565 containerd[2016]: time="2024-09-04T17:12:58.643244146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:58.643565 containerd[2016]: time="2024-09-04T17:12:58.643358422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:58.643565 containerd[2016]: time="2024-09-04T17:12:58.643421518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:58.643565 containerd[2016]: time="2024-09-04T17:12:58.643473826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.291 [INFO][5077] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0 coredns-76f75df574- kube-system a4ac919e-9835-42b3-bf95-eb1b1e4767ec 870 0 2024-09-04 17:12:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-13 coredns-76f75df574-smljl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9551586dae3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.291 [INFO][5077] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.444 [INFO][5090] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" HandleID="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.471 [INFO][5090] ipam_plugin.go 270: Auto assigning IP ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" HandleID="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" 
Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003d85d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-13", "pod":"coredns-76f75df574-smljl", "timestamp":"2024-09-04 17:12:58.444120921 +0000 UTC"}, Hostname:"ip-172-31-31-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.471 [INFO][5090] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.491 [INFO][5090] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.491 [INFO][5090] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-13' Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.496 [INFO][5090] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.510 [INFO][5090] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.528 [INFO][5090] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.537 [INFO][5090] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.562 [INFO][5090] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.562 [INFO][5090] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 
handle="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.565 [INFO][5090] ipam.go 1685: Creating new handle: k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129 Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.577 [INFO][5090] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.589 [INFO][5090] ipam.go 1216: Successfully claimed IPs: [192.168.72.2/26] block=192.168.72.0/26 handle="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.590 [INFO][5090] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.2/26] handle="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" host="ip-172-31-31-13" Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.591 [INFO][5090] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:12:58.679705 containerd[2016]: 2024-09-04 17:12:58.591 [INFO][5090] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.72.2/26] IPv6=[] ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" HandleID="k8s-pod-network.54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.598 [INFO][5077] k8s.go 386: Populated endpoint ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a4ac919e-9835-42b3-bf95-eb1b1e4767ec", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"", Pod:"coredns-76f75df574-smljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9551586dae3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.598 [INFO][5077] k8s.go 387: Calico CNI using IPs: [192.168.72.2/32] ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.598 [INFO][5077] dataplane_linux.go 68: Setting the host side veth name to cali9551586dae3 ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.624 [INFO][5077] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.633 [INFO][5077] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a4ac919e-9835-42b3-bf95-eb1b1e4767ec", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129", Pod:"coredns-76f75df574-smljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9551586dae3", MAC:"36:73:bb:f6:60:52", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:58.681010 containerd[2016]: 2024-09-04 17:12:58.661 [INFO][5077] k8s.go 500: Wrote updated endpoint to datastore ContainerID="54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129" Namespace="kube-system" 
Pod="coredns-76f75df574-smljl" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:12:58.704744 systemd[1]: Started cri-containerd-6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e.scope - libcontainer container 6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e. Sep 4 17:12:58.761959 containerd[2016]: time="2024-09-04T17:12:58.761317762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:58.761959 containerd[2016]: time="2024-09-04T17:12:58.761673466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:58.761959 containerd[2016]: time="2024-09-04T17:12:58.761708062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:58.761959 containerd[2016]: time="2024-09-04T17:12:58.761761366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:58.780368 containerd[2016]: time="2024-09-04T17:12:58.779462050Z" level=info msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\"" Sep 4 17:12:58.808456 systemd[1]: Started cri-containerd-54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129.scope - libcontainer container 54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129. 
Sep 4 17:12:58.858699 containerd[2016]: time="2024-09-04T17:12:58.858641051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d57759979-d9ffj,Uid:d2594d7c-f8ef-43c5-8979-10060070c099,Namespace:calico-system,Attempt:1,} returns sandbox id \"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e\"" Sep 4 17:12:58.863231 containerd[2016]: time="2024-09-04T17:12:58.863157683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Sep 4 17:12:58.918976 containerd[2016]: time="2024-09-04T17:12:58.918511523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-smljl,Uid:a4ac919e-9835-42b3-bf95-eb1b1e4767ec,Namespace:kube-system,Attempt:1,} returns sandbox id \"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129\"" Sep 4 17:12:58.925481 containerd[2016]: time="2024-09-04T17:12:58.925363355Z" level=info msg="CreateContainer within sandbox \"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:12:58.954505 containerd[2016]: time="2024-09-04T17:12:58.954373787Z" level=info msg="CreateContainer within sandbox \"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc210a95af1b38e1d46fc25d469ecba01d79b2925c2fa97790dc979668624782\"" Sep 4 17:12:58.956166 containerd[2016]: time="2024-09-04T17:12:58.955458455Z" level=info msg="StartContainer for \"cc210a95af1b38e1d46fc25d469ecba01d79b2925c2fa97790dc979668624782\"" Sep 4 17:12:59.011190 systemd[1]: Started cri-containerd-cc210a95af1b38e1d46fc25d469ecba01d79b2925c2fa97790dc979668624782.scope - libcontainer container cc210a95af1b38e1d46fc25d469ecba01d79b2925c2fa97790dc979668624782. 
Sep 4 17:12:59.095921 containerd[2016]: time="2024-09-04T17:12:59.094687460Z" level=info msg="StartContainer for \"cc210a95af1b38e1d46fc25d469ecba01d79b2925c2fa97790dc979668624782\" returns successfully" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.062 [INFO][5205] k8s.go 608: Cleaning up netns ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.062 [INFO][5205] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" iface="eth0" netns="/var/run/netns/cni-c3c65bb5-b976-1132-699c-60c8c2b607b2" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.062 [INFO][5205] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" iface="eth0" netns="/var/run/netns/cni-c3c65bb5-b976-1132-699c-60c8c2b607b2" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.063 [INFO][5205] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" iface="eth0" netns="/var/run/netns/cni-c3c65bb5-b976-1132-699c-60c8c2b607b2" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.063 [INFO][5205] k8s.go 615: Releasing IP address(es) ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.063 [INFO][5205] utils.go 188: Calico CNI releasing IP address ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.128 [INFO][5255] ipam_plugin.go 417: Releasing address using handleID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.129 [INFO][5255] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.129 [INFO][5255] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.143 [WARNING][5255] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.143 [INFO][5255] ipam_plugin.go 445: Releasing address using workloadID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.147 [INFO][5255] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:59.160190 containerd[2016]: 2024-09-04 17:12:59.156 [INFO][5205] k8s.go 621: Teardown processing complete. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:12:59.163257 containerd[2016]: time="2024-09-04T17:12:59.163034936Z" level=info msg="TearDown network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" successfully" Sep 4 17:12:59.163257 containerd[2016]: time="2024-09-04T17:12:59.163099700Z" level=info msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" returns successfully" Sep 4 17:12:59.169149 systemd[1]: run-netns-cni\x2dc3c65bb5\x2db976\x2d1132\x2d699c\x2d60c8c2b607b2.mount: Deactivated successfully. 
Sep 4 17:12:59.173356 containerd[2016]: time="2024-09-04T17:12:59.169498184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsgk9,Uid:fa829edc-8daa-47fd-bdeb-0a042a3e6b58,Namespace:kube-system,Attempt:1,}" Sep 4 17:12:59.477752 systemd-networkd[1931]: calibeef53caa2a: Link UP Sep 4 17:12:59.478878 systemd-networkd[1931]: calibeef53caa2a: Gained carrier Sep 4 17:12:59.506449 kubelet[3249]: I0904 17:12:59.504970 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-smljl" podStartSLOduration=38.504238306 podStartE2EDuration="38.504238306s" podCreationTimestamp="2024-09-04 17:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:12:59.253324497 +0000 UTC m=+51.744615498" watchObservedRunningTime="2024-09-04 17:12:59.504238306 +0000 UTC m=+51.995529319" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.341 [INFO][5275] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0 coredns-76f75df574- kube-system fa829edc-8daa-47fd-bdeb-0a042a3e6b58 886 0 2024-09-04 17:12:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-13 coredns-76f75df574-qsgk9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibeef53caa2a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.341 [INFO][5275] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.403 [INFO][5287] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" HandleID="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.422 [INFO][5287] ipam_plugin.go 270: Auto assigning IP ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" HandleID="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000376480), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-13", "pod":"coredns-76f75df574-qsgk9", "timestamp":"2024-09-04 17:12:59.40353649 +0000 UTC"}, Hostname:"ip-172-31-31-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.422 [INFO][5287] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.423 [INFO][5287] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.423 [INFO][5287] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-13' Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.426 [INFO][5287] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.433 [INFO][5287] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.441 [INFO][5287] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.445 [INFO][5287] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.449 [INFO][5287] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.449 [INFO][5287] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.452 [INFO][5287] ipam.go 1685: Creating new handle: k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.458 [INFO][5287] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.468 [INFO][5287] ipam.go 1216: Successfully claimed IPs: [192.168.72.3/26] block=192.168.72.0/26 
handle="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.468 [INFO][5287] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.3/26] handle="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" host="ip-172-31-31-13" Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.469 [INFO][5287] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:12:59.516724 containerd[2016]: 2024-09-04 17:12:59.469 [INFO][5287] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.72.3/26] IPv6=[] ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" HandleID="k8s-pod-network.42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.473 [INFO][5275] k8s.go 386: Populated endpoint ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fa829edc-8daa-47fd-bdeb-0a042a3e6b58", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"", Pod:"coredns-76f75df574-qsgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeef53caa2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.473 [INFO][5275] k8s.go 387: Calico CNI using IPs: [192.168.72.3/32] ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.473 [INFO][5275] dataplane_linux.go 68: Setting the host side veth name to calibeef53caa2a ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.478 [INFO][5275] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" 
WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.480 [INFO][5275] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fa829edc-8daa-47fd-bdeb-0a042a3e6b58", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e", Pod:"coredns-76f75df574-qsgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeef53caa2a", MAC:"66:f2:7b:b9:d6:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:12:59.519335 containerd[2016]: 2024-09-04 17:12:59.501 [INFO][5275] k8s.go 500: Wrote updated endpoint to datastore ContainerID="42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e" Namespace="kube-system" Pod="coredns-76f75df574-qsgk9" WorkloadEndpoint="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:12:59.566680 containerd[2016]: time="2024-09-04T17:12:59.565911550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:12:59.566680 containerd[2016]: time="2024-09-04T17:12:59.566183650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:59.566680 containerd[2016]: time="2024-09-04T17:12:59.566288578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:12:59.566680 containerd[2016]: time="2024-09-04T17:12:59.566447554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:12:59.610297 systemd[1]: Started cri-containerd-42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e.scope - libcontainer container 42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e. 
Sep 4 17:12:59.677584 containerd[2016]: time="2024-09-04T17:12:59.677527679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qsgk9,Uid:fa829edc-8daa-47fd-bdeb-0a042a3e6b58,Namespace:kube-system,Attempt:1,} returns sandbox id \"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e\"" Sep 4 17:12:59.682133 containerd[2016]: time="2024-09-04T17:12:59.681910703Z" level=info msg="CreateContainer within sandbox \"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 4 17:12:59.718215 containerd[2016]: time="2024-09-04T17:12:59.718101491Z" level=info msg="CreateContainer within sandbox \"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef0676219f94b679e3534138f3ded461b5e9979ddb0c93f3df87e83d48bbf046\"" Sep 4 17:12:59.718869 containerd[2016]: time="2024-09-04T17:12:59.718778495Z" level=info msg="StartContainer for \"ef0676219f94b679e3534138f3ded461b5e9979ddb0c93f3df87e83d48bbf046\"" Sep 4 17:12:59.778161 systemd[1]: Started cri-containerd-ef0676219f94b679e3534138f3ded461b5e9979ddb0c93f3df87e83d48bbf046.scope - libcontainer container ef0676219f94b679e3534138f3ded461b5e9979ddb0c93f3df87e83d48bbf046. 
Sep 4 17:12:59.835758 containerd[2016]: time="2024-09-04T17:12:59.835425372Z" level=info msg="StartContainer for \"ef0676219f94b679e3534138f3ded461b5e9979ddb0c93f3df87e83d48bbf046\" returns successfully" Sep 4 17:12:59.845209 systemd-networkd[1931]: cali9551586dae3: Gained IPv6LL Sep 4 17:13:00.240884 kubelet[3249]: I0904 17:13:00.240800 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qsgk9" podStartSLOduration=39.240742702 podStartE2EDuration="39.240742702s" podCreationTimestamp="2024-09-04 17:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-09-04 17:13:00.217348774 +0000 UTC m=+52.708639787" watchObservedRunningTime="2024-09-04 17:13:00.240742702 +0000 UTC m=+52.732033703" Sep 4 17:13:00.421257 systemd-networkd[1931]: caliab2c1ec2c06: Gained IPv6LL Sep 4 17:13:00.613117 systemd-networkd[1931]: calibeef53caa2a: Gained IPv6LL Sep 4 17:13:00.778077 containerd[2016]: time="2024-09-04T17:13:00.777824280Z" level=info msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\"" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.879 [INFO][5414] k8s.go 608: Cleaning up netns ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.879 [INFO][5414] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" iface="eth0" netns="/var/run/netns/cni-57e08886-be3b-be0f-6510-efaa36d19df7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.880 [INFO][5414] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" iface="eth0" netns="/var/run/netns/cni-57e08886-be3b-be0f-6510-efaa36d19df7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.881 [INFO][5414] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" iface="eth0" netns="/var/run/netns/cni-57e08886-be3b-be0f-6510-efaa36d19df7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.881 [INFO][5414] k8s.go 615: Releasing IP address(es) ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.881 [INFO][5414] utils.go 188: Calico CNI releasing IP address ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.932 [INFO][5420] ipam_plugin.go 417: Releasing address using handleID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.932 [INFO][5420] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.933 [INFO][5420] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.948 [WARNING][5420] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.948 [INFO][5420] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.951 [INFO][5420] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:00.958554 containerd[2016]: 2024-09-04 17:13:00.954 [INFO][5414] k8s.go 621: Teardown processing complete. ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:00.964436 containerd[2016]: time="2024-09-04T17:13:00.964206661Z" level=info msg="TearDown network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" successfully" Sep 4 17:13:00.964436 containerd[2016]: time="2024-09-04T17:13:00.964275421Z" level=info msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" returns successfully" Sep 4 17:13:00.965541 containerd[2016]: time="2024-09-04T17:13:00.965355793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6dqv,Uid:30c214dc-77a9-494e-bbbc-1b760a49564b,Namespace:calico-system,Attempt:1,}" Sep 4 17:13:00.968246 systemd[1]: run-netns-cni\x2d57e08886\x2dbe3b\x2dbe0f\x2d6510\x2defaa36d19df7.mount: Deactivated successfully. 
Sep 4 17:13:01.275371 systemd-networkd[1931]: calibf3dfd6459e: Link UP Sep 4 17:13:01.283942 systemd-networkd[1931]: calibf3dfd6459e: Gained carrier Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.091 [INFO][5428] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0 csi-node-driver- calico-system 30c214dc-77a9-494e-bbbc-1b760a49564b 920 0 2024-09-04 17:12:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-31-13 csi-node-driver-s6dqv eth0 default [] [] [kns.calico-system ksa.calico-system.default] calibf3dfd6459e [] []}} ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.091 [INFO][5428] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.169 [INFO][5438] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" HandleID="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.193 [INFO][5438] ipam_plugin.go 270: Auto assigning IP ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" 
HandleID="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000347d50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-13", "pod":"csi-node-driver-s6dqv", "timestamp":"2024-09-04 17:13:01.16920553 +0000 UTC"}, Hostname:"ip-172-31-31-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.193 [INFO][5438] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.194 [INFO][5438] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.194 [INFO][5438] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-13' Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.203 [INFO][5438] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.213 [INFO][5438] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.225 [INFO][5438] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.233 [INFO][5438] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.241 [INFO][5438] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.241 
[INFO][5438] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.245 [INFO][5438] ipam.go 1685: Creating new handle: k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.252 [INFO][5438] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.263 [INFO][5438] ipam.go 1216: Successfully claimed IPs: [192.168.72.4/26] block=192.168.72.0/26 handle="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.263 [INFO][5438] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.4/26] handle="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" host="ip-172-31-31-13" Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.264 [INFO][5438] ipam_plugin.go 379: Released host-wide IPAM lock. 
Sep 4 17:13:01.320330 containerd[2016]: 2024-09-04 17:13:01.264 [INFO][5438] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.72.4/26] IPv6=[] ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" HandleID="k8s-pod-network.30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.268 [INFO][5428] k8s.go 386: Populated endpoint ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c214dc-77a9-494e-bbbc-1b760a49564b", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"", Pod:"csi-node-driver-s6dqv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibf3dfd6459e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.269 [INFO][5428] k8s.go 387: Calico CNI using IPs: [192.168.72.4/32] ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.269 [INFO][5428] dataplane_linux.go 68: Setting the host side veth name to calibf3dfd6459e ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.281 [INFO][5428] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.283 [INFO][5428] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c214dc-77a9-494e-bbbc-1b760a49564b", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 30, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a", Pod:"csi-node-driver-s6dqv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibf3dfd6459e", MAC:"c6:55:55:df:9f:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:01.321596 containerd[2016]: 2024-09-04 17:13:01.315 [INFO][5428] k8s.go 500: Wrote updated endpoint to datastore ContainerID="30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a" Namespace="calico-system" Pod="csi-node-driver-s6dqv" WorkloadEndpoint="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:01.372373 systemd[1]: Started sshd@9-172.31.31.13:22-139.178.89.65:40720.service - OpenSSH per-connection server daemon (139.178.89.65:40720). Sep 4 17:13:01.417798 containerd[2016]: time="2024-09-04T17:13:01.417511932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 4 17:13:01.417798 containerd[2016]: time="2024-09-04T17:13:01.417647928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:01.418090 containerd[2016]: time="2024-09-04T17:13:01.417883440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 4 17:13:01.418090 containerd[2016]: time="2024-09-04T17:13:01.417947616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 4 17:13:01.475159 systemd[1]: Started cri-containerd-30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a.scope - libcontainer container 30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a. Sep 4 17:13:01.525478 containerd[2016]: time="2024-09-04T17:13:01.525387780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s6dqv,Uid:30c214dc-77a9-494e-bbbc-1b760a49564b,Namespace:calico-system,Attempt:1,} returns sandbox id \"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a\"" Sep 4 17:13:01.595440 sshd[5461]: Accepted publickey for core from 139.178.89.65 port 40720 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:01.597286 sshd[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:01.607969 systemd-logind[1992]: New session 10 of user core. Sep 4 17:13:01.613074 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 4 17:13:01.881763 sshd[5461]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:01.889132 systemd[1]: sshd@9-172.31.31.13:22-139.178.89.65:40720.service: Deactivated successfully. Sep 4 17:13:01.892631 systemd[1]: session-10.scope: Deactivated successfully. Sep 4 17:13:01.894565 systemd-logind[1992]: Session 10 logged out. Waiting for processes to exit. Sep 4 17:13:01.896702 systemd-logind[1992]: Removed session 10. 
Sep 4 17:13:02.725568 systemd-networkd[1931]: calibf3dfd6459e: Gained IPv6LL Sep 4 17:13:03.065211 containerd[2016]: time="2024-09-04T17:13:03.065032704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:03.067502 containerd[2016]: time="2024-09-04T17:13:03.067406220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Sep 4 17:13:03.069297 containerd[2016]: time="2024-09-04T17:13:03.069171000Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:03.076955 containerd[2016]: time="2024-09-04T17:13:03.075434472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:03.083094 containerd[2016]: time="2024-09-04T17:13:03.083003592Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 4.219778193s" Sep 4 17:13:03.083291 containerd[2016]: time="2024-09-04T17:13:03.083087400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Sep 4 17:13:03.088210 containerd[2016]: time="2024-09-04T17:13:03.088136220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Sep 4 17:13:03.123444 containerd[2016]: 
time="2024-09-04T17:13:03.123389052Z" level=info msg="CreateContainer within sandbox \"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 4 17:13:03.151907 containerd[2016]: time="2024-09-04T17:13:03.151788600Z" level=info msg="CreateContainer within sandbox \"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa\"" Sep 4 17:13:03.153306 containerd[2016]: time="2024-09-04T17:13:03.153180624Z" level=info msg="StartContainer for \"815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa\"" Sep 4 17:13:03.221345 systemd[1]: Started cri-containerd-815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa.scope - libcontainer container 815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa. Sep 4 17:13:03.331676 containerd[2016]: time="2024-09-04T17:13:03.331423213Z" level=info msg="StartContainer for \"815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa\" returns successfully" Sep 4 17:13:04.447788 kubelet[3249]: I0904 17:13:04.447719 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d57759979-d9ffj" podStartSLOduration=27.225803014 podStartE2EDuration="31.447647259s" podCreationTimestamp="2024-09-04 17:12:33 +0000 UTC" firstStartedPulling="2024-09-04 17:12:58.861903911 +0000 UTC m=+51.353194888" lastFinishedPulling="2024-09-04 17:13:03.083748144 +0000 UTC m=+55.575039133" observedRunningTime="2024-09-04 17:13:04.32757569 +0000 UTC m=+56.818866715" watchObservedRunningTime="2024-09-04 17:13:04.447647259 +0000 UTC m=+56.938938260" Sep 4 17:13:05.434556 ntpd[1987]: Listen normally on 10 caliab2c1ec2c06 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:13:05.434733 ntpd[1987]: Listen normally on 11 cali9551586dae3 
[fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:13:05.435369 ntpd[1987]: 4 Sep 17:13:05 ntpd[1987]: Listen normally on 10 caliab2c1ec2c06 [fe80::ecee:eeff:feee:eeee%7]:123 Sep 4 17:13:05.435369 ntpd[1987]: 4 Sep 17:13:05 ntpd[1987]: Listen normally on 11 cali9551586dae3 [fe80::ecee:eeff:feee:eeee%8]:123 Sep 4 17:13:05.435369 ntpd[1987]: 4 Sep 17:13:05 ntpd[1987]: Listen normally on 12 calibeef53caa2a [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:13:05.435369 ntpd[1987]: 4 Sep 17:13:05 ntpd[1987]: Listen normally on 13 calibf3dfd6459e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:13:05.434811 ntpd[1987]: Listen normally on 12 calibeef53caa2a [fe80::ecee:eeff:feee:eeee%9]:123 Sep 4 17:13:05.434921 ntpd[1987]: Listen normally on 13 calibf3dfd6459e [fe80::ecee:eeff:feee:eeee%10]:123 Sep 4 17:13:06.008895 containerd[2016]: time="2024-09-04T17:13:06.008538194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:06.010719 containerd[2016]: time="2024-09-04T17:13:06.010645646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Sep 4 17:13:06.012871 containerd[2016]: time="2024-09-04T17:13:06.012781862Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:06.017335 containerd[2016]: time="2024-09-04T17:13:06.017252918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:06.018880 containerd[2016]: time="2024-09-04T17:13:06.018684698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 2.930455406s" Sep 4 17:13:06.018880 containerd[2016]: time="2024-09-04T17:13:06.018742190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Sep 4 17:13:06.025269 containerd[2016]: time="2024-09-04T17:13:06.025192910Z" level=info msg="CreateContainer within sandbox \"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 4 17:13:06.060268 containerd[2016]: time="2024-09-04T17:13:06.059896779Z" level=info msg="CreateContainer within sandbox \"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fffc5bac7a359ea17dea7f88071efccda220fce0411fbaf20a29997f37848221\"" Sep 4 17:13:06.062934 containerd[2016]: time="2024-09-04T17:13:06.061086039Z" level=info msg="StartContainer for \"fffc5bac7a359ea17dea7f88071efccda220fce0411fbaf20a29997f37848221\"" Sep 4 17:13:06.126125 systemd[1]: Started cri-containerd-fffc5bac7a359ea17dea7f88071efccda220fce0411fbaf20a29997f37848221.scope - libcontainer container fffc5bac7a359ea17dea7f88071efccda220fce0411fbaf20a29997f37848221. Sep 4 17:13:06.182137 containerd[2016]: time="2024-09-04T17:13:06.181919391Z" level=info msg="StartContainer for \"fffc5bac7a359ea17dea7f88071efccda220fce0411fbaf20a29997f37848221\" returns successfully" Sep 4 17:13:06.187170 containerd[2016]: time="2024-09-04T17:13:06.187110843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Sep 4 17:13:06.925425 systemd[1]: Started sshd@10-172.31.31.13:22-139.178.89.65:40736.service - OpenSSH per-connection server daemon (139.178.89.65:40736). 
Sep 4 17:13:07.115054 sshd[5626]: Accepted publickey for core from 139.178.89.65 port 40736 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:07.118409 sshd[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:07.128255 systemd-logind[1992]: New session 11 of user core. Sep 4 17:13:07.136117 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 4 17:13:07.453485 sshd[5626]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:07.464761 systemd[1]: sshd@10-172.31.31.13:22-139.178.89.65:40736.service: Deactivated successfully. Sep 4 17:13:07.472498 systemd[1]: session-11.scope: Deactivated successfully. Sep 4 17:13:07.478562 systemd-logind[1992]: Session 11 logged out. Waiting for processes to exit. Sep 4 17:13:07.500704 systemd[1]: Started sshd@11-172.31.31.13:22-139.178.89.65:40746.service - OpenSSH per-connection server daemon (139.178.89.65:40746). Sep 4 17:13:07.503656 systemd-logind[1992]: Removed session 11. Sep 4 17:13:07.720412 sshd[5640]: Accepted publickey for core from 139.178.89.65 port 40746 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:07.732509 sshd[5640]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:07.747604 systemd-logind[1992]: New session 12 of user core. Sep 4 17:13:07.754182 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 4 17:13:07.830659 kubelet[3249]: I0904 17:13:07.830208 3249 scope.go:117] "RemoveContainer" containerID="0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900" Sep 4 17:13:07.838600 containerd[2016]: time="2024-09-04T17:13:07.838054219Z" level=info msg="RemoveContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\"" Sep 4 17:13:07.852895 containerd[2016]: time="2024-09-04T17:13:07.851791387Z" level=info msg="RemoveContainer for \"0eeb162995644e20e6b833d5d5d3c8b67feb908cb4b961801398836e3bb1c900\" returns successfully" Sep 4 17:13:07.861291 containerd[2016]: time="2024-09-04T17:13:07.860806472Z" level=info msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\"" Sep 4 17:13:08.388237 sshd[5640]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:08.400617 systemd[1]: sshd@11-172.31.31.13:22-139.178.89.65:40746.service: Deactivated successfully. Sep 4 17:13:08.407808 systemd[1]: session-12.scope: Deactivated successfully. Sep 4 17:13:08.416967 systemd-logind[1992]: Session 12 logged out. Waiting for processes to exit. Sep 4 17:13:08.463388 systemd[1]: Started sshd@12-172.31.31.13:22-139.178.89.65:49066.service - OpenSSH per-connection server daemon (139.178.89.65:49066). Sep 4 17:13:08.468208 systemd-logind[1992]: Removed session 12. Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.178 [WARNING][5666] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c214dc-77a9-494e-bbbc-1b760a49564b", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a", Pod:"csi-node-driver-s6dqv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibf3dfd6459e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.182 [INFO][5666] k8s.go 608: Cleaning up netns ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.182 [INFO][5666] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" iface="eth0" netns="" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.182 [INFO][5666] k8s.go 615: Releasing IP address(es) ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.182 [INFO][5666] utils.go 188: Calico CNI releasing IP address ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.366 [INFO][5672] ipam_plugin.go 417: Releasing address using handleID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.366 [INFO][5672] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.366 [INFO][5672] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.440 [WARNING][5672] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.440 [INFO][5672] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.474 [INFO][5672] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:08.518182 containerd[2016]: 2024-09-04 17:13:08.498 [INFO][5666] k8s.go 621: Teardown processing complete. ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.526236 containerd[2016]: time="2024-09-04T17:13:08.520001203Z" level=info msg="TearDown network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" successfully" Sep 4 17:13:08.526236 containerd[2016]: time="2024-09-04T17:13:08.520051015Z" level=info msg="StopPodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" returns successfully" Sep 4 17:13:08.526236 containerd[2016]: time="2024-09-04T17:13:08.522036199Z" level=info msg="RemovePodSandbox for \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\"" Sep 4 17:13:08.526236 containerd[2016]: time="2024-09-04T17:13:08.522090367Z" level=info msg="Forcibly stopping sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\"" Sep 4 17:13:08.550206 containerd[2016]: time="2024-09-04T17:13:08.548531131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Sep 4 17:13:08.550206 containerd[2016]: 
time="2024-09-04T17:13:08.550109203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:08.563230 containerd[2016]: time="2024-09-04T17:13:08.562264831Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:08.589447 containerd[2016]: time="2024-09-04T17:13:08.588958423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 4 17:13:08.597758 containerd[2016]: time="2024-09-04T17:13:08.597188143Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.410010784s" Sep 4 17:13:08.597758 containerd[2016]: time="2024-09-04T17:13:08.597677827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Sep 4 17:13:08.623092 containerd[2016]: time="2024-09-04T17:13:08.620607919Z" level=info msg="CreateContainer within sandbox \"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 4 17:13:08.723388 containerd[2016]: time="2024-09-04T17:13:08.723197948Z" level=info msg="CreateContainer within sandbox \"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4e2283b80984dae308aa77f3ac0fae06954ff3dbaa45b47e9fc02d3a67804316\"" Sep 4 17:13:08.727335 containerd[2016]: time="2024-09-04T17:13:08.727067240Z" level=info msg="StartContainer for \"4e2283b80984dae308aa77f3ac0fae06954ff3dbaa45b47e9fc02d3a67804316\"" Sep 4 17:13:08.751867 sshd[5683]: Accepted publickey for core from 139.178.89.65 port 49066 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:08.760697 sshd[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:08.784047 systemd-logind[1992]: New session 13 of user core. Sep 4 17:13:08.793658 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 4 17:13:08.877187 systemd[1]: Started cri-containerd-4e2283b80984dae308aa77f3ac0fae06954ff3dbaa45b47e9fc02d3a67804316.scope - libcontainer container 4e2283b80984dae308aa77f3ac0fae06954ff3dbaa45b47e9fc02d3a67804316. Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.810 [WARNING][5698] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30c214dc-77a9-494e-bbbc-1b760a49564b", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"30790d9c21ef2fece78d6f60ab31cb03f844e0b8bca312b83c735c7bdfa1385a", Pod:"csi-node-driver-s6dqv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.72.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calibf3dfd6459e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.813 [INFO][5698] k8s.go 608: Cleaning up netns ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.813 [INFO][5698] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" iface="eth0" netns="" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.813 [INFO][5698] k8s.go 615: Releasing IP address(es) ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.813 [INFO][5698] utils.go 188: Calico CNI releasing IP address ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.919 [INFO][5718] ipam_plugin.go 417: Releasing address using handleID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.919 [INFO][5718] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.920 [INFO][5718] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.951 [WARNING][5718] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.951 [INFO][5718] ipam_plugin.go 445: Releasing address using workloadID ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" HandleID="k8s-pod-network.d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Workload="ip--172--31--31--13-k8s-csi--node--driver--s6dqv-eth0" Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.959 [INFO][5718] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:08.975948 containerd[2016]: 2024-09-04 17:13:08.964 [INFO][5698] k8s.go 621: Teardown processing complete. ContainerID="d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7" Sep 4 17:13:08.979212 containerd[2016]: time="2024-09-04T17:13:08.977511381Z" level=info msg="TearDown network for sandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" successfully" Sep 4 17:13:08.988388 containerd[2016]: time="2024-09-04T17:13:08.987170661Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:08.988388 containerd[2016]: time="2024-09-04T17:13:08.987291417Z" level=info msg="RemovePodSandbox \"d6eea1c1c404428cb87c248052e49ee6b3102535be4063257fe38c42a47477a7\" returns successfully" Sep 4 17:13:08.988388 containerd[2016]: time="2024-09-04T17:13:08.988061637Z" level=info msg="StopPodSandbox for \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\"" Sep 4 17:13:08.988388 containerd[2016]: time="2024-09-04T17:13:08.988203597Z" level=info msg="TearDown network for sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" successfully" Sep 4 17:13:08.988388 containerd[2016]: time="2024-09-04T17:13:08.988267173Z" level=info msg="StopPodSandbox for \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" returns successfully" Sep 4 17:13:08.989861 containerd[2016]: time="2024-09-04T17:13:08.989262669Z" level=info msg="RemovePodSandbox for \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\"" Sep 4 17:13:08.989861 containerd[2016]: time="2024-09-04T17:13:08.989321949Z" level=info msg="Forcibly stopping sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\"" Sep 4 17:13:08.989861 containerd[2016]: time="2024-09-04T17:13:08.989477181Z" level=info msg="TearDown network for sandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" successfully" Sep 4 17:13:08.999204 containerd[2016]: time="2024-09-04T17:13:08.998600205Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:08.999204 containerd[2016]: time="2024-09-04T17:13:08.998754573Z" level=info msg="RemovePodSandbox \"b4ce2e1f3f0be200f89a8e44a11bdd123193156e98342db5bcbf272ecdd59e09\" returns successfully" Sep 4 17:13:08.999796 containerd[2016]: time="2024-09-04T17:13:08.999731901Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\"" Sep 4 17:13:09.000106 containerd[2016]: time="2024-09-04T17:13:08.999911805Z" level=info msg="TearDown network for sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" successfully" Sep 4 17:13:09.000106 containerd[2016]: time="2024-09-04T17:13:08.999977277Z" level=info msg="StopPodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" returns successfully" Sep 4 17:13:09.001541 containerd[2016]: time="2024-09-04T17:13:09.000891989Z" level=info msg="RemovePodSandbox for \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\"" Sep 4 17:13:09.001541 containerd[2016]: time="2024-09-04T17:13:09.000951497Z" level=info msg="Forcibly stopping sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\"" Sep 4 17:13:09.001541 containerd[2016]: time="2024-09-04T17:13:09.001095173Z" level=info msg="TearDown network for sandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" successfully" Sep 4 17:13:09.010628 containerd[2016]: time="2024-09-04T17:13:09.010305917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:09.010628 containerd[2016]: time="2024-09-04T17:13:09.010465961Z" level=info msg="RemovePodSandbox \"fe8f4371102e36878a8fbec325447c12e6759f99555eb0c092b14628f8d270e5\" returns successfully" Sep 4 17:13:09.014241 containerd[2016]: time="2024-09-04T17:13:09.012601121Z" level=info msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\"" Sep 4 17:13:09.115861 containerd[2016]: time="2024-09-04T17:13:09.111318222Z" level=info msg="StartContainer for \"4e2283b80984dae308aa77f3ac0fae06954ff3dbaa45b47e9fc02d3a67804316\" returns successfully" Sep 4 17:13:09.288154 sshd[5683]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:09.309497 systemd[1]: sshd@12-172.31.31.13:22-139.178.89.65:49066.service: Deactivated successfully. Sep 4 17:13:09.318627 systemd[1]: session-13.scope: Deactivated successfully. Sep 4 17:13:09.328248 systemd-logind[1992]: Session 13 logged out. Waiting for processes to exit. Sep 4 17:13:09.335958 systemd-logind[1992]: Removed session 13. Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.205 [WARNING][5760] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0", GenerateName:"calico-kube-controllers-5d57759979-", Namespace:"calico-system", SelfLink:"", UID:"d2594d7c-f8ef-43c5-8979-10060070c099", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57759979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e", Pod:"calico-kube-controllers-5d57759979-d9ffj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2c1ec2c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.206 [INFO][5760] k8s.go 608: Cleaning up netns ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.207 [INFO][5760] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" iface="eth0" netns="" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.208 [INFO][5760] k8s.go 615: Releasing IP address(es) ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.208 [INFO][5760] utils.go 188: Calico CNI releasing IP address ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.320 [INFO][5770] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.321 [INFO][5770] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.321 [INFO][5770] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.368 [WARNING][5770] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.369 [INFO][5770] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.391 [INFO][5770] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:09.401870 containerd[2016]: 2024-09-04 17:13:09.397 [INFO][5760] k8s.go 621: Teardown processing complete. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.401870 containerd[2016]: time="2024-09-04T17:13:09.400238203Z" level=info msg="TearDown network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" successfully" Sep 4 17:13:09.401870 containerd[2016]: time="2024-09-04T17:13:09.400275991Z" level=info msg="StopPodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" returns successfully" Sep 4 17:13:09.401870 containerd[2016]: time="2024-09-04T17:13:09.401346115Z" level=info msg="RemovePodSandbox for \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\"" Sep 4 17:13:09.401870 containerd[2016]: time="2024-09-04T17:13:09.401399455Z" level=info msg="Forcibly stopping sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\"" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.505 [WARNING][5794] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0", GenerateName:"calico-kube-controllers-5d57759979-", Namespace:"calico-system", SelfLink:"", UID:"d2594d7c-f8ef-43c5-8979-10060070c099", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d57759979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"6cd5e1a41af131a2b0b29498927f4a109c007e185c035634a9a763291f2cde1e", Pod:"calico-kube-controllers-5d57759979-d9ffj", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.72.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliab2c1ec2c06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.507 [INFO][5794] k8s.go 608: Cleaning up netns ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.508 [INFO][5794] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" iface="eth0" netns="" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.508 [INFO][5794] k8s.go 615: Releasing IP address(es) ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.508 [INFO][5794] utils.go 188: Calico CNI releasing IP address ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.575 [INFO][5803] ipam_plugin.go 417: Releasing address using handleID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.575 [INFO][5803] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.575 [INFO][5803] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.597 [WARNING][5803] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.597 [INFO][5803] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" HandleID="k8s-pod-network.1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Workload="ip--172--31--31--13-k8s-calico--kube--controllers--5d57759979--d9ffj-eth0" Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.604 [INFO][5803] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:09.614880 containerd[2016]: 2024-09-04 17:13:09.608 [INFO][5794] k8s.go 621: Teardown processing complete. ContainerID="1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea" Sep 4 17:13:09.614880 containerd[2016]: time="2024-09-04T17:13:09.613423244Z" level=info msg="TearDown network for sandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" successfully" Sep 4 17:13:09.621429 containerd[2016]: time="2024-09-04T17:13:09.621371336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:09.621689 containerd[2016]: time="2024-09-04T17:13:09.621656720Z" level=info msg="RemovePodSandbox \"1c8cd451d3c6cc0a131098adf908efc7f1eaecda9024afd165a6efd55b788fea\" returns successfully" Sep 4 17:13:09.622754 containerd[2016]: time="2024-09-04T17:13:09.622704104Z" level=info msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\"" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.705 [WARNING][5823] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a4ac919e-9835-42b3-bf95-eb1b1e4767ec", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129", Pod:"coredns-76f75df574-smljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9551586dae3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.707 [INFO][5823] k8s.go 608: Cleaning up netns ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.708 [INFO][5823] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" iface="eth0" netns="" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.708 [INFO][5823] k8s.go 615: Releasing IP address(es) ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.708 [INFO][5823] utils.go 188: Calico CNI releasing IP address ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.758 [INFO][5829] ipam_plugin.go 417: Releasing address using handleID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.758 [INFO][5829] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.759 [INFO][5829] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.773 [WARNING][5829] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.773 [INFO][5829] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.777 [INFO][5829] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:09.786920 containerd[2016]: 2024-09-04 17:13:09.781 [INFO][5823] k8s.go 621: Teardown processing complete. 
ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.789494 containerd[2016]: time="2024-09-04T17:13:09.787627605Z" level=info msg="TearDown network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" successfully" Sep 4 17:13:09.789494 containerd[2016]: time="2024-09-04T17:13:09.787889829Z" level=info msg="StopPodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" returns successfully" Sep 4 17:13:09.790659 containerd[2016]: time="2024-09-04T17:13:09.789927957Z" level=info msg="RemovePodSandbox for \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\"" Sep 4 17:13:09.790659 containerd[2016]: time="2024-09-04T17:13:09.790011597Z" level=info msg="Forcibly stopping sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\"" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.886 [WARNING][5846] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a4ac919e-9835-42b3-bf95-eb1b1e4767ec", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"54b83bb2b7237eb5c17157c77a3f2fd09c2c7e64176ed7749dacf8a2b459e129", Pod:"coredns-76f75df574-smljl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9551586dae3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.886 [INFO][5846] k8s.go 608: Cleaning up 
netns ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.887 [INFO][5846] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" iface="eth0" netns="" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.887 [INFO][5846] k8s.go 615: Releasing IP address(es) ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.887 [INFO][5846] utils.go 188: Calico CNI releasing IP address ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.944 [INFO][5852] ipam_plugin.go 417: Releasing address using handleID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.945 [INFO][5852] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.945 [INFO][5852] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.961 [WARNING][5852] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.961 [INFO][5852] ipam_plugin.go 445: Releasing address using workloadID ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" HandleID="k8s-pod-network.9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--smljl-eth0" Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.965 [INFO][5852] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:09.973050 containerd[2016]: 2024-09-04 17:13:09.969 [INFO][5846] k8s.go 621: Teardown processing complete. ContainerID="9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d" Sep 4 17:13:09.973050 containerd[2016]: time="2024-09-04T17:13:09.972436762Z" level=info msg="TearDown network for sandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" successfully" Sep 4 17:13:09.982146 containerd[2016]: time="2024-09-04T17:13:09.982037314Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 4 17:13:09.984067 containerd[2016]: time="2024-09-04T17:13:09.982145902Z" level=info msg="RemovePodSandbox \"9859a55e3290de2a5c181d94df285dc946c5742e418c16a8daf828c95f58135d\" returns successfully" Sep 4 17:13:09.984067 containerd[2016]: time="2024-09-04T17:13:09.982748782Z" level=info msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\"" Sep 4 17:13:10.052545 kubelet[3249]: I0904 17:13:10.052364 3249 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 4 17:13:10.052545 kubelet[3249]: I0904 17:13:10.052446 3249 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.088 [WARNING][5871] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fa829edc-8daa-47fd-bdeb-0a042a3e6b58", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"ip-172-31-31-13", ContainerID:"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e", Pod:"coredns-76f75df574-qsgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeef53caa2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.089 [INFO][5871] k8s.go 608: Cleaning up netns ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.090 [INFO][5871] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" iface="eth0" netns="" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.090 [INFO][5871] k8s.go 615: Releasing IP address(es) ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.091 [INFO][5871] utils.go 188: Calico CNI releasing IP address ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.150 [INFO][5878] ipam_plugin.go 417: Releasing address using handleID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.150 [INFO][5878] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.150 [INFO][5878] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.163 [WARNING][5878] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.164 [INFO][5878] ipam_plugin.go 445: Releasing address using workloadID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.167 [INFO][5878] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:10.175496 containerd[2016]: 2024-09-04 17:13:10.171 [INFO][5871] k8s.go 621: Teardown processing complete. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.175496 containerd[2016]: time="2024-09-04T17:13:10.175042843Z" level=info msg="TearDown network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" successfully" Sep 4 17:13:10.175496 containerd[2016]: time="2024-09-04T17:13:10.175108123Z" level=info msg="StopPodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" returns successfully" Sep 4 17:13:10.177890 containerd[2016]: time="2024-09-04T17:13:10.176910175Z" level=info msg="RemovePodSandbox for \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\"" Sep 4 17:13:10.177890 containerd[2016]: time="2024-09-04T17:13:10.177009763Z" level=info msg="Forcibly stopping sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\"" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.268 [WARNING][5896] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"fa829edc-8daa-47fd-bdeb-0a042a3e6b58", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 12, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"42e62ffa6f891fa58e648a882d4ec6e9581a979d7180ab3a3165d8874c4b5e5e", Pod:"coredns-76f75df574-qsgk9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.72.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibeef53caa2a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.268 [INFO][5896] k8s.go 608: Cleaning up 
netns ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.268 [INFO][5896] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" iface="eth0" netns="" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.269 [INFO][5896] k8s.go 615: Releasing IP address(es) ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.270 [INFO][5896] utils.go 188: Calico CNI releasing IP address ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.322 [INFO][5902] ipam_plugin.go 417: Releasing address using handleID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.322 [INFO][5902] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.322 [INFO][5902] ipam_plugin.go 373: Acquired host-wide IPAM lock. Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.340 [WARNING][5902] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.340 [INFO][5902] ipam_plugin.go 445: Releasing address using workloadID ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" HandleID="k8s-pod-network.383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Workload="ip--172--31--31--13-k8s-coredns--76f75df574--qsgk9-eth0" Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.347 [INFO][5902] ipam_plugin.go 379: Released host-wide IPAM lock. Sep 4 17:13:10.367143 containerd[2016]: 2024-09-04 17:13:10.358 [INFO][5896] k8s.go 621: Teardown processing complete. ContainerID="383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1" Sep 4 17:13:10.370999 containerd[2016]: time="2024-09-04T17:13:10.369454424Z" level=info msg="TearDown network for sandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" successfully" Sep 4 17:13:10.380430 containerd[2016]: time="2024-09-04T17:13:10.379611512Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 4 17:13:10.380430 containerd[2016]: time="2024-09-04T17:13:10.379745276Z" level=info msg="RemovePodSandbox \"383580298430cd7fb9b0e21cdf60e5704b0a6f15b2832d84b1a38279711d23a1\" returns successfully" Sep 4 17:13:14.185700 systemd[1]: run-containerd-runc-k8s.io-815ac84468736ae86f96425acb04a7ff08f34fab0561cbce211adf1cb2e976aa-runc.EBBfLo.mount: Deactivated successfully. 
Sep 4 17:13:14.326374 systemd[1]: Started sshd@13-172.31.31.13:22-139.178.89.65:49072.service - OpenSSH per-connection server daemon (139.178.89.65:49072). Sep 4 17:13:14.505482 sshd[5931]: Accepted publickey for core from 139.178.89.65 port 49072 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:14.508316 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:14.518683 systemd-logind[1992]: New session 14 of user core. Sep 4 17:13:14.523138 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 4 17:13:14.784633 sshd[5931]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:14.793521 systemd[1]: sshd@13-172.31.31.13:22-139.178.89.65:49072.service: Deactivated successfully. Sep 4 17:13:14.798094 systemd[1]: session-14.scope: Deactivated successfully. Sep 4 17:13:14.800745 systemd-logind[1992]: Session 14 logged out. Waiting for processes to exit. Sep 4 17:13:14.803074 systemd-logind[1992]: Removed session 14. Sep 4 17:13:17.013361 kubelet[3249]: I0904 17:13:17.013297 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-s6dqv" podStartSLOduration=39.941747098 podStartE2EDuration="47.013232041s" podCreationTimestamp="2024-09-04 17:12:30 +0000 UTC" firstStartedPulling="2024-09-04 17:13:01.52795524 +0000 UTC m=+54.019246229" lastFinishedPulling="2024-09-04 17:13:08.599440171 +0000 UTC m=+61.090731172" observedRunningTime="2024-09-04 17:13:09.411782095 +0000 UTC m=+61.903073084" watchObservedRunningTime="2024-09-04 17:13:17.013232041 +0000 UTC m=+69.504523042" Sep 4 17:13:19.825387 systemd[1]: Started sshd@14-172.31.31.13:22-139.178.89.65:48682.service - OpenSSH per-connection server daemon (139.178.89.65:48682). 
Sep 4 17:13:20.005543 sshd[5976]: Accepted publickey for core from 139.178.89.65 port 48682 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:20.008784 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:20.017196 systemd-logind[1992]: New session 15 of user core. Sep 4 17:13:20.025087 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 4 17:13:20.279024 sshd[5976]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:20.284175 systemd-logind[1992]: Session 15 logged out. Waiting for processes to exit. Sep 4 17:13:20.284891 systemd[1]: sshd@14-172.31.31.13:22-139.178.89.65:48682.service: Deactivated successfully. Sep 4 17:13:20.289002 systemd[1]: session-15.scope: Deactivated successfully. Sep 4 17:13:20.293337 systemd-logind[1992]: Removed session 15. Sep 4 17:13:20.318364 systemd[1]: Started sshd@15-172.31.31.13:22-139.178.89.65:48694.service - OpenSSH per-connection server daemon (139.178.89.65:48694). Sep 4 17:13:20.499384 sshd[5989]: Accepted publickey for core from 139.178.89.65 port 48694 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:20.502768 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:20.512467 systemd-logind[1992]: New session 16 of user core. Sep 4 17:13:20.519153 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 4 17:13:20.964512 sshd[5989]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:20.970472 systemd[1]: sshd@15-172.31.31.13:22-139.178.89.65:48694.service: Deactivated successfully. Sep 4 17:13:20.974740 systemd[1]: session-16.scope: Deactivated successfully. Sep 4 17:13:20.980076 systemd-logind[1992]: Session 16 logged out. Waiting for processes to exit. Sep 4 17:13:20.981901 systemd-logind[1992]: Removed session 16. 
Sep 4 17:13:21.007399 systemd[1]: Started sshd@16-172.31.31.13:22-139.178.89.65:48706.service - OpenSSH per-connection server daemon (139.178.89.65:48706). Sep 4 17:13:21.193551 sshd[6000]: Accepted publickey for core from 139.178.89.65 port 48706 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:21.196368 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:21.204535 systemd-logind[1992]: New session 17 of user core. Sep 4 17:13:21.215158 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 4 17:13:24.155578 sshd[6000]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:24.165644 systemd[1]: sshd@16-172.31.31.13:22-139.178.89.65:48706.service: Deactivated successfully. Sep 4 17:13:24.174027 systemd[1]: session-17.scope: Deactivated successfully. Sep 4 17:13:24.174712 systemd[1]: session-17.scope: Consumed 1.017s CPU time. Sep 4 17:13:24.178981 systemd-logind[1992]: Session 17 logged out. Waiting for processes to exit. Sep 4 17:13:24.214078 systemd[1]: Started sshd@17-172.31.31.13:22-139.178.89.65:48708.service - OpenSSH per-connection server daemon (139.178.89.65:48708). Sep 4 17:13:24.218189 systemd-logind[1992]: Removed session 17. Sep 4 17:13:24.402984 sshd[6021]: Accepted publickey for core from 139.178.89.65 port 48708 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:24.405741 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:24.419125 systemd-logind[1992]: New session 18 of user core. Sep 4 17:13:24.427085 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 4 17:13:24.985711 sshd[6021]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:24.992007 systemd[1]: sshd@17-172.31.31.13:22-139.178.89.65:48708.service: Deactivated successfully. Sep 4 17:13:24.995666 systemd[1]: session-18.scope: Deactivated successfully. 
Sep 4 17:13:25.000249 systemd-logind[1992]: Session 18 logged out. Waiting for processes to exit. Sep 4 17:13:25.003019 systemd-logind[1992]: Removed session 18. Sep 4 17:13:25.023509 systemd[1]: Started sshd@18-172.31.31.13:22-139.178.89.65:48724.service - OpenSSH per-connection server daemon (139.178.89.65:48724). Sep 4 17:13:25.206158 sshd[6034]: Accepted publickey for core from 139.178.89.65 port 48724 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU Sep 4 17:13:25.209117 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Sep 4 17:13:25.218008 systemd-logind[1992]: New session 19 of user core. Sep 4 17:13:25.226133 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 4 17:13:25.493209 sshd[6034]: pam_unix(sshd:session): session closed for user core Sep 4 17:13:25.499886 systemd[1]: session-19.scope: Deactivated successfully. Sep 4 17:13:25.502664 systemd[1]: sshd@18-172.31.31.13:22-139.178.89.65:48724.service: Deactivated successfully. Sep 4 17:13:25.508516 systemd-logind[1992]: Session 19 logged out. Waiting for processes to exit. Sep 4 17:13:25.510661 systemd-logind[1992]: Removed session 19. Sep 4 17:13:29.359057 kubelet[3249]: I0904 17:13:29.358971 3249 topology_manager.go:215] "Topology Admit Handler" podUID="e7feb468-01a6-4c80-b5a8-6b634303ed94" podNamespace="calico-apiserver" podName="calico-apiserver-655df99ff7-5nmgh" Sep 4 17:13:29.391243 systemd[1]: Created slice kubepods-besteffort-pode7feb468_01a6_4c80_b5a8_6b634303ed94.slice - libcontainer container kubepods-besteffort-pode7feb468_01a6_4c80_b5a8_6b634303ed94.slice. 
Sep 4 17:13:29.485065 kubelet[3249]: I0904 17:13:29.484749 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dkb8\" (UniqueName: \"kubernetes.io/projected/e7feb468-01a6-4c80-b5a8-6b634303ed94-kube-api-access-9dkb8\") pod \"calico-apiserver-655df99ff7-5nmgh\" (UID: \"e7feb468-01a6-4c80-b5a8-6b634303ed94\") " pod="calico-apiserver/calico-apiserver-655df99ff7-5nmgh" Sep 4 17:13:29.485065 kubelet[3249]: I0904 17:13:29.484873 3249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e7feb468-01a6-4c80-b5a8-6b634303ed94-calico-apiserver-certs\") pod \"calico-apiserver-655df99ff7-5nmgh\" (UID: \"e7feb468-01a6-4c80-b5a8-6b634303ed94\") " pod="calico-apiserver/calico-apiserver-655df99ff7-5nmgh" Sep 4 17:13:29.586504 kubelet[3249]: E0904 17:13:29.586441 3249 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Sep 4 17:13:29.586676 kubelet[3249]: E0904 17:13:29.586554 3249 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e7feb468-01a6-4c80-b5a8-6b634303ed94-calico-apiserver-certs podName:e7feb468-01a6-4c80-b5a8-6b634303ed94 nodeName:}" failed. No retries permitted until 2024-09-04 17:13:30.086525887 +0000 UTC m=+82.577816876 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e7feb468-01a6-4c80-b5a8-6b634303ed94-calico-apiserver-certs") pod "calico-apiserver-655df99ff7-5nmgh" (UID: "e7feb468-01a6-4c80-b5a8-6b634303ed94") : secret "calico-apiserver-certs" not found Sep 4 17:13:30.299212 containerd[2016]: time="2024-09-04T17:13:30.299090811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655df99ff7-5nmgh,Uid:e7feb468-01a6-4c80-b5a8-6b634303ed94,Namespace:calico-apiserver,Attempt:0,}" Sep 4 17:13:30.540353 systemd[1]: Started sshd@19-172.31.31.13:22-139.178.89.65:41194.service - OpenSSH per-connection server daemon (139.178.89.65:41194). Sep 4 17:13:30.622235 systemd-networkd[1931]: calia69ea2fbcf3: Link UP Sep 4 17:13:30.622646 systemd-networkd[1931]: calia69ea2fbcf3: Gained carrier Sep 4 17:13:30.639445 (udev-worker)[6081]: Network interface NamePolicy= disabled on kernel command line. Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.405 [INFO][6058] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0 calico-apiserver-655df99ff7- calico-apiserver e7feb468-01a6-4c80-b5a8-6b634303ed94 1156 0 2024-09-04 17:13:29 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:655df99ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-13 calico-apiserver-655df99ff7-5nmgh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia69ea2fbcf3 [] []}} ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-" Sep 4 17:13:30.671236 
containerd[2016]: 2024-09-04 17:13:30.407 [INFO][6058] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0" Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.478 [INFO][6070] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" HandleID="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Workload="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0" Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.514 [INFO][6070] ipam_plugin.go 270: Auto assigning IP ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" HandleID="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Workload="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317cd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-13", "pod":"calico-apiserver-655df99ff7-5nmgh", "timestamp":"2024-09-04 17:13:30.478621528 +0000 UTC"}, Hostname:"ip-172-31-31-13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.514 [INFO][6070] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.515 [INFO][6070] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.515 [INFO][6070] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-13'
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.520 [INFO][6070] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.544 [INFO][6070] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.571 [INFO][6070] ipam.go 489: Trying affinity for 192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.578 [INFO][6070] ipam.go 155: Attempting to load block cidr=192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.583 [INFO][6070] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.72.0/26 host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.583 [INFO][6070] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.72.0/26 handle="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.586 [INFO][6070] ipam.go 1685: Creating new handle: k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.595 [INFO][6070] ipam.go 1203: Writing block in order to claim IPs block=192.168.72.0/26 handle="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.610 [INFO][6070] ipam.go 1216: Successfully claimed IPs: [192.168.72.5/26] block=192.168.72.0/26 handle="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.610 [INFO][6070] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.72.5/26] handle="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" host="ip-172-31-31-13"
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.610 [INFO][6070] ipam_plugin.go 379: Released host-wide IPAM lock.
Sep 4 17:13:30.671236 containerd[2016]: 2024-09-04 17:13:30.610 [INFO][6070] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.72.5/26] IPv6=[] ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" HandleID="k8s-pod-network.ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Workload="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0"
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.614 [INFO][6058] k8s.go 386: Populated endpoint ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0", GenerateName:"calico-apiserver-655df99ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7feb468-01a6-4c80-b5a8-6b634303ed94", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655df99ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"", Pod:"calico-apiserver-655df99ff7-5nmgh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia69ea2fbcf3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.614 [INFO][6058] k8s.go 387: Calico CNI using IPs: [192.168.72.5/32] ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0"
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.614 [INFO][6058] dataplane_linux.go 68: Setting the host side veth name to calia69ea2fbcf3 ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0"
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.623 [INFO][6058] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0"
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.634 [INFO][6058] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0", GenerateName:"calico-apiserver-655df99ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e7feb468-01a6-4c80-b5a8-6b634303ed94", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2024, time.September, 4, 17, 13, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"655df99ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-13", ContainerID:"ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa", Pod:"calico-apiserver-655df99ff7-5nmgh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.72.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia69ea2fbcf3", MAC:"5a:6a:28:49:56:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Sep 4 17:13:30.681329 containerd[2016]: 2024-09-04 17:13:30.662 [INFO][6058] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa" Namespace="calico-apiserver" Pod="calico-apiserver-655df99ff7-5nmgh" WorkloadEndpoint="ip--172--31--31--13-k8s-calico--apiserver--655df99ff7--5nmgh-eth0"
Sep 4 17:13:30.758352 sshd[6079]: Accepted publickey for core from 139.178.89.65 port 41194 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:30.761586 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:30.765547 containerd[2016]: time="2024-09-04T17:13:30.763804985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 4 17:13:30.766361 containerd[2016]: time="2024-09-04T17:13:30.766107473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:13:30.766627 containerd[2016]: time="2024-09-04T17:13:30.766533293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 4 17:13:30.766627 containerd[2016]: time="2024-09-04T17:13:30.766576385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 4 17:13:30.782963 systemd-logind[1992]: New session 20 of user core.
Sep 4 17:13:30.803039 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 4 17:13:30.834370 systemd[1]: Started cri-containerd-ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa.scope - libcontainer container ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa.
Sep 4 17:13:30.919466 containerd[2016]: time="2024-09-04T17:13:30.917568150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-655df99ff7-5nmgh,Uid:e7feb468-01a6-4c80-b5a8-6b634303ed94,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa\""
Sep 4 17:13:30.925453 containerd[2016]: time="2024-09-04T17:13:30.925093410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\""
Sep 4 17:13:31.084012 sshd[6079]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:31.088961 systemd[1]: sshd@19-172.31.31.13:22-139.178.89.65:41194.service: Deactivated successfully.
Sep 4 17:13:31.093721 systemd[1]: session-20.scope: Deactivated successfully.
Sep 4 17:13:31.097516 systemd-logind[1992]: Session 20 logged out. Waiting for processes to exit.
Sep 4 17:13:31.100206 systemd-logind[1992]: Removed session 20.
Sep 4 17:13:31.845200 systemd-networkd[1931]: calia69ea2fbcf3: Gained IPv6LL
Sep 4 17:13:33.533892 containerd[2016]: time="2024-09-04T17:13:33.533579011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:33.535591 containerd[2016]: time="2024-09-04T17:13:33.535469959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884"
Sep 4 17:13:33.538111 containerd[2016]: time="2024-09-04T17:13:33.538062307Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:33.542876 containerd[2016]: time="2024-09-04T17:13:33.542748907Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 4 17:13:33.544655 containerd[2016]: time="2024-09-04T17:13:33.544584475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.619431701s"
Sep 4 17:13:33.545067 containerd[2016]: time="2024-09-04T17:13:33.544804615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\""
Sep 4 17:13:33.549776 containerd[2016]: time="2024-09-04T17:13:33.549603487Z" level=info msg="CreateContainer within sandbox \"ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 4 17:13:33.577362 containerd[2016]: time="2024-09-04T17:13:33.577291207Z" level=info msg="CreateContainer within sandbox \"ab9d8b5d9bf697c724e022c46ee48608d91f657e43617b49b8fd35c7d4feb3fa\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e39d0cb1b8c37d014416b6a6628d04831d510df1f61f0fe3213d57ab10552af2\""
Sep 4 17:13:33.579356 containerd[2016]: time="2024-09-04T17:13:33.579290539Z" level=info msg="StartContainer for \"e39d0cb1b8c37d014416b6a6628d04831d510df1f61f0fe3213d57ab10552af2\""
Sep 4 17:13:33.660194 systemd[1]: Started cri-containerd-e39d0cb1b8c37d014416b6a6628d04831d510df1f61f0fe3213d57ab10552af2.scope - libcontainer container e39d0cb1b8c37d014416b6a6628d04831d510df1f61f0fe3213d57ab10552af2.
Sep 4 17:13:33.759165 containerd[2016]: time="2024-09-04T17:13:33.759092432Z" level=info msg="StartContainer for \"e39d0cb1b8c37d014416b6a6628d04831d510df1f61f0fe3213d57ab10552af2\" returns successfully"
Sep 4 17:13:34.434443 ntpd[1987]: Listen normally on 14 calia69ea2fbcf3 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:13:34.435225 ntpd[1987]: 4 Sep 17:13:34 ntpd[1987]: Listen normally on 14 calia69ea2fbcf3 [fe80::ecee:eeff:feee:eeee%11]:123
Sep 4 17:13:36.130482 systemd[1]: Started sshd@20-172.31.31.13:22-139.178.89.65:41198.service - OpenSSH per-connection server daemon (139.178.89.65:41198).
Sep 4 17:13:36.332851 sshd[6224]: Accepted publickey for core from 139.178.89.65 port 41198 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:36.339508 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:36.351911 systemd-logind[1992]: New session 21 of user core.
Sep 4 17:13:36.362139 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 4 17:13:36.366588 kubelet[3249]: I0904 17:13:36.366516 3249 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-655df99ff7-5nmgh" podStartSLOduration=4.745290572 podStartE2EDuration="7.366409845s" podCreationTimestamp="2024-09-04 17:13:29 +0000 UTC" firstStartedPulling="2024-09-04 17:13:30.92436831 +0000 UTC m=+83.415659299" lastFinishedPulling="2024-09-04 17:13:33.545487595 +0000 UTC m=+86.036778572" observedRunningTime="2024-09-04 17:13:34.489511532 +0000 UTC m=+86.980802533" watchObservedRunningTime="2024-09-04 17:13:36.366409845 +0000 UTC m=+88.857700906"
Sep 4 17:13:36.696301 sshd[6224]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:36.704697 systemd[1]: sshd@20-172.31.31.13:22-139.178.89.65:41198.service: Deactivated successfully.
Sep 4 17:13:36.712515 systemd[1]: session-21.scope: Deactivated successfully.
Sep 4 17:13:36.716775 systemd-logind[1992]: Session 21 logged out. Waiting for processes to exit.
Sep 4 17:13:36.722483 systemd-logind[1992]: Removed session 21.
Sep 4 17:13:41.741392 systemd[1]: Started sshd@21-172.31.31.13:22-139.178.89.65:46414.service - OpenSSH per-connection server daemon (139.178.89.65:46414).
Sep 4 17:13:41.941898 sshd[6246]: Accepted publickey for core from 139.178.89.65 port 46414 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:41.945270 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:41.954715 systemd-logind[1992]: New session 22 of user core.
Sep 4 17:13:41.963161 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 4 17:13:42.264176 sshd[6246]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:42.269945 systemd-logind[1992]: Session 22 logged out. Waiting for processes to exit.
Sep 4 17:13:42.270824 systemd[1]: session-22.scope: Deactivated successfully.
Sep 4 17:13:42.272688 systemd[1]: sshd@21-172.31.31.13:22-139.178.89.65:46414.service: Deactivated successfully.
Sep 4 17:13:42.286244 systemd-logind[1992]: Removed session 22.
Sep 4 17:13:47.307339 systemd[1]: Started sshd@22-172.31.31.13:22-139.178.89.65:46420.service - OpenSSH per-connection server daemon (139.178.89.65:46420).
Sep 4 17:13:47.480590 sshd[6304]: Accepted publickey for core from 139.178.89.65 port 46420 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:47.483547 sshd[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:47.491463 systemd-logind[1992]: New session 23 of user core.
Sep 4 17:13:47.498089 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 4 17:13:47.741341 sshd[6304]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:47.747034 systemd-logind[1992]: Session 23 logged out. Waiting for processes to exit.
Sep 4 17:13:47.748622 systemd[1]: sshd@22-172.31.31.13:22-139.178.89.65:46420.service: Deactivated successfully.
Sep 4 17:13:47.753675 systemd[1]: session-23.scope: Deactivated successfully.
Sep 4 17:13:47.759187 systemd-logind[1992]: Removed session 23.
Sep 4 17:13:52.781340 systemd[1]: Started sshd@23-172.31.31.13:22-139.178.89.65:42378.service - OpenSSH per-connection server daemon (139.178.89.65:42378).
Sep 4 17:13:52.967724 sshd[6321]: Accepted publickey for core from 139.178.89.65 port 42378 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:52.970993 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:52.979935 systemd-logind[1992]: New session 24 of user core.
Sep 4 17:13:52.987068 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 4 17:13:53.226460 sshd[6321]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:53.232899 systemd[1]: sshd@23-172.31.31.13:22-139.178.89.65:42378.service: Deactivated successfully.
Sep 4 17:13:53.237555 systemd[1]: session-24.scope: Deactivated successfully.
Sep 4 17:13:53.240290 systemd-logind[1992]: Session 24 logged out. Waiting for processes to exit.
Sep 4 17:13:53.242958 systemd-logind[1992]: Removed session 24.
Sep 4 17:13:58.265307 systemd[1]: Started sshd@24-172.31.31.13:22-139.178.89.65:36756.service - OpenSSH per-connection server daemon (139.178.89.65:36756).
Sep 4 17:13:58.453463 sshd[6338]: Accepted publickey for core from 139.178.89.65 port 36756 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:13:58.456226 sshd[6338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:13:58.467427 systemd-logind[1992]: New session 25 of user core.
Sep 4 17:13:58.478129 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 4 17:13:58.741940 sshd[6338]: pam_unix(sshd:session): session closed for user core
Sep 4 17:13:58.748614 systemd[1]: sshd@24-172.31.31.13:22-139.178.89.65:36756.service: Deactivated successfully.
Sep 4 17:13:58.752395 systemd[1]: session-25.scope: Deactivated successfully.
Sep 4 17:13:58.756791 systemd-logind[1992]: Session 25 logged out. Waiting for processes to exit.
Sep 4 17:13:58.760358 systemd-logind[1992]: Removed session 25.
Sep 4 17:14:03.787430 systemd[1]: Started sshd@25-172.31.31.13:22-139.178.89.65:36766.service - OpenSSH per-connection server daemon (139.178.89.65:36766).
Sep 4 17:14:03.975867 sshd[6352]: Accepted publickey for core from 139.178.89.65 port 36766 ssh2: RSA SHA256:kUAc/AK3NORsNqodfN7sFAtyAL1l41RPtj57UtNEeKU
Sep 4 17:14:03.979201 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Sep 4 17:14:03.988811 systemd-logind[1992]: New session 26 of user core.
Sep 4 17:14:03.996196 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 4 17:14:04.242196 sshd[6352]: pam_unix(sshd:session): session closed for user core
Sep 4 17:14:04.249120 systemd[1]: sshd@25-172.31.31.13:22-139.178.89.65:36766.service: Deactivated successfully.
Sep 4 17:14:04.253094 systemd[1]: session-26.scope: Deactivated successfully.
Sep 4 17:14:04.257652 systemd-logind[1992]: Session 26 logged out. Waiting for processes to exit.
Sep 4 17:14:04.259633 systemd-logind[1992]: Removed session 26.
Sep 4 17:14:50.496029 kubelet[3249]: E0904 17:14:50.495942 3249 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-31-13)"
Sep 4 17:14:50.544160 systemd[1]: cri-containerd-19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7.scope: Deactivated successfully.
Sep 4 17:14:50.546657 systemd[1]: cri-containerd-19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7.scope: Consumed 5.020s CPU time, 21.8M memory peak, 0B memory swap peak.
Sep 4 17:14:50.587465 containerd[2016]: time="2024-09-04T17:14:50.587092558Z" level=info msg="shim disconnected" id=19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7 namespace=k8s.io
Sep 4 17:14:50.587465 containerd[2016]: time="2024-09-04T17:14:50.587178034Z" level=warning msg="cleaning up after shim disconnected" id=19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7 namespace=k8s.io
Sep 4 17:14:50.587465 containerd[2016]: time="2024-09-04T17:14:50.587198254Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:50.597900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7-rootfs.mount: Deactivated successfully.
Sep 4 17:14:50.680739 kubelet[3249]: I0904 17:14:50.680699 3249 scope.go:117] "RemoveContainer" containerID="19ac905f91deeccf70ed6559938ac20ca5a67bf41763e4c9edc6ee84c4e5f8f7"
Sep 4 17:14:50.686037 containerd[2016]: time="2024-09-04T17:14:50.685970482Z" level=info msg="CreateContainer within sandbox \"e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Sep 4 17:14:50.708463 containerd[2016]: time="2024-09-04T17:14:50.708112474Z" level=info msg="CreateContainer within sandbox \"e605da24b2efc0e186406c81178d26ab61bd0675261f4b2bc518cece9a39f864\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9c5edddac0ceea2b4e42a6c2ada803341a191dd3b38072a5b4ebdd2a7fadd774\""
Sep 4 17:14:50.709542 containerd[2016]: time="2024-09-04T17:14:50.709142794Z" level=info msg="StartContainer for \"9c5edddac0ceea2b4e42a6c2ada803341a191dd3b38072a5b4ebdd2a7fadd774\""
Sep 4 17:14:50.771413 systemd[1]: Started cri-containerd-9c5edddac0ceea2b4e42a6c2ada803341a191dd3b38072a5b4ebdd2a7fadd774.scope - libcontainer container 9c5edddac0ceea2b4e42a6c2ada803341a191dd3b38072a5b4ebdd2a7fadd774.
Sep 4 17:14:50.862721 containerd[2016]: time="2024-09-04T17:14:50.862627223Z" level=info msg="StartContainer for \"9c5edddac0ceea2b4e42a6c2ada803341a191dd3b38072a5b4ebdd2a7fadd774\" returns successfully"
Sep 4 17:14:50.944165 systemd[1]: cri-containerd-01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4.scope: Deactivated successfully.
Sep 4 17:14:50.946312 systemd[1]: cri-containerd-01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4.scope: Consumed 11.485s CPU time.
Sep 4 17:14:50.997049 containerd[2016]: time="2024-09-04T17:14:50.996817584Z" level=info msg="shim disconnected" id=01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4 namespace=k8s.io
Sep 4 17:14:50.997611 containerd[2016]: time="2024-09-04T17:14:50.997416732Z" level=warning msg="cleaning up after shim disconnected" id=01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4 namespace=k8s.io
Sep 4 17:14:50.997611 containerd[2016]: time="2024-09-04T17:14:50.997452384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:51.003867 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4-rootfs.mount: Deactivated successfully.
Sep 4 17:14:51.694039 kubelet[3249]: I0904 17:14:51.693983 3249 scope.go:117] "RemoveContainer" containerID="01f06a49658114e99bb0731c7d1499074cb59ea4e1d60ce88234f03d2b6875a4"
Sep 4 17:14:51.699332 containerd[2016]: time="2024-09-04T17:14:51.699267779Z" level=info msg="CreateContainer within sandbox \"0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Sep 4 17:14:51.732862 containerd[2016]: time="2024-09-04T17:14:51.730118759Z" level=info msg="CreateContainer within sandbox \"0a9233b94368f13b6daaddc7c1ed3f7f0ee12641f227d6a44a525d597fc7a75e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"047c695c197ed34802af603904f948a27f0366c0089e09731ad55a44284c7530\""
Sep 4 17:14:51.734854 containerd[2016]: time="2024-09-04T17:14:51.734569883Z" level=info msg="StartContainer for \"047c695c197ed34802af603904f948a27f0366c0089e09731ad55a44284c7530\""
Sep 4 17:14:51.743628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1110502762.mount: Deactivated successfully.
Sep 4 17:14:51.816126 systemd[1]: Started cri-containerd-047c695c197ed34802af603904f948a27f0366c0089e09731ad55a44284c7530.scope - libcontainer container 047c695c197ed34802af603904f948a27f0366c0089e09731ad55a44284c7530.
Sep 4 17:14:51.876750 containerd[2016]: time="2024-09-04T17:14:51.876276288Z" level=info msg="StartContainer for \"047c695c197ed34802af603904f948a27f0366c0089e09731ad55a44284c7530\" returns successfully"
Sep 4 17:14:55.918161 systemd[1]: cri-containerd-dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12.scope: Deactivated successfully.
Sep 4 17:14:55.918645 systemd[1]: cri-containerd-dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12.scope: Consumed 3.256s CPU time, 16.3M memory peak, 0B memory swap peak.
Sep 4 17:14:55.959937 containerd[2016]: time="2024-09-04T17:14:55.959558860Z" level=info msg="shim disconnected" id=dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12 namespace=k8s.io
Sep 4 17:14:55.959937 containerd[2016]: time="2024-09-04T17:14:55.959634628Z" level=warning msg="cleaning up after shim disconnected" id=dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12 namespace=k8s.io
Sep 4 17:14:55.959937 containerd[2016]: time="2024-09-04T17:14:55.959655052Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 4 17:14:55.963339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12-rootfs.mount: Deactivated successfully.
Sep 4 17:14:55.986075 containerd[2016]: time="2024-09-04T17:14:55.985963397Z" level=warning msg="cleanup warnings time=\"2024-09-04T17:14:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 4 17:14:56.724644 kubelet[3249]: I0904 17:14:56.724194 3249 scope.go:117] "RemoveContainer" containerID="dd21847f6a97e9b33b8b26b25ce1d071f04625a3fad22508d9450aaa51709e12"
Sep 4 17:14:56.728289 containerd[2016]: time="2024-09-04T17:14:56.728231764Z" level=info msg="CreateContainer within sandbox \"b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Sep 4 17:14:56.755398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1747549650.mount: Deactivated successfully.
Sep 4 17:14:56.756266 containerd[2016]: time="2024-09-04T17:14:56.756102220Z" level=info msg="CreateContainer within sandbox \"b4800feb032b361422053eb8fef3cb087b169f0fb8b80f00130e3273f5706d29\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"806d59126baee4615f2cfb11bd47899306b0f51f04efb588c2e7d9bcf25a6ffc\""
Sep 4 17:14:56.757580 containerd[2016]: time="2024-09-04T17:14:56.757232860Z" level=info msg="StartContainer for \"806d59126baee4615f2cfb11bd47899306b0f51f04efb588c2e7d9bcf25a6ffc\""
Sep 4 17:14:56.815208 systemd[1]: Started cri-containerd-806d59126baee4615f2cfb11bd47899306b0f51f04efb588c2e7d9bcf25a6ffc.scope - libcontainer container 806d59126baee4615f2cfb11bd47899306b0f51f04efb588c2e7d9bcf25a6ffc.
Sep 4 17:14:56.884391 containerd[2016]: time="2024-09-04T17:14:56.884316677Z" level=info msg="StartContainer for \"806d59126baee4615f2cfb11bd47899306b0f51f04efb588c2e7d9bcf25a6ffc\" returns successfully"
Sep 4 17:15:00.496889 kubelet[3249]: E0904 17:15:00.496498 3249 controller.go:195] "Failed to update lease" err="Put \"https://172.31.31.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-13?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"