Oct 8 19:35:10.218176 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 8 19:35:10.220359 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Oct 8 18:25:39 -00 2024
Oct 8 19:35:10.220429 kernel: KASLR disabled due to lack of seed
Oct 8 19:35:10.220447 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:35:10.220464 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Oct 8 19:35:10.220480 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:35:10.220498 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 8 19:35:10.220514 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 8 19:35:10.220530 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 8 19:35:10.220546 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 8 19:35:10.220566 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 8 19:35:10.220582 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 8 19:35:10.220597 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 8 19:35:10.220613 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 8 19:35:10.220632 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 8 19:35:10.220652 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 8 19:35:10.220669 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 8 19:35:10.220686 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 8 19:35:10.220702 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 8 19:35:10.220718 kernel: printk: bootconsole [uart0] enabled
Oct 8 19:35:10.220735 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:35:10.220751 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:35:10.220768 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Oct 8 19:35:10.220784 kernel: Zone ranges:
Oct 8 19:35:10.220801 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:35:10.220818 kernel: DMA32 empty
Oct 8 19:35:10.220838 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 8 19:35:10.220854 kernel: Movable zone start for each node
Oct 8 19:35:10.220871 kernel: Early memory node ranges
Oct 8 19:35:10.220887 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Oct 8 19:35:10.220903 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Oct 8 19:35:10.220919 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Oct 8 19:35:10.220936 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 8 19:35:10.220952 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 8 19:35:10.220968 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 8 19:35:10.220985 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 8 19:35:10.221002 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 8 19:35:10.221018 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:35:10.221039 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 8 19:35:10.221056 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:35:10.221080 kernel: psci: PSCIv1.0 detected in firmware.
Oct 8 19:35:10.221097 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:35:10.221126 kernel: psci: Trusted OS migration not required
Oct 8 19:35:10.221183 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:35:10.221203 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:35:10.221245 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:35:10.221268 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:35:10.221286 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:35:10.221303 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:35:10.221320 kernel: CPU features: detected: Spectre-v2
Oct 8 19:35:10.221338 kernel: CPU features: detected: Spectre-v3a
Oct 8 19:35:10.221355 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:35:10.221372 kernel: CPU features: detected: ARM erratum 1742098
Oct 8 19:35:10.221390 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 8 19:35:10.221414 kernel: alternatives: applying boot alternatives
Oct 8 19:35:10.221434 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:35:10.221453 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:35:10.221471 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:35:10.221488 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:35:10.221505 kernel: Fallback order for Node 0: 0
Oct 8 19:35:10.221523 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 8 19:35:10.221540 kernel: Policy zone: Normal
Oct 8 19:35:10.221558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:35:10.221575 kernel: software IO TLB: area num 2.
Oct 8 19:35:10.221593 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 8 19:35:10.221615 kernel: Memory: 3820152K/4030464K available (10304K kernel code, 2184K rwdata, 8092K rodata, 39360K init, 897K bss, 210312K reserved, 0K cma-reserved)
Oct 8 19:35:10.221633 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:35:10.221651 kernel: trace event string verifier disabled
Oct 8 19:35:10.221668 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:35:10.221686 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:35:10.221705 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:35:10.221723 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:35:10.221740 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:35:10.221758 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:35:10.221775 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:35:10.221792 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:35:10.221814 kernel: GICv3: 96 SPIs implemented
Oct 8 19:35:10.221832 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:35:10.221850 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:35:10.221869 kernel: GICv3: GICv3 features: 16 PPIs
Oct 8 19:35:10.221887 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 8 19:35:10.221905 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 8 19:35:10.221924 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:35:10.221944 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:35:10.221963 kernel: GICv3: using LPI property table @0x00000004000e0000
Oct 8 19:35:10.221982 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 8 19:35:10.222000 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Oct 8 19:35:10.222018 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:35:10.222041 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 8 19:35:10.222059 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 8 19:35:10.222077 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 8 19:35:10.222096 kernel: Console: colour dummy device 80x25
Oct 8 19:35:10.222115 kernel: printk: console [tty1] enabled
Oct 8 19:35:10.222133 kernel: ACPI: Core revision 20230628
Oct 8 19:35:10.222151 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 8 19:35:10.222170 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:35:10.222193 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 8 19:35:10.222211 kernel: landlock: Up and running.
Oct 8 19:35:10.225966 kernel: SELinux: Initializing.
Oct 8 19:35:10.225986 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:35:10.226005 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:35:10.226023 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:35:10.226041 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:35:10.226059 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:35:10.226077 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:35:10.226095 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 8 19:35:10.226112 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 8 19:35:10.226137 kernel: Remapping and enabling EFI services.
Oct 8 19:35:10.226154 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:35:10.226171 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:35:10.226189 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 8 19:35:10.226207 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Oct 8 19:35:10.226247 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 8 19:35:10.226270 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:35:10.226288 kernel: SMP: Total of 2 processors activated.
Oct 8 19:35:10.226306 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:35:10.226330 kernel: CPU features: detected: 32-bit EL1 Support
Oct 8 19:35:10.226348 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:35:10.226377 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:35:10.226399 kernel: alternatives: applying system-wide alternatives
Oct 8 19:35:10.226418 kernel: devtmpfs: initialized
Oct 8 19:35:10.226436 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:35:10.226454 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:35:10.226473 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:35:10.226509 kernel: SMBIOS 3.0.0 present.
Oct 8 19:35:10.226536 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 8 19:35:10.226556 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:35:10.226574 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:35:10.226593 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:35:10.226611 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:35:10.226630 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:35:10.226648 kernel: audit: type=2000 audit(0.287:1): state=initialized audit_enabled=0 res=1
Oct 8 19:35:10.226672 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:35:10.226691 kernel: cpuidle: using governor menu
Oct 8 19:35:10.226709 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:35:10.226728 kernel: ASID allocator initialised with 65536 entries
Oct 8 19:35:10.226747 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:35:10.226766 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:35:10.226785 kernel: Modules: 17504 pages in range for non-PLT usage
Oct 8 19:35:10.226803 kernel: Modules: 509024 pages in range for PLT usage
Oct 8 19:35:10.226822 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:35:10.226845 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:35:10.226864 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:35:10.226882 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:35:10.226901 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:35:10.226920 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:35:10.226939 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:35:10.226958 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:35:10.226976 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:35:10.226995 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:35:10.227018 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:35:10.227037 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:35:10.227055 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:35:10.227074 kernel: ACPI: Interpreter enabled
Oct 8 19:35:10.227092 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:35:10.227110 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:35:10.227129 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 8 19:35:10.227502 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:35:10.227725 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:35:10.227927 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:35:10.228129 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 8 19:35:10.229630 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 8 19:35:10.229669 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 8 19:35:10.229688 kernel: acpiphp: Slot [1] registered
Oct 8 19:35:10.229707 kernel: acpiphp: Slot [2] registered
Oct 8 19:35:10.229726 kernel: acpiphp: Slot [3] registered
Oct 8 19:35:10.229745 kernel: acpiphp: Slot [4] registered
Oct 8 19:35:10.229772 kernel: acpiphp: Slot [5] registered
Oct 8 19:35:10.229791 kernel: acpiphp: Slot [6] registered
Oct 8 19:35:10.229809 kernel: acpiphp: Slot [7] registered
Oct 8 19:35:10.229828 kernel: acpiphp: Slot [8] registered
Oct 8 19:35:10.229846 kernel: acpiphp: Slot [9] registered
Oct 8 19:35:10.229865 kernel: acpiphp: Slot [10] registered
Oct 8 19:35:10.229883 kernel: acpiphp: Slot [11] registered
Oct 8 19:35:10.229901 kernel: acpiphp: Slot [12] registered
Oct 8 19:35:10.229919 kernel: acpiphp: Slot [13] registered
Oct 8 19:35:10.229941 kernel: acpiphp: Slot [14] registered
Oct 8 19:35:10.229960 kernel: acpiphp: Slot [15] registered
Oct 8 19:35:10.229979 kernel: acpiphp: Slot [16] registered
Oct 8 19:35:10.229997 kernel: acpiphp: Slot [17] registered
Oct 8 19:35:10.230015 kernel: acpiphp: Slot [18] registered
Oct 8 19:35:10.230033 kernel: acpiphp: Slot [19] registered
Oct 8 19:35:10.230052 kernel: acpiphp: Slot [20] registered
Oct 8 19:35:10.230070 kernel: acpiphp: Slot [21] registered
Oct 8 19:35:10.230088 kernel: acpiphp: Slot [22] registered
Oct 8 19:35:10.230106 kernel: acpiphp: Slot [23] registered
Oct 8 19:35:10.230129 kernel: acpiphp: Slot [24] registered
Oct 8 19:35:10.230147 kernel: acpiphp: Slot [25] registered
Oct 8 19:35:10.230165 kernel: acpiphp: Slot [26] registered
Oct 8 19:35:10.230183 kernel: acpiphp: Slot [27] registered
Oct 8 19:35:10.230201 kernel: acpiphp: Slot [28] registered
Oct 8 19:35:10.230254 kernel: acpiphp: Slot [29] registered
Oct 8 19:35:10.234276 kernel: acpiphp: Slot [30] registered
Oct 8 19:35:10.234310 kernel: acpiphp: Slot [31] registered
Oct 8 19:35:10.234329 kernel: PCI host bridge to bus 0000:00
Oct 8 19:35:10.234629 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 8 19:35:10.234822 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:35:10.235011 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:35:10.235200 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 8 19:35:10.237031 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 8 19:35:10.237315 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 8 19:35:10.237549 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 8 19:35:10.237777 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 8 19:35:10.237995 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 8 19:35:10.238204 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:35:10.238472 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 8 19:35:10.238714 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 8 19:35:10.238927 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 8 19:35:10.239149 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 8 19:35:10.242594 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:35:10.242839 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 8 19:35:10.243050 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 8 19:35:10.243301 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 8 19:35:10.243516 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 8 19:35:10.243735 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 8 19:35:10.243939 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 8 19:35:10.244129 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:35:10.246583 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:35:10.246628 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:35:10.246648 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:35:10.246668 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:35:10.246687 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:35:10.246706 kernel: iommu: Default domain type: Translated
Oct 8 19:35:10.246736 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:35:10.246755 kernel: efivars: Registered efivars operations
Oct 8 19:35:10.246774 kernel: vgaarb: loaded
Oct 8 19:35:10.246792 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:35:10.246811 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:35:10.246830 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:35:10.246848 kernel: pnp: PnP ACPI init
Oct 8 19:35:10.247079 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 8 19:35:10.247111 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:35:10.247131 kernel: NET: Registered PF_INET protocol family
Oct 8 19:35:10.247149 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:35:10.247168 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:35:10.247187 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:35:10.247206 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:35:10.247252 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:35:10.247274 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:35:10.247294 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:35:10.247321 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:35:10.247340 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:35:10.247359 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:35:10.247379 kernel: kvm [1]: HYP mode not available
Oct 8 19:35:10.247398 kernel: Initialise system trusted keyrings
Oct 8 19:35:10.247418 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:35:10.247437 kernel: Key type asymmetric registered
Oct 8 19:35:10.247455 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:35:10.247474 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:35:10.247499 kernel: io scheduler mq-deadline registered
Oct 8 19:35:10.247518 kernel: io scheduler kyber registered
Oct 8 19:35:10.247538 kernel: io scheduler bfq registered
Oct 8 19:35:10.247778 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 8 19:35:10.247806 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:35:10.247826 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:35:10.247845 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Oct 8 19:35:10.247864 kernel: ACPI: button: Sleep Button [SLPB]
Oct 8 19:35:10.247883 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:35:10.247910 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 8 19:35:10.248119 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 8 19:35:10.248146 kernel: printk: console [ttyS0] disabled
Oct 8 19:35:10.248165 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 8 19:35:10.248183 kernel: printk: console [ttyS0] enabled
Oct 8 19:35:10.248202 kernel: printk: bootconsole [uart0] disabled
Oct 8 19:35:10.249327 kernel: thunder_xcv, ver 1.0
Oct 8 19:35:10.249386 kernel: thunder_bgx, ver 1.0
Oct 8 19:35:10.249405 kernel: nicpf, ver 1.0
Oct 8 19:35:10.249439 kernel: nicvf, ver 1.0
Oct 8 19:35:10.249783 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:35:10.249989 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:35:09 UTC (1728416109)
Oct 8 19:35:10.250015 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:35:10.250035 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 8 19:35:10.250054 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:35:10.250072 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:35:10.250091 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:35:10.250116 kernel: Segment Routing with IPv6
Oct 8 19:35:10.250134 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:35:10.250153 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:35:10.250171 kernel: Key type dns_resolver registered
Oct 8 19:35:10.250190 kernel: registered taskstats version 1
Oct 8 19:35:10.250208 kernel: Loading compiled-in X.509 certificates
Oct 8 19:35:10.250249 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e9e638352c282bfddf5aec6da700ad8191939d05'
Oct 8 19:35:10.250270 kernel: Key type .fscrypt registered
Oct 8 19:35:10.250289 kernel: Key type fscrypt-provisioning registered
Oct 8 19:35:10.250313 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:35:10.250332 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:35:10.250351 kernel: ima: No architecture policies found
Oct 8 19:35:10.250369 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:35:10.250388 kernel: clk: Disabling unused clocks
Oct 8 19:35:10.250406 kernel: Freeing unused kernel memory: 39360K
Oct 8 19:35:10.250424 kernel: Run /init as init process
Oct 8 19:35:10.250443 kernel: with arguments:
Oct 8 19:35:10.250461 kernel: /init
Oct 8 19:35:10.250501 kernel: with environment:
Oct 8 19:35:10.250524 kernel: HOME=/
Oct 8 19:35:10.250543 kernel: TERM=linux
Oct 8 19:35:10.250561 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:35:10.250585 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:35:10.250609 systemd[1]: Detected virtualization amazon.
Oct 8 19:35:10.250630 systemd[1]: Detected architecture arm64.
Oct 8 19:35:10.250650 systemd[1]: Running in initrd.
Oct 8 19:35:10.250676 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:35:10.250696 systemd[1]: Hostname set to .
Oct 8 19:35:10.250718 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:35:10.250739 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:35:10.250760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:35:10.250780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:35:10.250802 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:35:10.250823 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:35:10.250849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:35:10.250870 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:35:10.250893 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:35:10.250914 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:35:10.250934 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:35:10.250955 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:35:10.250980 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:35:10.251002 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:35:10.251022 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:35:10.251043 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:35:10.251064 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:35:10.251084 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:35:10.251105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:35:10.251125 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:35:10.251146 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:35:10.251172 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:35:10.251192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:35:10.251213 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:35:10.254350 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:35:10.254374 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:35:10.254395 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:35:10.254415 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:35:10.254436 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:35:10.254456 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:35:10.254506 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:35:10.254546 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:35:10.254569 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:35:10.254595 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:35:10.254678 systemd-journald[250]: Collecting audit messages is disabled.
Oct 8 19:35:10.254731 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:35:10.254753 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:35:10.254774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:35:10.254798 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:35:10.254819 systemd-journald[250]: Journal started
Oct 8 19:35:10.254856 systemd-journald[250]: Runtime Journal (/run/log/journal/ec2b5ff5b6d5872e620d94b30e61d58a) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:35:10.219328 systemd-modules-load[252]: Inserted module 'overlay'
Oct 8 19:35:10.262264 kernel: Bridge firewalling registered
Oct 8 19:35:10.262331 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:35:10.259101 systemd-modules-load[252]: Inserted module 'br_netfilter'
Oct 8 19:35:10.267599 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:35:10.280565 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:35:10.288510 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:35:10.306150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:35:10.319740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:35:10.324923 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:35:10.351077 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:35:10.362293 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:35:10.374538 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:35:10.379861 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:35:10.396519 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:35:10.415605 dracut-cmdline[285]: dracut-dracut-053
Oct 8 19:35:10.423903 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f7968382bc5b46f9b6104a9f012cfba991c8ea306771e716a099618547de81d3
Oct 8 19:35:10.471525 systemd-resolved[287]: Positive Trust Anchors:
Oct 8 19:35:10.471552 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:35:10.471619 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:35:10.571576 kernel: SCSI subsystem initialized
Oct 8 19:35:10.578336 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:35:10.591341 kernel: iscsi: registered transport (tcp)
Oct 8 19:35:10.613722 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:35:10.613802 kernel: QLogic iSCSI HBA Driver
Oct 8 19:35:10.712262 kernel: random: crng init done
Oct 8 19:35:10.712563 systemd-resolved[287]: Defaulting to hostname 'linux'.
Oct 8 19:35:10.716818 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:35:10.734814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:35:10.743664 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:35:10.752437 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:35:10.801657 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:35:10.801733 kernel: device-mapper: uevent: version 1.0.3 Oct 8 19:35:10.801762 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 8 19:35:10.869264 kernel: raid6: neonx8 gen() 6786 MB/s Oct 8 19:35:10.886250 kernel: raid6: neonx4 gen() 6568 MB/s Oct 8 19:35:10.903250 kernel: raid6: neonx2 gen() 5466 MB/s Oct 8 19:35:10.920252 kernel: raid6: neonx1 gen() 3952 MB/s Oct 8 19:35:10.937250 kernel: raid6: int64x8 gen() 3823 MB/s Oct 8 19:35:10.954251 kernel: raid6: int64x4 gen() 3729 MB/s Oct 8 19:35:10.971259 kernel: raid6: int64x2 gen() 3625 MB/s Oct 8 19:35:10.989025 kernel: raid6: int64x1 gen() 2770 MB/s Oct 8 19:35:10.989072 kernel: raid6: using algorithm neonx8 gen() 6786 MB/s Oct 8 19:35:11.006999 kernel: raid6: .... xor() 4809 MB/s, rmw enabled Oct 8 19:35:11.007049 kernel: raid6: using neon recovery algorithm Oct 8 19:35:11.015280 kernel: xor: measuring software checksum speed Oct 8 19:35:11.015396 kernel: 8regs : 9948 MB/sec Oct 8 19:35:11.017251 kernel: 32regs : 10968 MB/sec Oct 8 19:35:11.019262 kernel: arm64_neon : 8953 MB/sec Oct 8 19:35:11.019298 kernel: xor: using function: 32regs (10968 MB/sec) Oct 8 19:35:11.104284 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 8 19:35:11.127158 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 8 19:35:11.137975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 8 19:35:11.178019 systemd-udevd[469]: Using default interface naming scheme 'v255'. Oct 8 19:35:11.187391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 8 19:35:11.200820 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 8 19:35:11.243038 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation Oct 8 19:35:11.304349 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Oct 8 19:35:11.313538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 8 19:35:11.444742 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:35:11.457597 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 8 19:35:11.502807 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 8 19:35:11.510578 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:35:11.513576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:35:11.518186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 8 19:35:11.542632 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 8 19:35:11.583528 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:35:11.657759 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 8 19:35:11.657839 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Oct 8 19:35:11.667650 kernel: ena 0000:00:05.0: ENA device version: 0.10 Oct 8 19:35:11.668007 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Oct 8 19:35:11.669073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 8 19:35:11.669643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:35:11.676629 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:35:11.681534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 8 19:35:11.681834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:35:11.697425 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 8 19:35:11.722832 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Oct 8 19:35:11.726340 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:4c:e0:38:a0:cd Oct 8 19:35:11.733837 (udev-worker)[542]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:35:11.740410 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Oct 8 19:35:11.740455 kernel: nvme nvme0: pci function 0000:00:04.0 Oct 8 19:35:11.755347 kernel: nvme nvme0: 2/0/0 default/read/poll queues Oct 8 19:35:11.761552 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 8 19:35:11.761625 kernel: GPT:9289727 != 16777215 Oct 8 19:35:11.761651 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 8 19:35:11.761676 kernel: GPT:9289727 != 16777215 Oct 8 19:35:11.761701 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 8 19:35:11.761725 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:35:11.777364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 8 19:35:11.788595 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 8 19:35:11.838280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 8 19:35:11.881262 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (531) Oct 8 19:35:11.927263 kernel: BTRFS: device fsid ad786f33-c7c5-429e-95f9-4ea457bd3916 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (532) Oct 8 19:35:11.928026 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Oct 8 19:35:12.007942 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Oct 8 19:35:12.039570 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Oct 8 19:35:12.066594 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Oct 8 19:35:12.069677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Oct 8 19:35:12.090590 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 8 19:35:12.104156 disk-uuid[657]: Primary Header is updated. Oct 8 19:35:12.104156 disk-uuid[657]: Secondary Entries is updated. Oct 8 19:35:12.104156 disk-uuid[657]: Secondary Header is updated. Oct 8 19:35:12.120259 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:35:12.130259 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:35:13.138262 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Oct 8 19:35:13.140338 disk-uuid[658]: The operation has completed successfully. Oct 8 19:35:13.322687 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 8 19:35:13.323830 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 8 19:35:13.384600 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 8 19:35:13.401335 sh[917]: Success Oct 8 19:35:13.430256 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 8 19:35:13.539575 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 8 19:35:13.561422 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 8 19:35:13.574115 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Oct 8 19:35:13.590394 kernel: BTRFS info (device dm-0): first mount of filesystem ad786f33-c7c5-429e-95f9-4ea457bd3916 Oct 8 19:35:13.590513 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:35:13.594531 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 8 19:35:13.594612 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 8 19:35:13.594642 kernel: BTRFS info (device dm-0): using free space tree Oct 8 19:35:13.688261 kernel: BTRFS info (device dm-0): enabling ssd optimizations Oct 8 19:35:13.712082 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 8 19:35:13.715203 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 8 19:35:13.731579 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 8 19:35:13.740507 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 8 19:35:13.773389 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 19:35:13.773473 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:35:13.773506 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:35:13.781259 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:35:13.798947 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 8 19:35:13.802650 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 19:35:13.832004 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 8 19:35:13.842587 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 8 19:35:13.913958 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Oct 8 19:35:13.938718 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 8 19:35:14.000900 systemd-networkd[1109]: lo: Link UP Oct 8 19:35:14.000925 systemd-networkd[1109]: lo: Gained carrier Oct 8 19:35:14.004649 systemd-networkd[1109]: Enumeration completed Oct 8 19:35:14.007635 systemd-networkd[1109]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:35:14.007656 systemd-networkd[1109]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 8 19:35:14.013486 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 8 19:35:14.018975 systemd[1]: Reached target network.target - Network. Oct 8 19:35:14.019385 systemd-networkd[1109]: eth0: Link UP Oct 8 19:35:14.019392 systemd-networkd[1109]: eth0: Gained carrier Oct 8 19:35:14.019411 systemd-networkd[1109]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 8 19:35:14.060323 systemd-networkd[1109]: eth0: DHCPv4 address 172.31.27.200/20, gateway 172.31.16.1 acquired from 172.31.16.1 Oct 8 19:35:14.299083 ignition[1052]: Ignition 2.19.0 Oct 8 19:35:14.299114 ignition[1052]: Stage: fetch-offline Oct 8 19:35:14.299973 ignition[1052]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:14.300000 ignition[1052]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:14.301580 ignition[1052]: Ignition finished successfully Oct 8 19:35:14.308553 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:35:14.320509 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Oct 8 19:35:14.345644 ignition[1118]: Ignition 2.19.0 Oct 8 19:35:14.345673 ignition[1118]: Stage: fetch Oct 8 19:35:14.347329 ignition[1118]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:14.347354 ignition[1118]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:14.348476 ignition[1118]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:14.378596 ignition[1118]: PUT result: OK Oct 8 19:35:14.381543 ignition[1118]: parsed url from cmdline: "" Oct 8 19:35:14.381559 ignition[1118]: no config URL provided Oct 8 19:35:14.381574 ignition[1118]: reading system config file "/usr/lib/ignition/user.ign" Oct 8 19:35:14.381600 ignition[1118]: no config at "/usr/lib/ignition/user.ign" Oct 8 19:35:14.381630 ignition[1118]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:14.383270 ignition[1118]: PUT result: OK Oct 8 19:35:14.383341 ignition[1118]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Oct 8 19:35:14.386788 ignition[1118]: GET result: OK Oct 8 19:35:14.386966 ignition[1118]: parsing config with SHA512: c9e1f04572560e3da63eec87d3a4e66b68e3c32187f0e6774a90ed27a26cc5f0f1a2d8140fded48ed27189b5cf579a470002a7db276d76108cdcfbdf6037c777 Oct 8 19:35:14.400621 unknown[1118]: fetched base config from "system" Oct 8 19:35:14.401407 ignition[1118]: fetch: fetch complete Oct 8 19:35:14.400653 unknown[1118]: fetched base config from "system" Oct 8 19:35:14.401419 ignition[1118]: fetch: fetch passed Oct 8 19:35:14.400668 unknown[1118]: fetched user config from "aws" Oct 8 19:35:14.401497 ignition[1118]: Ignition finished successfully Oct 8 19:35:14.413422 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 8 19:35:14.428740 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 8 19:35:14.457630 ignition[1124]: Ignition 2.19.0 Oct 8 19:35:14.457659 ignition[1124]: Stage: kargs Oct 8 19:35:14.459426 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:14.459801 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:14.460606 ignition[1124]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:14.466415 ignition[1124]: PUT result: OK Oct 8 19:35:14.481537 ignition[1124]: kargs: kargs passed Oct 8 19:35:14.481681 ignition[1124]: Ignition finished successfully Oct 8 19:35:14.487292 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 8 19:35:14.497553 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 8 19:35:14.532082 ignition[1130]: Ignition 2.19.0 Oct 8 19:35:14.532112 ignition[1130]: Stage: disks Oct 8 19:35:14.535486 ignition[1130]: no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:14.535565 ignition[1130]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:14.537198 ignition[1130]: PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:14.541482 ignition[1130]: PUT result: OK Oct 8 19:35:14.546354 ignition[1130]: disks: disks passed Oct 8 19:35:14.547914 ignition[1130]: Ignition finished successfully Oct 8 19:35:14.552411 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 8 19:35:14.555756 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 8 19:35:14.560592 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 8 19:35:14.562971 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 8 19:35:14.566712 systemd[1]: Reached target sysinit.target - System Initialization. Oct 8 19:35:14.570611 systemd[1]: Reached target basic.target - Basic System. Oct 8 19:35:14.589491 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Oct 8 19:35:14.631155 systemd-fsck[1138]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 8 19:35:14.641420 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 8 19:35:14.654450 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 8 19:35:14.745268 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 833c86f3-93dd-4526-bb43-c7809dac8e51 r/w with ordered data mode. Quota mode: none. Oct 8 19:35:14.747588 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 8 19:35:14.750812 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 8 19:35:14.784385 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 8 19:35:14.790486 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 8 19:35:14.794378 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 8 19:35:14.794496 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 8 19:35:14.797726 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:35:14.812280 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1157) Oct 8 19:35:14.817084 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 19:35:14.817148 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:35:14.817175 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:35:14.822292 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:35:14.826067 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:35:14.830452 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 8 19:35:14.839909 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Oct 8 19:35:15.269063 initrd-setup-root[1181]: cut: /sysroot/etc/passwd: No such file or directory Oct 8 19:35:15.278916 initrd-setup-root[1188]: cut: /sysroot/etc/group: No such file or directory Oct 8 19:35:15.288503 initrd-setup-root[1195]: cut: /sysroot/etc/shadow: No such file or directory Oct 8 19:35:15.297203 initrd-setup-root[1202]: cut: /sysroot/etc/gshadow: No such file or directory Oct 8 19:35:15.380360 systemd-networkd[1109]: eth0: Gained IPv6LL Oct 8 19:35:15.618511 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 8 19:35:15.633900 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 8 19:35:15.640506 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 8 19:35:15.657992 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 8 19:35:15.665267 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 19:35:15.696446 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 8 19:35:15.713811 ignition[1270]: INFO : Ignition 2.19.0 Oct 8 19:35:15.715667 ignition[1270]: INFO : Stage: mount Oct 8 19:35:15.717645 ignition[1270]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:15.717645 ignition[1270]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:15.721903 ignition[1270]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:15.725766 ignition[1270]: INFO : PUT result: OK Oct 8 19:35:15.732131 ignition[1270]: INFO : mount: mount passed Oct 8 19:35:15.734031 ignition[1270]: INFO : Ignition finished successfully Oct 8 19:35:15.740297 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 8 19:35:15.748641 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 8 19:35:15.784567 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 8 19:35:15.801243 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1281) Oct 8 19:35:15.805028 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem cbd8a2bc-d0a3-4040-91fa-086f2a330687 Oct 8 19:35:15.805077 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Oct 8 19:35:15.806245 kernel: BTRFS info (device nvme0n1p6): using free space tree Oct 8 19:35:15.812244 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Oct 8 19:35:15.813844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 8 19:35:15.863268 ignition[1298]: INFO : Ignition 2.19.0 Oct 8 19:35:15.863268 ignition[1298]: INFO : Stage: files Oct 8 19:35:15.868534 ignition[1298]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:15.868534 ignition[1298]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:15.868534 ignition[1298]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:15.868534 ignition[1298]: INFO : PUT result: OK Oct 8 19:35:15.879503 ignition[1298]: DEBUG : files: compiled without relabeling support, skipping Oct 8 19:35:15.883496 ignition[1298]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 8 19:35:15.883496 ignition[1298]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 8 19:35:15.914084 ignition[1298]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 8 19:35:15.917036 ignition[1298]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 8 19:35:15.919935 unknown[1298]: wrote ssh authorized keys file for user: core Oct 8 19:35:15.922243 ignition[1298]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 8 19:35:15.937673 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 
19:35:15.937673 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Oct 8 19:35:16.030993 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 8 19:35:16.192905 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Oct 8 19:35:16.196676 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 8 19:35:16.200178 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 8 19:35:16.203456 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:35:16.206665 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 8 19:35:16.209796 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:35:16.213767 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 8 19:35:16.213767 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:35:16.213767 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 8 19:35:16.213767 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:35:16.226960 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 8 19:35:16.226960 
ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:35:16.226960 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:35:16.226960 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:35:16.226960 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Oct 8 19:35:16.666969 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 8 19:35:17.066729 ignition[1298]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Oct 8 19:35:17.066729 ignition[1298]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 8 19:35:17.073438 ignition[1298]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: createResultFile: createFiles: op(e): 
[started] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 8 19:35:17.078166 ignition[1298]: INFO : files: files passed Oct 8 19:35:17.078166 ignition[1298]: INFO : Ignition finished successfully Oct 8 19:35:17.083017 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 8 19:35:17.105456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 8 19:35:17.114588 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 8 19:35:17.126855 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 8 19:35:17.130875 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 8 19:35:17.149468 initrd-setup-root-after-ignition[1326]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:35:17.149468 initrd-setup-root-after-ignition[1326]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:35:17.155664 initrd-setup-root-after-ignition[1330]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 8 19:35:17.162066 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:35:17.164654 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 8 19:35:17.178553 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 8 19:35:17.235361 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 8 19:35:17.237731 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 8 19:35:17.243528 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 8 19:35:17.244591 systemd[1]: Reached target initrd.target - Initrd Default Target. 
Oct 8 19:35:17.245330 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 8 19:35:17.248490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 8 19:35:17.287340 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:35:17.298801 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 8 19:35:17.331868 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 8 19:35:17.335398 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 8 19:35:17.338434 systemd[1]: Stopped target timers.target - Timer Units. Oct 8 19:35:17.340359 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 8 19:35:17.340592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 8 19:35:17.343413 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 8 19:35:17.346242 systemd[1]: Stopped target basic.target - Basic System. Oct 8 19:35:17.350418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 8 19:35:17.359292 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 8 19:35:17.361543 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 8 19:35:17.364678 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 8 19:35:17.376981 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 8 19:35:17.379866 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 8 19:35:17.383517 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 8 19:35:17.389305 systemd[1]: Stopped target swap.target - Swaps. Oct 8 19:35:17.391241 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Oct 8 19:35:17.391472 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 8 19:35:17.394807 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 8 19:35:17.398327 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 8 19:35:17.401919 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 8 19:35:17.407276 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 8 19:35:17.413838 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 8 19:35:17.414074 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 8 19:35:17.421205 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 8 19:35:17.423340 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 8 19:35:17.423907 systemd[1]: ignition-files.service: Deactivated successfully. Oct 8 19:35:17.424108 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 8 19:35:17.445438 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 8 19:35:17.447303 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 8 19:35:17.447735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 8 19:35:17.472737 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Oct 8 19:35:17.480090 ignition[1350]: INFO : Ignition 2.19.0 Oct 8 19:35:17.480090 ignition[1350]: INFO : Stage: umount Oct 8 19:35:17.480090 ignition[1350]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 8 19:35:17.480090 ignition[1350]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Oct 8 19:35:17.480090 ignition[1350]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Oct 8 19:35:17.480090 ignition[1350]: INFO : PUT result: OK Oct 8 19:35:17.499830 ignition[1350]: INFO : umount: umount passed Oct 8 19:35:17.499830 ignition[1350]: INFO : Ignition finished successfully Oct 8 19:35:17.480349 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 8 19:35:17.482739 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 8 19:35:17.488051 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 8 19:35:17.488332 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 8 19:35:17.517525 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 8 19:35:17.517722 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 8 19:35:17.528849 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 8 19:35:17.531480 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 8 19:35:17.540128 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 8 19:35:17.544010 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 8 19:35:17.544110 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 8 19:35:17.549597 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 8 19:35:17.549708 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 8 19:35:17.560213 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 8 19:35:17.560365 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Oct 8 19:35:17.564172 systemd[1]: Stopped target network.target - Network. Oct 8 19:35:17.569336 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 8 19:35:17.569482 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 8 19:35:17.572087 systemd[1]: Stopped target paths.target - Path Units. Oct 8 19:35:17.573834 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 8 19:35:17.579513 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 8 19:35:17.586390 systemd[1]: Stopped target slices.target - Slice Units. Oct 8 19:35:17.588015 systemd[1]: Stopped target sockets.target - Socket Units. Oct 8 19:35:17.589804 systemd[1]: iscsid.socket: Deactivated successfully. Oct 8 19:35:17.589885 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 8 19:35:17.591828 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 8 19:35:17.591912 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 8 19:35:17.593786 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 8 19:35:17.593889 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 8 19:35:17.595713 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 8 19:35:17.595791 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 8 19:35:17.600346 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 8 19:35:17.612484 systemd-networkd[1109]: eth0: DHCPv6 lease lost Oct 8 19:35:17.624031 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 8 19:35:17.630880 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 8 19:35:17.631131 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 8 19:35:17.637801 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Oct 8 19:35:17.640389 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:35:17.645845 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:35:17.646575 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:35:17.654738 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:35:17.654843 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:35:17.658858 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:35:17.658951 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:35:17.673570 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:35:17.677167 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:35:17.677317 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:35:17.679974 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:35:17.680052 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:35:17.682421 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:35:17.682517 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:35:17.684926 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:35:17.685001 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:35:17.691566 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:35:17.731723 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:35:17.733393 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:35:17.739591 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:35:17.741541 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:35:17.745296 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:35:17.745435 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:35:17.752584 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:35:17.752682 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:35:17.754582 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:35:17.754684 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:35:17.754968 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:35:17.755049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:35:17.755239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:35:17.755331 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:35:17.789600 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:35:17.794997 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:35:17.795119 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:35:17.800412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:35:17.800511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:35:17.809338 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:35:17.811768 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:35:17.816542 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:35:17.835522 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:35:17.852557 systemd[1]: Switching root.
Oct 8 19:35:17.900520 systemd-journald[250]: Journal stopped
Oct 8 19:35:21.180412 systemd-journald[250]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:35:21.180546 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:35:21.180590 kernel: SELinux: policy capability open_perms=1
Oct 8 19:35:21.180630 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:35:21.180661 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:35:21.180692 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:35:21.180729 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:35:21.180760 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:35:21.180789 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:35:21.180819 kernel: audit: type=1403 audit(1728416119.408:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:35:21.180860 systemd[1]: Successfully loaded SELinux policy in 68.800ms.
Oct 8 19:35:21.180910 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.788ms.
Oct 8 19:35:21.180947 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:35:21.180979 systemd[1]: Detected virtualization amazon.
Oct 8 19:35:21.181011 systemd[1]: Detected architecture arm64.
Oct 8 19:35:21.181046 systemd[1]: Detected first boot.
Oct 8 19:35:21.181080 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:35:21.181111 zram_generator::config[1392]: No configuration found.
Oct 8 19:35:21.181156 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:35:21.181192 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:35:21.183304 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:35:21.183368 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:35:21.183403 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:35:21.183443 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:35:21.183478 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:35:21.183510 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:35:21.183547 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:35:21.183580 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:35:21.183612 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:35:21.183644 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:35:21.183675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:35:21.183712 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:35:21.183744 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:35:21.183779 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:35:21.183811 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:35:21.183845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:35:21.183877 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:35:21.183909 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:35:21.183941 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:35:21.183976 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:35:21.184011 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:35:21.184044 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:35:21.184076 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:35:21.184107 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:35:21.184137 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:35:21.184169 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:35:21.184198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:35:21.184252 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:35:21.184292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:35:21.184322 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:35:21.184354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:35:21.184384 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:35:21.184413 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:35:21.184443 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:35:21.184472 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:35:21.184504 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:35:21.184536 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:35:21.184570 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:35:21.184603 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:35:21.184634 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:35:21.184670 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:35:21.184703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:35:21.184735 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:35:21.184765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:35:21.184795 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:35:21.184828 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:35:21.184858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:35:21.184890 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:35:21.184919 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:35:21.184950 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:35:21.184982 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:35:21.185014 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:35:21.185044 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:35:21.185074 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:35:21.185110 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:35:21.185139 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:35:21.185168 kernel: fuse: init (API version 7.39)
Oct 8 19:35:21.185198 kernel: ACPI: bus type drm_connector registered
Oct 8 19:35:21.189098 kernel: loop: module loaded
Oct 8 19:35:21.189159 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:35:21.190207 systemd-journald[1474]: Collecting audit messages is disabled.
Oct 8 19:35:21.190373 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:35:21.190435 systemd-journald[1474]: Journal started
Oct 8 19:35:21.190493 systemd-journald[1474]: Runtime Journal (/run/log/journal/ec2b5ff5b6d5872e620d94b30e61d58a) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:35:20.631054 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:35:20.684641 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 8 19:35:20.685447 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:35:21.221258 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:35:21.223265 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:35:21.226310 systemd[1]: Stopped verity-setup.service.
Oct 8 19:35:21.232275 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:35:21.238462 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:35:21.241003 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:35:21.243622 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:35:21.246691 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:35:21.258987 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:35:21.261728 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:35:21.275156 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:35:21.278109 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:35:21.281455 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:35:21.281933 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:35:21.284987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:35:21.285802 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:35:21.288994 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:35:21.289707 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:35:21.293101 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:35:21.293695 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:35:21.297185 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:35:21.297667 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:35:21.300674 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:35:21.301140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:35:21.304199 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:35:21.307184 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:35:21.310944 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:35:21.344480 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:35:21.353543 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:35:21.367175 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:35:21.370526 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:35:21.370586 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:35:21.378494 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:35:21.389105 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:35:21.405693 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:35:21.407918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:35:21.412631 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:35:21.418630 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:35:21.422514 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:35:21.425686 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:35:21.428520 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:35:21.439759 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:35:21.452397 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:35:21.463646 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:35:21.472406 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:35:21.475185 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:35:21.477636 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:35:21.480834 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:35:21.495562 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:35:21.504390 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:35:21.518470 systemd-journald[1474]: Time spent on flushing to /var/log/journal/ec2b5ff5b6d5872e620d94b30e61d58a is 123.630ms for 909 entries.
Oct 8 19:35:21.518470 systemd-journald[1474]: System Journal (/var/log/journal/ec2b5ff5b6d5872e620d94b30e61d58a) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:35:21.660011 systemd-journald[1474]: Received client request to flush runtime journal.
Oct 8 19:35:21.663896 kernel: loop0: detected capacity change from 0 to 194512
Oct 8 19:35:21.664011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:35:21.519551 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:35:21.545709 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:35:21.649724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:35:21.656091 udevadm[1529]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 8 19:35:21.673345 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:35:21.681437 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:35:21.683361 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:35:21.702397 kernel: loop1: detected capacity change from 0 to 114328
Oct 8 19:35:21.706646 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:35:21.720483 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:35:21.768348 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Oct 8 19:35:21.768388 systemd-tmpfiles[1541]: ACLs are not supported, ignoring.
Oct 8 19:35:21.785426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:35:21.796305 kernel: loop2: detected capacity change from 0 to 52536
Oct 8 19:35:21.943278 kernel: loop3: detected capacity change from 0 to 114432
Oct 8 19:35:22.062288 kernel: loop4: detected capacity change from 0 to 194512
Oct 8 19:35:22.084272 kernel: loop5: detected capacity change from 0 to 114328
Oct 8 19:35:22.100286 kernel: loop6: detected capacity change from 0 to 52536
Oct 8 19:35:22.125738 kernel: loop7: detected capacity change from 0 to 114432
Oct 8 19:35:22.137517 (sd-merge)[1547]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Oct 8 19:35:22.139303 (sd-merge)[1547]: Merged extensions into '/usr'.
Oct 8 19:35:22.147599 systemd[1]: Reloading requested from client PID 1521 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:35:22.147637 systemd[1]: Reloading...
Oct 8 19:35:22.328265 zram_generator::config[1573]: No configuration found.
Oct 8 19:35:22.665895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:35:22.784634 systemd[1]: Reloading finished in 636 ms.
Oct 8 19:35:22.854391 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:35:22.872656 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:35:22.879756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 8 19:35:22.882800 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:35:22.894782 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:35:22.913561 systemd[1]: Reloading requested from client PID 1624 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:35:22.913584 systemd[1]: Reloading...
Oct 8 19:35:22.988803 systemd-tmpfiles[1625]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:35:22.991868 systemd-tmpfiles[1625]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:35:22.993762 systemd-tmpfiles[1625]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:35:22.995809 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Oct 8 19:35:22.995967 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Oct 8 19:35:23.011891 systemd-tmpfiles[1625]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:35:23.011924 systemd-tmpfiles[1625]: Skipping /boot
Oct 8 19:35:23.020017 systemd-udevd[1627]: Using default interface naming scheme 'v255'.
Oct 8 19:35:23.073416 systemd-tmpfiles[1625]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:35:23.073641 systemd-tmpfiles[1625]: Skipping /boot
Oct 8 19:35:23.132408 zram_generator::config[1652]: No configuration found.
Oct 8 19:35:23.275520 ldconfig[1516]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:35:23.339281 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1664)
Oct 8 19:35:23.381465 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1664)
Oct 8 19:35:23.385550 (udev-worker)[1659]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:35:23.594886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:35:23.663452 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1692)
Oct 8 19:35:23.732883 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:35:23.734153 systemd[1]: Reloading finished in 819 ms.
Oct 8 19:35:23.766749 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:35:23.771323 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:35:23.780146 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 8 19:35:23.833915 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:35:23.840026 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:35:23.847785 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:35:23.855797 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:35:23.863759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:35:23.873767 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:35:23.881121 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:35:23.898509 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:35:23.924892 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:35:23.931732 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:35:23.944888 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:35:23.947462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:35:23.973813 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:35:23.989572 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:35:23.991723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:35:23.991846 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:35:24.008535 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:35:24.012428 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:35:24.014932 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:35:24.033999 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:35:24.037333 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:35:24.095884 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:35:24.117005 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:35:24.138610 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:35:24.142834 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:35:24.150376 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:35:24.155922 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:35:24.156802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:35:24.163854 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:35:24.164185 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:35:24.167805 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:35:24.170355 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:35:24.209803 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:35:24.212334 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:35:24.212608 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:35:24.232906 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:35:24.239898 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:35:24.240955 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:35:24.259450 lvm[1859]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:35:24.262330 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:35:24.309284 augenrules[1868]: No rules
Oct 8 19:35:24.311053 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:35:24.316444 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:35:24.323137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:35:24.337193 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:35:24.342302 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:35:24.346697 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:35:24.375340 lvm[1876]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:35:24.417359 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:35:24.469937 systemd-networkd[1821]: lo: Link UP
Oct 8 19:35:24.469953 systemd-networkd[1821]: lo: Gained carrier
Oct 8 19:35:24.473566 systemd-networkd[1821]: Enumeration completed
Oct 8 19:35:24.473926 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:35:24.486571 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:35:24.489834 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:35:24.489852 systemd-networkd[1821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:35:24.492371 systemd-networkd[1821]: eth0: Link UP
Oct 8 19:35:24.492779 systemd-networkd[1821]: eth0: Gained carrier
Oct 8 19:35:24.492816 systemd-networkd[1821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:35:24.499916 systemd-resolved[1822]: Positive Trust Anchors:
Oct 8 19:35:24.499956 systemd-resolved[1822]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:35:24.500023 systemd-resolved[1822]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 8 19:35:24.500341 systemd-networkd[1821]: eth0: DHCPv4 address 172.31.27.200/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:35:24.509640 systemd-resolved[1822]: Defaulting to hostname 'linux'.
Oct 8 19:35:24.513011 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:35:24.515465 systemd[1]: Reached target network.target - Network.
Oct 8 19:35:24.517197 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:35:24.519797 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:35:24.522024 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:35:24.524461 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:35:24.527388 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:35:24.529724 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:35:24.532258 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:35:24.534707 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:35:24.534773 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:35:24.536582 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:35:24.539781 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:35:24.544780 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:35:24.554871 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:35:24.558007 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:35:24.560413 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:35:24.562426 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:35:24.564422 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:35:24.564482 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:35:24.572557 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:35:24.579564 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:35:24.592601 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:35:24.597964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:35:24.615819 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:35:24.618281 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:35:24.626109 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:35:24.635950 systemd[1]: Started ntpd.service - Network Time Service.
Oct 8 19:35:24.646467 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:35:24.659552 systemd[1]: Starting setup-oem.service - Setup OEM...
Oct 8 19:35:24.668394 jq[1892]: false
Oct 8 19:35:24.676774 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:35:24.693043 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:35:24.708892 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:35:24.711995 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:35:24.712898 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:35:24.718634 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:35:24.727536 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:35:24.735934 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:35:24.736464 dbus-daemon[1891]: [system] SELinux support is enabled
Oct 8 19:35:24.737045 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:35:24.740544 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found loop4
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found loop5
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found loop6
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found loop7
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found nvme0n1
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found nvme0n1p1
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found nvme0n1p2
Oct 8 19:35:24.756920 extend-filesystems[1893]: Found nvme0n1p3
Oct 8 19:35:24.775801 dbus-daemon[1891]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1821 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 8 19:35:24.787601 extend-filesystems[1893]: Found usr
Oct 8 19:35:24.789076 extend-filesystems[1893]: Found nvme0n1p4
Oct 8 19:35:24.789076 extend-filesystems[1893]: Found nvme0n1p6
Oct 8 19:35:24.794772 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 8 19:35:24.795391 extend-filesystems[1893]: Found nvme0n1p7
Oct 8 19:35:24.796824 extend-filesystems[1893]: Found nvme0n1p9
Oct 8 19:35:24.796824 extend-filesystems[1893]: Checking size of /dev/nvme0n1p9
Oct 8 19:35:24.799095 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:35:24.807401 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:35:24.807469 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:35:24.809888 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:35:24.809923 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:35:24.836583 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Oct 8 19:35:24.878204 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:35:24.882768 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:35:24.893326 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:35:24.893326 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch failed with 404: resource not found
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Oct 8 19:35:24.894009 coreos-metadata[1890]: Oct 08 19:35:24.893 INFO Fetch successful
Oct 8 19:35:24.896527 ntpd[1895]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:50:55 UTC 2024 (1): Starting
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:50:55 UTC 2024 (1): Starting
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: ----------------------------------------------------
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: corporation. Support and training for ntp-4 are
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: available at https://www.nwtime.org/support
Oct 8 19:35:24.897919 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: ----------------------------------------------------
Oct 8 19:35:24.896580 ntpd[1895]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:35:24.896601 ntpd[1895]: ----------------------------------------------------
Oct 8 19:35:24.896620 ntpd[1895]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:35:24.896638 ntpd[1895]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:35:24.896657 ntpd[1895]: corporation. Support and training for ntp-4 are
Oct 8 19:35:24.896675 ntpd[1895]: available at https://www.nwtime.org/support
Oct 8 19:35:24.896694 ntpd[1895]: ----------------------------------------------------
Oct 8 19:35:24.920032 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: proto: precision = 0.096 usec (-23)
Oct 8 19:35:24.919721 ntpd[1895]: proto: precision = 0.096 usec (-23)
Oct 8 19:35:24.922635 ntpd[1895]: basedate set to 2024-09-26
Oct 8 19:35:24.926416 jq[1905]: true
Oct 8 19:35:24.926799 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: basedate set to 2024-09-26
Oct 8 19:35:24.926799 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:35:24.922677 ntpd[1895]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:35:24.933733 ntpd[1895]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:35:24.935396 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:35:24.935396 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:35:24.935396 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:35:24.935396 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listen normally on 3 eth0 172.31.27.200:123
Oct 8 19:35:24.933829 ntpd[1895]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:35:24.934097 ntpd[1895]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:35:24.934161 ntpd[1895]: Listen normally on 3 eth0 172.31.27.200:123
Oct 8 19:35:24.935783 ntpd[1895]: Listen normally on 4 lo [::1]:123
Oct 8 19:35:24.937021 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listen normally on 4 lo [::1]:123
Oct 8 19:35:24.937021 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: bind(21) AF_INET6 fe80::44c:e0ff:fe38:a0cd%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:35:24.937021 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: unable to create socket on eth0 (5) for fe80::44c:e0ff:fe38:a0cd%2#123
Oct 8 19:35:24.937021 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: failed to init interface for address fe80::44c:e0ff:fe38:a0cd%2
Oct 8 19:35:24.937021 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: Listening on routing socket on fd #21 for interface updates
Oct 8 19:35:24.935909 ntpd[1895]: bind(21) AF_INET6 fe80::44c:e0ff:fe38:a0cd%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:35:24.935949 ntpd[1895]: unable to create socket on eth0 (5) for fe80::44c:e0ff:fe38:a0cd%2#123
Oct 8 19:35:24.935977 ntpd[1895]: failed to init interface for address fe80::44c:e0ff:fe38:a0cd%2
Oct 8 19:35:24.936038 ntpd[1895]: Listening on routing socket on fd #21 for interface updates
Oct 8 19:35:24.943903 ntpd[1895]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:35:24.944386 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:35:24.944386 ntpd[1895]: 8 Oct 19:35:24 ntpd[1895]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:35:24.943976 ntpd[1895]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:35:24.948592 extend-filesystems[1893]: Resized partition /dev/nvme0n1p9
Oct 8 19:35:24.959309 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:35:24.962938 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:35:24.973165 extend-filesystems[1941]: resize2fs 1.47.1 (20-May-2024)
Oct 8 19:35:24.993266 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct 8 19:35:25.005807 update_engine[1904]: I20241008 19:35:25.005606 1904 main.cc:92] Flatcar Update Engine starting
Oct 8 19:35:25.012508 tar[1907]: linux-arm64/helm
Oct 8 19:35:25.016089 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:35:25.023042 update_engine[1904]: I20241008 19:35:25.018500 1904 update_check_scheduler.cc:74] Next update check in 9m43s
Oct 8 19:35:25.029592 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:35:25.048330 (ntainerd)[1937]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:35:25.081193 systemd[1]: Finished setup-oem.service - Setup OEM.
Oct 8 19:35:25.096368 jq[1934]: true
Oct 8 19:35:25.122302 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct 8 19:35:25.161794 extend-filesystems[1941]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct 8 19:35:25.161794 extend-filesystems[1941]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:35:25.161794 extend-filesystems[1941]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct 8 19:35:25.171393 extend-filesystems[1893]: Resized filesystem in /dev/nvme0n1p9
Oct 8 19:35:25.170781 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:35:25.173234 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:35:25.185785 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 19:35:25.188498 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:35:25.279249 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1692)
Oct 8 19:35:25.336292 bash[1982]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:35:25.352062 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:35:25.373667 systemd[1]: Starting sshkeys.service...
Oct 8 19:35:25.489193 systemd-logind[1903]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 19:35:25.489273 systemd-logind[1903]: Watching system buttons on /dev/input/event1 (Sleep Button)
Oct 8 19:35:25.489366 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 19:35:25.493635 systemd-logind[1903]: New seat seat0.
Oct 8 19:35:25.523651 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 19:35:25.526486 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:35:25.586874 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 8 19:35:25.587138 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Oct 8 19:35:25.594314 dbus-daemon[1891]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1915 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 8 19:35:25.610976 systemd[1]: Starting polkit.service - Authorization Manager...
Oct 8 19:35:25.695337 polkitd[2040]: Started polkitd version 121
Oct 8 19:35:25.730572 coreos-metadata[2025]: Oct 08 19:35:25.730 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:35:25.733771 coreos-metadata[2025]: Oct 08 19:35:25.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Oct 8 19:35:25.734438 coreos-metadata[2025]: Oct 08 19:35:25.734 INFO Fetch successful
Oct 8 19:35:25.734579 coreos-metadata[2025]: Oct 08 19:35:25.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 8 19:35:25.735606 coreos-metadata[2025]: Oct 08 19:35:25.735 INFO Fetch successful
Oct 8 19:35:25.740947 unknown[2025]: wrote ssh authorized keys file for user: core
Oct 8 19:35:25.751014 polkitd[2040]: Loading rules from directory /etc/polkit-1/rules.d
Oct 8 19:35:25.751142 polkitd[2040]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 8 19:35:25.765408 polkitd[2040]: Finished loading, compiling and executing 2 rules
Oct 8 19:35:25.773599 dbus-daemon[1891]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 8 19:35:25.785715 systemd[1]: Started polkit.service - Authorization Manager.
Oct 8 19:35:25.794877 polkitd[2040]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 8 19:35:25.805579 update-ssh-keys[2056]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:35:25.808585 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 19:35:25.836674 systemd[1]: Finished sshkeys.service.
Oct 8 19:35:25.898649 ntpd[1895]: bind(24) AF_INET6 fe80::44c:e0ff:fe38:a0cd%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:35:25.901119 ntpd[1895]: 8 Oct 19:35:25 ntpd[1895]: bind(24) AF_INET6 fe80::44c:e0ff:fe38:a0cd%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:35:25.901119 ntpd[1895]: 8 Oct 19:35:25 ntpd[1895]: unable to create socket on eth0 (6) for fe80::44c:e0ff:fe38:a0cd%2#123
Oct 8 19:35:25.901119 ntpd[1895]: 8 Oct 19:35:25 ntpd[1895]: failed to init interface for address fe80::44c:e0ff:fe38:a0cd%2
Oct 8 19:35:25.898736 ntpd[1895]: unable to create socket on eth0 (6) for fe80::44c:e0ff:fe38:a0cd%2#123
Oct 8 19:35:25.898766 ntpd[1895]: failed to init interface for address fe80::44c:e0ff:fe38:a0cd%2
Oct 8 19:35:25.923053 systemd-hostnamed[1915]: Hostname set to (transient)
Oct 8 19:35:25.923211 systemd-resolved[1822]: System hostname changed to 'ip-172-31-27-200'.
Oct 8 19:35:25.945841 locksmithd[1944]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:35:26.011305 containerd[1937]: time="2024-10-08T19:35:26.009711670Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 8 19:35:26.095459 sshd_keygen[1943]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 8 19:35:26.147556 containerd[1937]: time="2024-10-08T19:35:26.147491062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.151380 containerd[1937]: time="2024-10-08T19:35:26.151254178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:35:26.151571 containerd[1937]: time="2024-10-08T19:35:26.151539226Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:35:26.151708 containerd[1937]: time="2024-10-08T19:35:26.151669558Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:35:26.152150 containerd[1937]: time="2024-10-08T19:35:26.152109310Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:35:26.152354 containerd[1937]: time="2024-10-08T19:35:26.152314642Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.152786 containerd[1937]: time="2024-10-08T19:35:26.152703178Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:35:26.152958 containerd[1937]: time="2024-10-08T19:35:26.152926198Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.156605 containerd[1937]: time="2024-10-08T19:35:26.156540394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:35:26.156769 containerd[1937]: time="2024-10-08T19:35:26.156730342Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.156898 containerd[1937]: time="2024-10-08T19:35:26.156866026Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:35:26.157041 containerd[1937]: time="2024-10-08T19:35:26.157002514Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.158260 containerd[1937]: time="2024-10-08T19:35:26.157429402Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.158260 containerd[1937]: time="2024-10-08T19:35:26.158130526Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:35:26.158760 containerd[1937]: time="2024-10-08T19:35:26.158717170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:35:26.158880 containerd[1937]: time="2024-10-08T19:35:26.158851066Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:35:26.159174 containerd[1937]: time="2024-10-08T19:35:26.159135502Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:35:26.159398 containerd[1937]: time="2024-10-08T19:35:26.159362158Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:35:26.168723 containerd[1937]: time="2024-10-08T19:35:26.168658030Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:35:26.169601 containerd[1937]: time="2024-10-08T19:35:26.169423510Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:35:26.171795 containerd[1937]: time="2024-10-08T19:35:26.169814266Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:35:26.171795 containerd[1937]: time="2024-10-08T19:35:26.170511010Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:35:26.171795 containerd[1937]: time="2024-10-08T19:35:26.170571298Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:35:26.171795 containerd[1937]: time="2024-10-08T19:35:26.170847934Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:35:26.173462 containerd[1937]: time="2024-10-08T19:35:26.173401510Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:35:26.175604 containerd[1937]: time="2024-10-08T19:35:26.175483462Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:35:26.175792 containerd[1937]: time="2024-10-08T19:35:26.175764082Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:35:26.176302 containerd[1937]: time="2024-10-08T19:35:26.175918810Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:35:26.177478 containerd[1937]: time="2024-10-08T19:35:26.177326530Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.178417 containerd[1937]: time="2024-10-08T19:35:26.177937810Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.178417 containerd[1937]: time="2024-10-08T19:35:26.178012126Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.178254 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 8 19:35:26.182531 containerd[1937]: time="2024-10-08T19:35:26.178056226Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.182774 containerd[1937]: time="2024-10-08T19:35:26.182681410Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.184273 containerd[1937]: time="2024-10-08T19:35:26.183294622Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.184593 containerd[1937]: time="2024-10-08T19:35:26.183341290Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.184593 containerd[1937]: time="2024-10-08T19:35:26.184528426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:35:26.185399 containerd[1937]: time="2024-10-08T19:35:26.184821682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.185399 containerd[1937]: time="2024-10-08T19:35:26.185332114Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.186086 containerd[1937]: time="2024-10-08T19:35:26.185897182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.186468 containerd[1937]: time="2024-10-08T19:35:26.186309250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.186468 containerd[1937]: time="2024-10-08T19:35:26.186407470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187042 containerd[1937]: time="2024-10-08T19:35:26.186763246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187042 containerd[1937]: time="2024-10-08T19:35:26.186967570Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187257 containerd[1937]: time="2024-10-08T19:35:26.187008730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187257 containerd[1937]: time="2024-10-08T19:35:26.187189246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187539 containerd[1937]: time="2024-10-08T19:35:26.187398574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187539 containerd[1937]: time="2024-10-08T19:35:26.187460974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187539 containerd[1937]: time="2024-10-08T19:35:26.187495786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187901 containerd[1937]: time="2024-10-08T19:35:26.187736662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.187901 containerd[1937]: time="2024-10-08T19:35:26.187825462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:35:26.188094 containerd[1937]: time="2024-10-08T19:35:26.187879858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.188094 containerd[1937]: time="2024-10-08T19:35:26.188049478Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.188362 containerd[1937]: time="2024-10-08T19:35:26.188206522Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:35:26.188902 containerd[1937]: time="2024-10-08T19:35:26.188661610Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 8 19:35:26.189144 containerd[1937]: time="2024-10-08T19:35:26.189101230Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189406834Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189474802Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189512074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189550054Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189575314Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:35:26.189818 containerd[1937]: time="2024-10-08T19:35:26.189606418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:35:26.190628 containerd[1937]: time="2024-10-08T19:35:26.190508206Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:35:26.191572 containerd[1937]: time="2024-10-08T19:35:26.190935682Z" level=info msg="Connect containerd service"
Oct 8 19:35:26.191572 containerd[1937]: time="2024-10-08T19:35:26.191079430Z" level=info msg="using legacy CRI server"
Oct 8 19:35:26.191572 containerd[1937]: time="2024-10-08T19:35:26.191098798Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:35:26.192623 containerd[1937]: time="2024-10-08T19:35:26.192370414Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 8 19:35:26.192484 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 8 19:35:26.204579 containerd[1937]: time="2024-10-08T19:35:26.204520606Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:35:26.205413 systemd[1]: Started sshd@0-172.31.27.200:22-139.178.68.195:42610.service - OpenSSH per-connection server daemon (139.178.68.195:42610).
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211583267Z" level=info msg="Start subscribing containerd event"
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211671035Z" level=info msg="Start recovering state"
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211803743Z" level=info msg="Start event monitor"
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211828871Z" level=info msg="Start snapshots syncer"
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211853135Z" level=info msg="Start cni network conf syncer for default"
Oct 8 19:35:26.213349 containerd[1937]: time="2024-10-08T19:35:26.211873907Z" level=info msg="Start streaming server"
Oct 8 19:35:26.215274 containerd[1937]: time="2024-10-08T19:35:26.214343663Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 8 19:35:26.215584 containerd[1937]: time="2024-10-08T19:35:26.215540915Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 8 19:35:26.215890 systemd[1]: Started containerd.service - containerd container runtime.
Oct 8 19:35:26.218504 containerd[1937]: time="2024-10-08T19:35:26.218112707Z" level=info msg="containerd successfully booted in 0.211073s"
Oct 8 19:35:26.239347 systemd[1]: issuegen.service: Deactivated successfully.
Oct 8 19:35:26.239933 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 8 19:35:26.254283 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 8 19:35:26.302077 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 8 19:35:26.317576 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 8 19:35:26.324565 systemd-networkd[1821]: eth0: Gained IPv6LL
Oct 8 19:35:26.332824 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 8 19:35:26.337747 systemd[1]: Reached target getty.target - Login Prompts.
Oct 8 19:35:26.342721 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:35:26.348162 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:35:26.361815 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Oct 8 19:35:26.376750 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:35:26.384773 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:35:26.491547 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:35:26.502312 amazon-ssm-agent[2115]: Initializing new seelog logger
Oct 8 19:35:26.502312 amazon-ssm-agent[2115]: New Seelog Logger Creation Complete
Oct 8 19:35:26.502312 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.502312 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.502973 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 processing appconfig overrides
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 processing appconfig overrides
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 processing appconfig overrides
Oct 8 19:35:26.505963 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO Proxy environment variables:
Oct 8 19:35:26.510896 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.510896 amazon-ssm-agent[2115]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:35:26.511165 amazon-ssm-agent[2115]: 2024/10/08 19:35:26 processing appconfig overrides
Oct 8 19:35:26.547657 sshd[2105]: Accepted publickey for core from 139.178.68.195 port 42610 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:26.557207 sshd[2105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:26.590549 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 8 19:35:26.602961 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 8 19:35:26.605927 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO https_proxy:
Oct 8 19:35:26.615628 systemd-logind[1903]: New session 1 of user core.
Oct 8 19:35:26.662100 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 8 19:35:26.684253 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 8 19:35:26.704871 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO http_proxy:
Oct 8 19:35:26.716414 (systemd)[2133]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:35:26.806278 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO no_proxy:
Oct 8 19:35:26.905641 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO Checking if agent identity type OnPrem can be assumed
Oct 8 19:35:27.005430 systemd[2133]: Queued start job for default target default.target.
Oct 8 19:35:27.014835 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO Checking if agent identity type EC2 can be assumed
Oct 8 19:35:27.016643 systemd[2133]: Created slice app.slice - User Application Slice.
Oct 8 19:35:27.016705 systemd[2133]: Reached target paths.target - Paths.
Oct 8 19:35:27.016737 systemd[2133]: Reached target timers.target - Timers.
Oct 8 19:35:27.025724 systemd[2133]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 8 19:35:27.046141 systemd[2133]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 8 19:35:27.046628 systemd[2133]: Reached target sockets.target - Sockets.
Oct 8 19:35:27.046671 systemd[2133]: Reached target basic.target - Basic System.
Oct 8 19:35:27.046760 systemd[2133]: Reached target default.target - Main User Target.
Oct 8 19:35:27.046826 systemd[2133]: Startup finished in 305ms.
Oct 8 19:35:27.047513 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 8 19:35:27.056733 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 8 19:35:27.082835 tar[1907]: linux-arm64/LICENSE
Oct 8 19:35:27.082835 tar[1907]: linux-arm64/README.md
Oct 8 19:35:27.105838 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO Agent will take identity from EC2
Oct 8 19:35:27.128071 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 8 19:35:27.204688 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:35:27.237061 systemd[1]: Started sshd@1-172.31.27.200:22-139.178.68.195:42618.service - OpenSSH per-connection server daemon (139.178.68.195:42618).
Oct 8 19:35:27.304183 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:35:27.372614 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] Starting Core Agent
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [Registrar] Starting registrar module
Oct 8 19:35:27.372733 amazon-ssm-agent[2115]: 2024-10-08 19:35:26 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Oct 8 19:35:27.374021 amazon-ssm-agent[2115]: 2024-10-08 19:35:27 INFO [EC2Identity] EC2 registration was successful.
Oct 8 19:35:27.374021 amazon-ssm-agent[2115]: 2024-10-08 19:35:27 INFO [CredentialRefresher] credentialRefresher has started
Oct 8 19:35:27.374021 amazon-ssm-agent[2115]: 2024-10-08 19:35:27 INFO [CredentialRefresher] Starting credentials refresher loop
Oct 8 19:35:27.374021 amazon-ssm-agent[2115]: 2024-10-08 19:35:27 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Oct 8 19:35:27.403523 amazon-ssm-agent[2115]: 2024-10-08 19:35:27 INFO [CredentialRefresher] Next credential rotation will be in 30.808322015033333 minutes
Oct 8 19:35:27.454823 sshd[2148]: Accepted publickey for core from 139.178.68.195 port 42618 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:27.457885 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:27.470258 systemd-logind[1903]: New session 2 of user core.
Oct 8 19:35:27.478544 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 8 19:35:27.612861 sshd[2148]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:27.621515 systemd[1]: sshd@1-172.31.27.200:22-139.178.68.195:42618.service: Deactivated successfully.
Oct 8 19:35:27.627376 systemd[1]: session-2.scope: Deactivated successfully.
Oct 8 19:35:27.630829 systemd-logind[1903]: Session 2 logged out. Waiting for processes to exit.
Oct 8 19:35:27.647285 systemd-logind[1903]: Removed session 2.
Oct 8 19:35:27.651841 systemd[1]: Started sshd@2-172.31.27.200:22-139.178.68.195:42630.service - OpenSSH per-connection server daemon (139.178.68.195:42630).
Oct 8 19:35:27.813630 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:35:27.817202 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 8 19:35:27.823360 systemd[1]: Startup finished in 1.194s (kernel) + 9.581s (initrd) + 8.481s (userspace) = 19.258s.
Oct 8 19:35:27.830917 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:35:27.856378 sshd[2156]: Accepted publickey for core from 139.178.68.195 port 42630 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:27.859504 sshd[2156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:27.874842 systemd-logind[1903]: New session 3 of user core.
Oct 8 19:35:27.882529 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 8 19:35:28.018817 sshd[2156]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:28.026892 systemd[1]: sshd@2-172.31.27.200:22-139.178.68.195:42630.service: Deactivated successfully.
Oct 8 19:35:28.030891 systemd[1]: session-3.scope: Deactivated successfully.
Oct 8 19:35:28.034802 systemd-logind[1903]: Session 3 logged out. Waiting for processes to exit.
Oct 8 19:35:28.038298 systemd-logind[1903]: Removed session 3.
Oct 8 19:35:28.405264 amazon-ssm-agent[2115]: 2024-10-08 19:35:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Oct 8 19:35:28.506718 amazon-ssm-agent[2115]: 2024-10-08 19:35:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2179) started
Oct 8 19:35:28.607195 amazon-ssm-agent[2115]: 2024-10-08 19:35:28 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Oct 8 19:35:28.652556 kubelet[2163]: E1008 19:35:28.652427 2163 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:35:28.657728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:35:28.658080 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:35:28.658820 systemd[1]: kubelet.service: Consumed 1.342s CPU time.
Oct 8 19:35:28.897265 ntpd[1895]: Listen normally on 7 eth0 [fe80::44c:e0ff:fe38:a0cd%2]:123
Oct 8 19:35:28.898402 ntpd[1895]: 8 Oct 19:35:28 ntpd[1895]: Listen normally on 7 eth0 [fe80::44c:e0ff:fe38:a0cd%2]:123
Oct 8 19:35:31.499559 systemd-resolved[1822]: Clock change detected. Flushing caches.
Oct 8 19:35:37.654588 systemd[1]: Started sshd@3-172.31.27.200:22-139.178.68.195:35792.service - OpenSSH per-connection server daemon (139.178.68.195:35792).
Oct 8 19:35:37.834620 sshd[2194]: Accepted publickey for core from 139.178.68.195 port 35792 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:37.837330 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:37.846582 systemd-logind[1903]: New session 4 of user core.
Oct 8 19:35:37.853472 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 8 19:35:37.980595 sshd[2194]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:37.985948 systemd-logind[1903]: Session 4 logged out. Waiting for processes to exit.
Oct 8 19:35:37.986906 systemd[1]: sshd@3-172.31.27.200:22-139.178.68.195:35792.service: Deactivated successfully.
Oct 8 19:35:37.990042 systemd[1]: session-4.scope: Deactivated successfully.
Oct 8 19:35:37.994507 systemd-logind[1903]: Removed session 4.
Oct 8 19:35:38.021012 systemd[1]: Started sshd@4-172.31.27.200:22-139.178.68.195:35806.service - OpenSSH per-connection server daemon (139.178.68.195:35806).
Oct 8 19:35:38.202905 sshd[2201]: Accepted publickey for core from 139.178.68.195 port 35806 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:38.205551 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:38.214026 systemd-logind[1903]: New session 5 of user core.
Oct 8 19:35:38.224505 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 8 19:35:38.344335 sshd[2201]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:38.351370 systemd[1]: sshd@4-172.31.27.200:22-139.178.68.195:35806.service: Deactivated successfully.
Oct 8 19:35:38.355616 systemd[1]: session-5.scope: Deactivated successfully.
Oct 8 19:35:38.357576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:35:38.358571 systemd-logind[1903]: Session 5 logged out. Waiting for processes to exit.
Oct 8 19:35:38.367633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:35:38.387827 systemd[1]: Started sshd@5-172.31.27.200:22-139.178.68.195:35812.service - OpenSSH per-connection server daemon (139.178.68.195:35812).
Oct 8 19:35:38.392037 systemd-logind[1903]: Removed session 5.
Oct 8 19:35:38.568463 sshd[2211]: Accepted publickey for core from 139.178.68.195 port 35812 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:38.571314 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:38.580294 systemd-logind[1903]: New session 6 of user core.
Oct 8 19:35:38.590507 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 8 19:35:38.722617 sshd[2211]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:38.728948 systemd[1]: sshd@5-172.31.27.200:22-139.178.68.195:35812.service: Deactivated successfully.
Oct 8 19:35:38.733790 systemd[1]: session-6.scope: Deactivated successfully.
Oct 8 19:35:38.736649 systemd-logind[1903]: Session 6 logged out. Waiting for processes to exit.
Oct 8 19:35:38.738924 systemd-logind[1903]: Removed session 6.
Oct 8 19:35:38.765733 systemd[1]: Started sshd@6-172.31.27.200:22-139.178.68.195:35824.service - OpenSSH per-connection server daemon (139.178.68.195:35824).
Oct 8 19:35:38.953037 sshd[2218]: Accepted publickey for core from 139.178.68.195 port 35824 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:38.955659 sshd[2218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:38.963017 systemd-logind[1903]: New session 7 of user core.
Oct 8 19:35:38.971472 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 8 19:35:39.123726 sudo[2221]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 8 19:35:39.124546 sudo[2221]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:35:39.144455 sudo[2221]: pam_unix(sudo:session): session closed for user root
Oct 8 19:35:39.169629 sshd[2218]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:39.178076 systemd[1]: sshd@6-172.31.27.200:22-139.178.68.195:35824.service: Deactivated successfully.
Oct 8 19:35:39.180987 systemd[1]: session-7.scope: Deactivated successfully.
Oct 8 19:35:39.182703 systemd-logind[1903]: Session 7 logged out. Waiting for processes to exit.
Oct 8 19:35:39.186068 systemd-logind[1903]: Removed session 7.
Oct 8 19:35:39.211830 systemd[1]: Started sshd@7-172.31.27.200:22-139.178.68.195:35838.service - OpenSSH per-connection server daemon (139.178.68.195:35838).
Oct 8 19:35:39.244809 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:35:39.257871 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:35:39.366415 kubelet[2232]: E1008 19:35:39.366306 2232 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:35:39.375411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:35:39.375764 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:35:39.405393 sshd[2228]: Accepted publickey for core from 139.178.68.195 port 35838 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:39.408089 sshd[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:39.416967 systemd-logind[1903]: New session 8 of user core.
Oct 8 19:35:39.424596 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 8 19:35:39.530252 sudo[2243]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 8 19:35:39.531539 sudo[2243]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:35:39.539896 sudo[2243]: pam_unix(sudo:session): session closed for user root
Oct 8 19:35:39.552652 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 8 19:35:39.553341 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:35:39.590037 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 8 19:35:39.594251 auditctl[2246]: No rules
Oct 8 19:35:39.594987 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 8 19:35:39.595663 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 8 19:35:39.607016 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:35:39.659875 augenrules[2264]: No rules
Oct 8 19:35:39.662368 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:35:39.664785 sudo[2242]: pam_unix(sudo:session): session closed for user root
Oct 8 19:35:39.689608 sshd[2228]: pam_unix(sshd:session): session closed for user core
Oct 8 19:35:39.697432 systemd[1]: sshd@7-172.31.27.200:22-139.178.68.195:35838.service: Deactivated successfully.
Oct 8 19:35:39.701880 systemd[1]: session-8.scope: Deactivated successfully.
Oct 8 19:35:39.703580 systemd-logind[1903]: Session 8 logged out. Waiting for processes to exit.
Oct 8 19:35:39.705898 systemd-logind[1903]: Removed session 8.
Oct 8 19:35:39.741722 systemd[1]: Started sshd@8-172.31.27.200:22-139.178.68.195:35852.service - OpenSSH per-connection server daemon (139.178.68.195:35852).
Oct 8 19:35:39.913740 sshd[2272]: Accepted publickey for core from 139.178.68.195 port 35852 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q
Oct 8 19:35:39.916840 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 8 19:35:39.924311 systemd-logind[1903]: New session 9 of user core.
Oct 8 19:35:39.937780 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 8 19:35:40.042598 sudo[2275]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 8 19:35:40.043211 sudo[2275]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 8 19:35:40.712674 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 8 19:35:40.712899 (dockerd)[2291]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 8 19:35:41.175295 dockerd[2291]: time="2024-10-08T19:35:41.174179400Z" level=info msg="Starting up"
Oct 8 19:35:41.348879 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3293321605-merged.mount: Deactivated successfully.
Oct 8 19:35:41.399139 dockerd[2291]: time="2024-10-08T19:35:41.399071737Z" level=info msg="Loading containers: start."
Oct 8 19:35:41.609346 kernel: Initializing XFRM netlink socket
Oct 8 19:35:41.670681 (udev-worker)[2314]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:35:41.757018 systemd-networkd[1821]: docker0: Link UP
Oct 8 19:35:41.786665 dockerd[2291]: time="2024-10-08T19:35:41.786590451Z" level=info msg="Loading containers: done."
Oct 8 19:35:41.815479 dockerd[2291]: time="2024-10-08T19:35:41.815407695Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 8 19:35:41.815725 dockerd[2291]: time="2024-10-08T19:35:41.815572707Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 8 19:35:41.815835 dockerd[2291]: time="2024-10-08T19:35:41.815776599Z" level=info msg="Daemon has completed initialization"
Oct 8 19:35:41.890015 dockerd[2291]: time="2024-10-08T19:35:41.889362363Z" level=info msg="API listen on /run/docker.sock"
Oct 8 19:35:41.890873 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 8 19:35:42.344647 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3936398431-merged.mount: Deactivated successfully.
Oct 8 19:35:43.347142 containerd[1937]: time="2024-10-08T19:35:43.347074490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\""
Oct 8 19:35:43.994606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2704799886.mount: Deactivated successfully.
Oct 8 19:35:45.617572 containerd[1937]: time="2024-10-08T19:35:45.617321850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:45.619536 containerd[1937]: time="2024-10-08T19:35:45.619453314Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.9: active requests=0, bytes read=32286058"
Oct 8 19:35:45.621810 containerd[1937]: time="2024-10-08T19:35:45.621758718Z" level=info msg="ImageCreate event name:\"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:45.630119 containerd[1937]: time="2024-10-08T19:35:45.629998098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:45.632938 containerd[1937]: time="2024-10-08T19:35:45.632409906Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.9\" with image id \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88538e7fdf73583c8670540eec5b3620af75c9ec200434a5815ee7fba5021f3\", size \"32282858\" in 2.285264196s"
Oct 8 19:35:45.632938 containerd[1937]: time="2024-10-08T19:35:45.632474898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.9\" returns image reference \"sha256:0ca432c382d835cda3e9fb9d7f97eeb68f8c26290c208142886893943f157b80\""
Oct 8 19:35:45.673584 containerd[1937]: time="2024-10-08T19:35:45.673493862Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\""
Oct 8 19:35:47.570162 containerd[1937]: time="2024-10-08T19:35:47.569963455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:47.572408 containerd[1937]: time="2024-10-08T19:35:47.572286571Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.9: active requests=0, bytes read=29374204"
Oct 8 19:35:47.573868 containerd[1937]: time="2024-10-08T19:35:47.573778267Z" level=info msg="ImageCreate event name:\"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:47.581395 containerd[1937]: time="2024-10-08T19:35:47.581288275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:47.584196 containerd[1937]: time="2024-10-08T19:35:47.583676227Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.9\" with image id \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f2f18973ccb6996687d10ba5bd1b8f303e3dd2fed80f831a44d2ac8191e5bb9b\", size \"30862018\" in 1.910119197s"
Oct 8 19:35:47.584196 containerd[1937]: time="2024-10-08T19:35:47.583743679Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.9\" returns image reference \"sha256:3e4860b5f4cadd23ec0c1f66f8cd323718a56721b4eaffc560dd5bbdae0a3373\""
Oct 8 19:35:47.627937 containerd[1937]: time="2024-10-08T19:35:47.627586700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\""
Oct 8 19:35:48.855286 containerd[1937]: time="2024-10-08T19:35:48.855028114Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:48.857315 containerd[1937]: time="2024-10-08T19:35:48.857244454Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.9: active requests=0, bytes read=15751217"
Oct 8 19:35:48.859143 containerd[1937]: time="2024-10-08T19:35:48.859071970Z" level=info msg="ImageCreate event name:\"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:48.864751 containerd[1937]: time="2024-10-08T19:35:48.864698974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:48.867249 containerd[1937]: time="2024-10-08T19:35:48.867066274Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.9\" with image id \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c164076eebaefdaebad46a5ccd550e9f38c63588c02d35163c6a09e164ab8a8\", size \"17239049\" in 1.239422586s"
Oct 8 19:35:48.867249 containerd[1937]: time="2024-10-08T19:35:48.867122818Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.9\" returns image reference \"sha256:8282449c9a5dac69ec2afe9dc048807bbe6e8bae88040c889d1e219eca6f8a7d\""
Oct 8 19:35:48.904584 containerd[1937]: time="2024-10-08T19:35:48.904299226Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\""
Oct 8 19:35:49.538103 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 8 19:35:49.549748 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:35:49.957481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:35:49.966934 (kubelet)[2524]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:35:50.099715 kubelet[2524]: E1008 19:35:50.099265 2524 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:35:50.105961 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:35:50.106700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:35:50.420800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount115028133.mount: Deactivated successfully.
Oct 8 19:35:51.038763 containerd[1937]: time="2024-10-08T19:35:51.038699061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:51.041473 containerd[1937]: time="2024-10-08T19:35:51.041423397Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.9: active requests=0, bytes read=25254038"
Oct 8 19:35:51.051368 containerd[1937]: time="2024-10-08T19:35:51.051300825Z" level=info msg="ImageCreate event name:\"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:51.065424 containerd[1937]: time="2024-10-08T19:35:51.065369217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:51.067085 containerd[1937]: time="2024-10-08T19:35:51.066488385Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.9\" with image id \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\", repo tag \"registry.k8s.io/kube-proxy:v1.29.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:124040dbe6b5294352355f5d34c692ecbc940cdc57a8fd06d0f38f76b6138906\", size \"25253057\" in 2.162126903s"
Oct 8 19:35:51.067085 containerd[1937]: time="2024-10-08T19:35:51.066546441Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.9\" returns image reference \"sha256:0e8a375be0a8ed2d79dab5b4513dc4639ed6e7d3da10a53172b619355f666d4f\""
Oct 8 19:35:51.111665 containerd[1937]: time="2024-10-08T19:35:51.111396489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Oct 8 19:35:51.706266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256098409.mount: Deactivated successfully.
Oct 8 19:35:52.959279 containerd[1937]: time="2024-10-08T19:35:52.959070914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:52.961294 containerd[1937]: time="2024-10-08T19:35:52.961132778Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Oct 8 19:35:52.964257 containerd[1937]: time="2024-10-08T19:35:52.962772302Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:52.974938 containerd[1937]: time="2024-10-08T19:35:52.974864498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:52.976841 containerd[1937]: time="2024-10-08T19:35:52.976776554Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.865315565s"
Oct 8 19:35:52.977021 containerd[1937]: time="2024-10-08T19:35:52.976990226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Oct 8 19:35:53.025538 containerd[1937]: time="2024-10-08T19:36:53.025482250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Oct 8 19:35:53.512014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630638802.mount: Deactivated successfully.
Oct 8 19:35:53.522846 containerd[1937]: time="2024-10-08T19:35:53.522426709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:53.524010 containerd[1937]: time="2024-10-08T19:35:53.523938649Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Oct 8 19:35:53.525419 containerd[1937]: time="2024-10-08T19:35:53.525322453Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:53.531931 containerd[1937]: time="2024-10-08T19:35:53.529956829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:53.531931 containerd[1937]: time="2024-10-08T19:35:53.531623341Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 506.082627ms"
Oct 8 19:35:53.531931 containerd[1937]: time="2024-10-08T19:35:53.531668293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Oct 8 19:35:53.571559 containerd[1937]: time="2024-10-08T19:35:53.571505569Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Oct 8 19:35:54.176921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528638115.mount: Deactivated successfully.
Oct 8 19:35:55.561829 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 8 19:35:59.070828 containerd[1937]: time="2024-10-08T19:35:59.070501732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:59.073113 containerd[1937]: time="2024-10-08T19:35:59.073051804Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786"
Oct 8 19:35:59.074868 containerd[1937]: time="2024-10-08T19:35:59.074777584Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:59.080962 containerd[1937]: time="2024-10-08T19:35:59.080908636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:35:59.083758 containerd[1937]: time="2024-10-08T19:35:59.083565389Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.511773332s"
Oct 8 19:35:59.083758 containerd[1937]: time="2024-10-08T19:35:59.083622605Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\""
Oct 8 19:36:00.143033 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 8 19:36:00.153788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:36:01.125196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:01.139609 (kubelet)[2715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 8 19:36:01.241046 kubelet[2715]: E1008 19:36:01.240975 2715 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 8 19:36:01.246824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 8 19:36:01.247246 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 8 19:36:06.663121 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:06.673282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:36:06.722579 systemd[1]: Reloading requested from client PID 2729 ('systemctl') (unit session-9.scope)...
Oct 8 19:36:06.722605 systemd[1]: Reloading...
Oct 8 19:36:06.912317 zram_generator::config[2775]: No configuration found.
Oct 8 19:36:07.146259 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:36:07.321195 systemd[1]: Reloading finished in 597 ms.
Oct 8 19:36:07.422160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 8 19:36:07.422477 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 8 19:36:07.424357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:07.433815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:36:08.918645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 8 19:36:08.921756 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 8 19:36:09.014528 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:36:09.014528 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:36:09.014528 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:36:09.017182 kubelet[2829]: I1008 19:36:09.017092 2829 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:36:10.066813 kubelet[2829]: I1008 19:36:10.066744 2829 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:36:10.066813 kubelet[2829]: I1008 19:36:10.066804 2829 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:36:10.067861 kubelet[2829]: I1008 19:36:10.067244 2829 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:36:10.103930 kubelet[2829]: E1008 19:36:10.103855 2829 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.200:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.104496 kubelet[2829]: I1008 19:36:10.104278 2829 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:36:10.126278 kubelet[2829]: I1008 19:36:10.125875 2829 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:36:10.126428 kubelet[2829]: I1008 19:36:10.126352 2829 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:36:10.126686 kubelet[2829]: I1008 19:36:10.126651 2829 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:36:10.126875 kubelet[2829]: I1008 19:36:10.126697 2829 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:36:10.126875 kubelet[2829]: I1008 19:36:10.126719 2829 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:36:10.129408 kubelet[2829]: I1008 19:36:10.129352 2829 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:10.134945 kubelet[2829]: I1008 19:36:10.134850 2829 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:36:10.134945 kubelet[2829]: I1008 19:36:10.134936 2829 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:36:10.136380 kubelet[2829]: I1008 19:36:10.134997 2829 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:36:10.136380 kubelet[2829]: I1008 19:36:10.135029 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:36:10.140454 kubelet[2829]: W1008 19:36:10.140384 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.141682 kubelet[2829]: E1008 19:36:10.140668 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.141682 kubelet[2829]: I1008 19:36:10.140834 2829 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:36:10.141682 kubelet[2829]: I1008 19:36:10.141409 2829 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:36:10.143943 kubelet[2829]: W1008 19:36:10.143901 2829 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Oct 8 19:36:10.145317 kubelet[2829]: I1008 19:36:10.145281 2829 server.go:1256] "Started kubelet"
Oct 8 19:36:10.149428 kubelet[2829]: W1008 19:36:10.149344 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-200&limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.149584 kubelet[2829]: E1008 19:36:10.149438 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-200&limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.149714 kubelet[2829]: I1008 19:36:10.149678 2829 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:36:10.151124 kubelet[2829]: I1008 19:36:10.151062 2829 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:36:10.153273 kubelet[2829]: I1008 19:36:10.152829 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:36:10.153534 kubelet[2829]: I1008 19:36:10.153504 2829 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:36:10.159255 kubelet[2829]: E1008 19:36:10.159189 2829 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.200:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.200:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-200.17fc9159658bd427 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-200,UID:ip-172-31-27-200,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-200,},FirstTimestamp:2024-10-08 19:36:10.145207335 +0000 UTC m=+1.215154915,LastTimestamp:2024-10-08 19:36:10.145207335 +0000 UTC m=+1.215154915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-200,}"
Oct 8 19:36:10.160195 kubelet[2829]: I1008 19:36:10.159949 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:36:10.168166 kubelet[2829]: E1008 19:36:10.168106 2829 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-27-200\" not found"
Oct 8 19:36:10.168166 kubelet[2829]: I1008 19:36:10.168176 2829 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:36:10.172250 kubelet[2829]: I1008 19:36:10.169408 2829 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:36:10.172250 kubelet[2829]: I1008 19:36:10.170841 2829 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:36:10.176101 kubelet[2829]: W1008 19:36:10.174313 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.176101 kubelet[2829]: E1008 19:36:10.174413 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.176101 kubelet[2829]: E1008 19:36:10.174571 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": dial tcp 172.31.27.200:6443: connect: connection refused" interval="200ms"
Oct 8 19:36:10.179538 kubelet[2829]: E1008 19:36:10.179498 2829 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:36:10.181566 update_engine[1904]: I20241008 19:36:10.181412 1904 update_attempter.cc:509] Updating boot flags...
Oct 8 19:36:10.183455 kubelet[2829]: I1008 19:36:10.183305 2829 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:36:10.189244 kubelet[2829]: I1008 19:36:10.189133 2829 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:36:10.189244 kubelet[2829]: I1008 19:36:10.189207 2829 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:36:10.246451 kubelet[2829]: I1008 19:36:10.245642 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:36:10.255943 kubelet[2829]: I1008 19:36:10.255892 2829 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:36:10.256063 kubelet[2829]: I1008 19:36:10.255956 2829 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:36:10.256063 kubelet[2829]: I1008 19:36:10.255997 2829 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:36:10.256191 kubelet[2829]: E1008 19:36:10.256078 2829 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:36:10.262611 kubelet[2829]: W1008 19:36:10.262545 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.262611 kubelet[2829]: E1008 19:36:10.262616 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:10.267399 kubelet[2829]: I1008 19:36:10.267359 2829 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:36:10.267577 kubelet[2829]: I1008 19:36:10.267552 2829 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:36:10.267720 kubelet[2829]: I1008 19:36:10.267700 2829 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:10.270916 kubelet[2829]: I1008 19:36:10.270868 2829 policy_none.go:49] "None policy: Start"
Oct 8 19:36:10.272464 kubelet[2829]: I1008 19:36:10.272410 2829 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:36:10.274828 kubelet[2829]: I1008 19:36:10.274347 2829 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:36:10.277517 kubelet[2829]: I1008 19:36:10.277448 2829 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200"
Oct 8 19:36:10.279135 kubelet[2829]: E1008 19:36:10.279080 2829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.200:6443/api/v1/nodes\": dial tcp 172.31.27.200:6443: connect: connection refused" node="ip-172-31-27-200"
Oct 8 19:36:10.295153 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Oct 8 19:36:10.343370 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Oct 8 19:36:10.356576 kubelet[2829]: E1008 19:36:10.356518 2829 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:36:10.358676 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2874)
Oct 8 19:36:10.367638 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Oct 8 19:36:10.374394 kubelet[2829]: I1008 19:36:10.374293 2829 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:36:10.375005 kubelet[2829]: I1008 19:36:10.374965 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:36:10.376410 kubelet[2829]: E1008 19:36:10.376352 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": dial tcp 172.31.27.200:6443: connect: connection refused" interval="400ms"
Oct 8 19:36:10.384375 kubelet[2829]: E1008 19:36:10.384329 2829 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-200\" not found"
Oct 8 19:36:10.482397 kubelet[2829]: I1008 19:36:10.482325 2829 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200"
Oct 8 19:36:10.482859 kubelet[2829]: E1008 19:36:10.482821 2829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.200:6443/api/v1/nodes\": dial tcp 172.31.27.200:6443: connect: connection refused" node="ip-172-31-27-200"
Oct 8 19:36:10.558365 kubelet[2829]: I1008 19:36:10.557705 2829 topology_manager.go:215] "Topology Admit Handler" podUID="9ec1e59c8043172843f84815e1db1835" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:10.562968 kubelet[2829]: I1008 19:36:10.562930 2829 topology_manager.go:215] "Topology Admit Handler" podUID="e5e9b96c24e78ddda5069c32d1c8d62f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.574905 kubelet[2829]: I1008 19:36:10.572376 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.574905 kubelet[2829]: I1008 19:36:10.572459 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.574905 kubelet[2829]: I1008 19:36:10.572513 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.574905 kubelet[2829]: I1008 19:36:10.572558 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.574905 kubelet[2829]: I1008 19:36:10.572614 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:10.575275 kubelet[2829]: I1008 19:36:10.572659 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-ca-certs\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:10.575275 kubelet[2829]: I1008 19:36:10.572703 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:10.575275 kubelet[2829]: I1008 19:36:10.572748 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:10.576448 kubelet[2829]: I1008 19:36:10.576352 2829 topology_manager.go:215] "Topology Admit Handler" podUID="5e382286ab0131870ac56d2323c497d5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-200"
Oct 8 19:36:10.630887 systemd[1]: Created slice kubepods-burstable-pod9ec1e59c8043172843f84815e1db1835.slice - libcontainer container kubepods-burstable-pod9ec1e59c8043172843f84815e1db1835.slice.
Oct 8 19:36:10.653923 systemd[1]: Created slice kubepods-burstable-pode5e9b96c24e78ddda5069c32d1c8d62f.slice - libcontainer container kubepods-burstable-pode5e9b96c24e78ddda5069c32d1c8d62f.slice.
Oct 8 19:36:10.677388 kubelet[2829]: I1008 19:36:10.673539 2829 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e382286ab0131870ac56d2323c497d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-200\" (UID: \"5e382286ab0131870ac56d2323c497d5\") " pod="kube-system/kube-scheduler-ip-172-31-27-200"
Oct 8 19:36:10.678095 systemd[1]: Created slice kubepods-burstable-pod5e382286ab0131870ac56d2323c497d5.slice - libcontainer container kubepods-burstable-pod5e382286ab0131870ac56d2323c497d5.slice.
Oct 8 19:36:10.698369 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2878)
Oct 8 19:36:10.778016 kubelet[2829]: E1008 19:36:10.777976 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": dial tcp 172.31.27.200:6443: connect: connection refused" interval="800ms"
Oct 8 19:36:10.894579 kubelet[2829]: I1008 19:36:10.894541 2829 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200"
Oct 8 19:36:10.900358 kubelet[2829]: E1008 19:36:10.900310 2829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.200:6443/api/v1/nodes\": dial tcp 172.31.27.200:6443: connect: connection refused" node="ip-172-31-27-200"
Oct 8 19:36:10.950561 containerd[1937]: time="2024-10-08T19:36:10.950502667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-200,Uid:9ec1e59c8043172843f84815e1db1835,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:10.966591 containerd[1937]: time="2024-10-08T19:36:10.966435284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-200,Uid:e5e9b96c24e78ddda5069c32d1c8d62f,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:10.999984 containerd[1937]: time="2024-10-08T19:36:10.999931688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-200,Uid:5e382286ab0131870ac56d2323c497d5,Namespace:kube-system,Attempt:0,}"
Oct 8 19:36:11.390442 kubelet[2829]: W1008 19:36:11.390159 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.27.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.390442 kubelet[2829]: E1008 19:36:11.390281 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.27.200:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.454242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240681082.mount: Deactivated successfully.
Oct 8 19:36:11.465522 containerd[1937]: time="2024-10-08T19:36:11.464184342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:11.470931 containerd[1937]: time="2024-10-08T19:36:11.470358546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Oct 8 19:36:11.475850 containerd[1937]: time="2024-10-08T19:36:11.475075230Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:11.480303 containerd[1937]: time="2024-10-08T19:36:11.480013458Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:36:11.482972 containerd[1937]: time="2024-10-08T19:36:11.482592522Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:11.487212 containerd[1937]: time="2024-10-08T19:36:11.486757458Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:11.487212 containerd[1937]: time="2024-10-08T19:36:11.486990882Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Oct 8 19:36:11.495127 containerd[1937]: time="2024-10-08T19:36:11.494956542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Oct 8 19:36:11.499830 containerd[1937]: time="2024-10-08T19:36:11.498555534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 498.139382ms"
Oct 8 19:36:11.503274 kubelet[2829]: W1008 19:36:11.503145 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.27.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.504047 containerd[1937]: time="2024-10-08T19:36:11.503503482Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 552.885591ms"
Oct 8 19:36:11.504159 kubelet[2829]: E1008 19:36:11.503449 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.27.200:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.505670 containerd[1937]: time="2024-10-08T19:36:11.505609950Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.070086ms"
Oct 8 19:36:11.577360 kubelet[2829]: W1008 19:36:11.575954 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.27.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.578002 kubelet[2829]: W1008 19:36:11.576145 2829 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.27.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-200&limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.578002 kubelet[2829]: E1008 19:36:11.577891 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.27.200:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.578002 kubelet[2829]: E1008 19:36:11.577847 2829 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.27.200:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-200&limit=500&resourceVersion=0": dial tcp 172.31.27.200:6443: connect: connection refused
Oct 8 19:36:11.581261 kubelet[2829]: E1008 19:36:11.581171 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": dial tcp 172.31.27.200:6443: connect: connection refused" interval="1.6s"
Oct 8 19:36:11.651839 kubelet[2829]: E1008
19:36:11.651726 2829 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.200:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.200:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-200.17fc9159658bd427 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-200,UID:ip-172-31-27-200,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-200,},FirstTimestamp:2024-10-08 19:36:10.145207335 +0000 UTC m=+1.215154915,LastTimestamp:2024-10-08 19:36:10.145207335 +0000 UTC m=+1.215154915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-200,}" Oct 8 19:36:11.707076 kubelet[2829]: I1008 19:36:11.706475 2829 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200" Oct 8 19:36:11.707076 kubelet[2829]: E1008 19:36:11.706967 2829 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.27.200:6443/api/v1/nodes\": dial tcp 172.31.27.200:6443: connect: connection refused" node="ip-172-31-27-200" Oct 8 19:36:11.791425 containerd[1937]: time="2024-10-08T19:36:11.790934084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:11.791425 containerd[1937]: time="2024-10-08T19:36:11.791040740Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:11.791425 containerd[1937]: time="2024-10-08T19:36:11.791078300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.791425 containerd[1937]: time="2024-10-08T19:36:11.791280296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.793751 containerd[1937]: time="2024-10-08T19:36:11.793132484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:11.793751 containerd[1937]: time="2024-10-08T19:36:11.793337720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:11.793751 containerd[1937]: time="2024-10-08T19:36:11.793391972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.795261 containerd[1937]: time="2024-10-08T19:36:11.794936240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.796468 containerd[1937]: time="2024-10-08T19:36:11.795730640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:11.796826 containerd[1937]: time="2024-10-08T19:36:11.796390292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:11.799686 containerd[1937]: time="2024-10-08T19:36:11.796917536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.800181 containerd[1937]: time="2024-10-08T19:36:11.800044784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:11.848541 systemd[1]: Started cri-containerd-064a7d5a09e1c2bf59795ea32ce91207ad2cf0ddc43e84a44b17197cb444d215.scope - libcontainer container 064a7d5a09e1c2bf59795ea32ce91207ad2cf0ddc43e84a44b17197cb444d215. Oct 8 19:36:11.853017 systemd[1]: Started cri-containerd-1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa.scope - libcontainer container 1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa. Oct 8 19:36:11.868853 systemd[1]: Started cri-containerd-518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0.scope - libcontainer container 518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0. Oct 8 19:36:11.988381 containerd[1937]: time="2024-10-08T19:36:11.987684981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-200,Uid:5e382286ab0131870ac56d2323c497d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa\"" Oct 8 19:36:12.001432 containerd[1937]: time="2024-10-08T19:36:12.001073501Z" level=info msg="CreateContainer within sandbox \"1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:36:12.013412 containerd[1937]: time="2024-10-08T19:36:12.013306241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-200,Uid:e5e9b96c24e78ddda5069c32d1c8d62f,Namespace:kube-system,Attempt:0,} returns sandbox id \"518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0\"" Oct 8 19:36:12.019553 containerd[1937]: time="2024-10-08T19:36:12.019366469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-200,Uid:9ec1e59c8043172843f84815e1db1835,Namespace:kube-system,Attempt:0,} returns sandbox id \"064a7d5a09e1c2bf59795ea32ce91207ad2cf0ddc43e84a44b17197cb444d215\"" Oct 8 19:36:12.024148 
containerd[1937]: time="2024-10-08T19:36:12.024092789Z" level=info msg="CreateContainer within sandbox \"518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:36:12.031571 containerd[1937]: time="2024-10-08T19:36:12.030786737Z" level=info msg="CreateContainer within sandbox \"064a7d5a09e1c2bf59795ea32ce91207ad2cf0ddc43e84a44b17197cb444d215\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:36:12.053252 containerd[1937]: time="2024-10-08T19:36:12.053176877Z" level=info msg="CreateContainer within sandbox \"1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801\"" Oct 8 19:36:12.056114 containerd[1937]: time="2024-10-08T19:36:12.056057921Z" level=info msg="CreateContainer within sandbox \"518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5\"" Oct 8 19:36:12.056651 containerd[1937]: time="2024-10-08T19:36:12.056610569Z" level=info msg="StartContainer for \"7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801\"" Oct 8 19:36:12.071759 containerd[1937]: time="2024-10-08T19:36:12.071678753Z" level=info msg="CreateContainer within sandbox \"064a7d5a09e1c2bf59795ea32ce91207ad2cf0ddc43e84a44b17197cb444d215\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"84b7ac98932f5d1c24eeab6b8763e27bcc82b2d8e69100dbdc41b3755c3025ed\"" Oct 8 19:36:12.073023 containerd[1937]: time="2024-10-08T19:36:12.072724949Z" level=info msg="StartContainer for \"741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5\"" Oct 8 19:36:12.077845 containerd[1937]: time="2024-10-08T19:36:12.077781245Z" level=info 
msg="StartContainer for \"84b7ac98932f5d1c24eeab6b8763e27bcc82b2d8e69100dbdc41b3755c3025ed\"" Oct 8 19:36:12.122686 systemd[1]: Started cri-containerd-7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801.scope - libcontainer container 7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801. Oct 8 19:36:12.168809 systemd[1]: Started cri-containerd-741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5.scope - libcontainer container 741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5. Oct 8 19:36:12.188745 systemd[1]: Started cri-containerd-84b7ac98932f5d1c24eeab6b8763e27bcc82b2d8e69100dbdc41b3755c3025ed.scope - libcontainer container 84b7ac98932f5d1c24eeab6b8763e27bcc82b2d8e69100dbdc41b3755c3025ed. Oct 8 19:36:12.291325 containerd[1937]: time="2024-10-08T19:36:12.289434978Z" level=info msg="StartContainer for \"7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801\" returns successfully" Oct 8 19:36:12.308018 kubelet[2829]: E1008 19:36:12.306794 2829 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.27.200:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.27.200:6443: connect: connection refused Oct 8 19:36:12.310281 containerd[1937]: time="2024-10-08T19:36:12.308658462Z" level=info msg="StartContainer for \"741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5\" returns successfully" Oct 8 19:36:12.330356 containerd[1937]: time="2024-10-08T19:36:12.330280710Z" level=info msg="StartContainer for \"84b7ac98932f5d1c24eeab6b8763e27bcc82b2d8e69100dbdc41b3755c3025ed\" returns successfully" Oct 8 19:36:13.309780 kubelet[2829]: I1008 19:36:13.309732 2829 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200" Oct 8 19:36:17.568017 kubelet[2829]: E1008 19:36:17.567959 2829 nodelease.go:49] "Failed to get 
node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-200\" not found" node="ip-172-31-27-200" Oct 8 19:36:17.638959 kubelet[2829]: I1008 19:36:17.638887 2829 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-200" Oct 8 19:36:18.143486 kubelet[2829]: I1008 19:36:18.143395 2829 apiserver.go:52] "Watching apiserver" Oct 8 19:36:18.171639 kubelet[2829]: I1008 19:36:18.171580 2829 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 8 19:36:20.295050 kubelet[2829]: I1008 19:36:20.294962 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-200" podStartSLOduration=1.294899942 podStartE2EDuration="1.294899942s" podCreationTimestamp="2024-10-08 19:36:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:36:20.29465183 +0000 UTC m=+11.364599398" watchObservedRunningTime="2024-10-08 19:36:20.294899942 +0000 UTC m=+11.364847510" Oct 8 19:36:20.331855 kubelet[2829]: I1008 19:36:20.331782 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-200" podStartSLOduration=2.331719026 podStartE2EDuration="2.331719026s" podCreationTimestamp="2024-10-08 19:36:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:36:20.331349186 +0000 UTC m=+11.401296766" watchObservedRunningTime="2024-10-08 19:36:20.331719026 +0000 UTC m=+11.401666618" Oct 8 19:36:20.332171 kubelet[2829]: I1008 19:36:20.331900 2829 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-200" podStartSLOduration=1.331866518 podStartE2EDuration="1.331866518s" podCreationTimestamp="2024-10-08 19:36:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:36:20.31468715 +0000 UTC m=+11.384634802" watchObservedRunningTime="2024-10-08 19:36:20.331866518 +0000 UTC m=+11.401814110" Oct 8 19:36:20.642170 systemd[1]: Reloading requested from client PID 3284 ('systemctl') (unit session-9.scope)... Oct 8 19:36:20.642204 systemd[1]: Reloading... Oct 8 19:36:20.853326 zram_generator::config[3325]: No configuration found. Oct 8 19:36:21.107272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:36:21.314262 systemd[1]: Reloading finished in 671 ms. Oct 8 19:36:21.400259 kubelet[2829]: I1008 19:36:21.400178 2829 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:36:21.400585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:36:21.419646 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:36:21.421328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:36:21.421570 systemd[1]: kubelet.service: Consumed 2.065s CPU time, 115.3M memory peak, 0B memory swap peak. Oct 8 19:36:21.438767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:36:22.410134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:36:22.426672 (kubelet)[3384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:36:22.556983 kubelet[3384]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:36:22.556983 kubelet[3384]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Oct 8 19:36:22.556983 kubelet[3384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 8 19:36:22.557814 kubelet[3384]: I1008 19:36:22.557026 3384 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 8 19:36:22.568343 kubelet[3384]: I1008 19:36:22.567112 3384 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Oct 8 19:36:22.568343 kubelet[3384]: I1008 19:36:22.567160 3384 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 8 19:36:22.568343 kubelet[3384]: I1008 19:36:22.567584 3384 server.go:919] "Client rotation is on, will bootstrap in background"
Oct 8 19:36:22.574048 kubelet[3384]: I1008 19:36:22.574005 3384 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 8 19:36:22.581379 kubelet[3384]: I1008 19:36:22.580909 3384 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 8 19:36:22.597518 kubelet[3384]: I1008 19:36:22.597399 3384 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 8 19:36:22.598599 kubelet[3384]: I1008 19:36:22.598077 3384 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 8 19:36:22.598599 kubelet[3384]: I1008 19:36:22.598421 3384 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Oct 8 19:36:22.598599 kubelet[3384]: I1008 19:36:22.598461 3384 topology_manager.go:138] "Creating topology manager with none policy"
Oct 8 19:36:22.598599 kubelet[3384]: I1008 19:36:22.598481 3384 container_manager_linux.go:301] "Creating device plugin manager"
Oct 8 19:36:22.598599 kubelet[3384]: I1008 19:36:22.598539 3384 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:22.599211 kubelet[3384]: I1008 19:36:22.599187 3384 kubelet.go:396] "Attempting to sync node with API server"
Oct 8 19:36:22.599365 kubelet[3384]: I1008 19:36:22.599346 3384 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 8 19:36:22.599493 kubelet[3384]: I1008 19:36:22.599475 3384 kubelet.go:312] "Adding apiserver pod source"
Oct 8 19:36:22.599613 kubelet[3384]: I1008 19:36:22.599595 3384 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 8 19:36:22.602479 kubelet[3384]: I1008 19:36:22.602400 3384 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 8 19:36:22.606681 kubelet[3384]: I1008 19:36:22.604400 3384 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 8 19:36:22.606681 kubelet[3384]: I1008 19:36:22.605509 3384 server.go:1256] "Started kubelet"
Oct 8 19:36:22.613037 kubelet[3384]: I1008 19:36:22.611659 3384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 8 19:36:22.621122 kubelet[3384]: I1008 19:36:22.620323 3384 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Oct 8 19:36:22.625275 kubelet[3384]: I1008 19:36:22.624786 3384 server.go:461] "Adding debug handlers to kubelet server"
Oct 8 19:36:22.629330 kubelet[3384]: I1008 19:36:22.628636 3384 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 8 19:36:22.651259 kubelet[3384]: I1008 19:36:22.632413 3384 volume_manager.go:291] "Starting Kubelet Volume Manager"
Oct 8 19:36:22.653905 kubelet[3384]: I1008 19:36:22.653452 3384 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 8 19:36:22.666810 kubelet[3384]: I1008 19:36:22.632453 3384 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Oct 8 19:36:22.669499 kubelet[3384]: I1008 19:36:22.668101 3384 reconciler_new.go:29] "Reconciler: start to sync state"
Oct 8 19:36:22.686504 kubelet[3384]: I1008 19:36:22.686470 3384 factory.go:221] Registration of the containerd container factory successfully
Oct 8 19:36:22.686702 kubelet[3384]: I1008 19:36:22.686683 3384 factory.go:221] Registration of the systemd container factory successfully
Oct 8 19:36:22.686950 kubelet[3384]: I1008 19:36:22.686917 3384 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 8 19:36:22.715430 kubelet[3384]: E1008 19:36:22.715379 3384 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 8 19:36:22.737305 kubelet[3384]: I1008 19:36:22.736186 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Oct 8 19:36:22.741866 kubelet[3384]: I1008 19:36:22.741828 3384 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Oct 8 19:36:22.742073 kubelet[3384]: I1008 19:36:22.742053 3384 status_manager.go:217] "Starting to sync pod status with apiserver"
Oct 8 19:36:22.742195 kubelet[3384]: I1008 19:36:22.742176 3384 kubelet.go:2329] "Starting kubelet main sync loop"
Oct 8 19:36:22.742437 kubelet[3384]: E1008 19:36:22.742415 3384 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 8 19:36:22.780871 kubelet[3384]: I1008 19:36:22.780809 3384 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-27-200"
Oct 8 19:36:22.805254 kubelet[3384]: I1008 19:36:22.804202 3384 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-27-200"
Oct 8 19:36:22.805254 kubelet[3384]: I1008 19:36:22.804365 3384 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-27-200"
Oct 8 19:36:22.843160 kubelet[3384]: E1008 19:36:22.842620 3384 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 8 19:36:22.864521 kubelet[3384]: I1008 19:36:22.864333 3384 cpu_manager.go:214] "Starting CPU manager" policy="none"
Oct 8 19:36:22.865672 kubelet[3384]: I1008 19:36:22.865614 3384 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Oct 8 19:36:22.865672 kubelet[3384]: I1008 19:36:22.865673 3384 state_mem.go:36] "Initialized new in-memory state store"
Oct 8 19:36:22.865954 kubelet[3384]: I1008 19:36:22.865914 3384 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 8 19:36:22.866025 kubelet[3384]: I1008 19:36:22.865966 3384 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 8 19:36:22.866025 kubelet[3384]: I1008 19:36:22.865986 3384 policy_none.go:49] "None policy: Start"
Oct 8 19:36:22.868040 kubelet[3384]: I1008 19:36:22.867277 3384 memory_manager.go:170] "Starting memorymanager" policy="None"
Oct 8 19:36:22.868040 kubelet[3384]: I1008 19:36:22.867325 3384 state_mem.go:35] "Initializing new in-memory state store"
Oct 8 19:36:22.868040 kubelet[3384]: I1008 19:36:22.867564 3384 state_mem.go:75] "Updated machine memory state"
Oct 8 19:36:22.880361 kubelet[3384]: I1008 19:36:22.880326 3384 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Oct 8 19:36:22.881896 kubelet[3384]: I1008 19:36:22.881868 3384 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 8 19:36:23.044152 kubelet[3384]: I1008 19:36:23.043551 3384 topology_manager.go:215] "Topology Admit Handler" podUID="9ec1e59c8043172843f84815e1db1835" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.045283 kubelet[3384]: I1008 19:36:23.044708 3384 topology_manager.go:215] "Topology Admit Handler" podUID="e5e9b96c24e78ddda5069c32d1c8d62f" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.045283 kubelet[3384]: I1008 19:36:23.044926 3384 topology_manager.go:215] "Topology Admit Handler" podUID="5e382286ab0131870ac56d2323c497d5" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-27-200"
Oct 8 19:36:23.058025 kubelet[3384]: E1008 19:36:23.057965 3384 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-200\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.062390 kubelet[3384]: E1008 19:36:23.062202 3384 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-200\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-200"
Oct 8 19:36:23.062390 kubelet[3384]: E1008 19:36:23.062202 3384 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-200\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.072972 kubelet[3384]: I1008 19:36:23.072105 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.072972 kubelet[3384]: I1008 19:36:23.072177 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.072972 kubelet[3384]: I1008 19:36:23.072267 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.072972 kubelet[3384]: I1008 19:36:23.072348 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5e382286ab0131870ac56d2323c497d5-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-200\" (UID: \"5e382286ab0131870ac56d2323c497d5\") " pod="kube-system/kube-scheduler-ip-172-31-27-200"
Oct 8 19:36:23.072972 kubelet[3384]: I1008 19:36:23.072395 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.073387 kubelet[3384]: I1008 19:36:23.072438 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.073387 kubelet[3384]: I1008 19:36:23.072481 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.073387 kubelet[3384]: I1008 19:36:23.072533 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e5e9b96c24e78ddda5069c32d1c8d62f-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-200\" (UID: \"e5e9b96c24e78ddda5069c32d1c8d62f\") " pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:23.073387 kubelet[3384]: I1008 19:36:23.072595 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ec1e59c8043172843f84815e1db1835-ca-certs\") pod \"kube-apiserver-ip-172-31-27-200\" (UID: \"9ec1e59c8043172843f84815e1db1835\") " pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.603777 kubelet[3384]: I1008 19:36:23.603326 3384 apiserver.go:52] "Watching apiserver"
Oct 8 19:36:23.667102 kubelet[3384]: I1008 19:36:23.667012 3384 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Oct 8 19:36:23.880555 kubelet[3384]: E1008 19:36:23.879141 3384 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-27-200\" already exists" pod="kube-system/kube-apiserver-ip-172-31-27-200"
Oct 8 19:36:23.882411 kubelet[3384]: E1008 19:36:23.881035 3384 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-27-200\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-27-200"
Oct 8 19:36:27.514707 sudo[2275]: pam_unix(sudo:session): session closed for user root
Oct 8 19:36:27.539584 sshd[2272]: pam_unix(sshd:session): session closed for user core
Oct 8 19:36:27.545867 systemd[1]: sshd@8-172.31.27.200:22-139.178.68.195:35852.service: Deactivated successfully.
Oct 8 19:36:27.550201 systemd[1]: session-9.scope: Deactivated successfully.
Oct 8 19:36:27.550820 systemd[1]: session-9.scope: Consumed 11.230s CPU time, 184.7M memory peak, 0B memory swap peak.
Oct 8 19:36:27.553903 systemd-logind[1903]: Session 9 logged out. Waiting for processes to exit.
Oct 8 19:36:27.556684 systemd-logind[1903]: Removed session 9.
Oct 8 19:36:34.837568 kubelet[3384]: I1008 19:36:34.837307 3384 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 8 19:36:34.840097 containerd[1937]: time="2024-10-08T19:36:34.839937834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 8 19:36:34.841927 kubelet[3384]: I1008 19:36:34.840482 3384 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:36:35.782258 kubelet[3384]: I1008 19:36:35.780029 3384 topology_manager.go:215] "Topology Admit Handler" podUID="1777a35a-8fff-4466-89d7-124f85cbfc98" podNamespace="kube-system" podName="kube-proxy-cbq8p" Oct 8 19:36:35.785164 kubelet[3384]: W1008 19:36:35.785081 3384 reflector.go:539] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:35.785342 kubelet[3384]: E1008 19:36:35.785173 3384 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:35.787035 kubelet[3384]: W1008 19:36:35.785523 3384 reflector.go:539] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:35.787035 kubelet[3384]: E1008 19:36:35.785560 3384 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:35.801961 systemd[1]: Created slice 
kubepods-besteffort-pod1777a35a_8fff_4466_89d7_124f85cbfc98.slice - libcontainer container kubepods-besteffort-pod1777a35a_8fff_4466_89d7_124f85cbfc98.slice. Oct 8 19:36:35.868471 kubelet[3384]: I1008 19:36:35.868135 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1777a35a-8fff-4466-89d7-124f85cbfc98-xtables-lock\") pod \"kube-proxy-cbq8p\" (UID: \"1777a35a-8fff-4466-89d7-124f85cbfc98\") " pod="kube-system/kube-proxy-cbq8p" Oct 8 19:36:35.868471 kubelet[3384]: I1008 19:36:35.868260 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-proxy\") pod \"kube-proxy-cbq8p\" (UID: \"1777a35a-8fff-4466-89d7-124f85cbfc98\") " pod="kube-system/kube-proxy-cbq8p" Oct 8 19:36:35.868471 kubelet[3384]: I1008 19:36:35.868313 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xzc2\" (UniqueName: \"kubernetes.io/projected/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-api-access-4xzc2\") pod \"kube-proxy-cbq8p\" (UID: \"1777a35a-8fff-4466-89d7-124f85cbfc98\") " pod="kube-system/kube-proxy-cbq8p" Oct 8 19:36:35.868471 kubelet[3384]: I1008 19:36:35.868358 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1777a35a-8fff-4466-89d7-124f85cbfc98-lib-modules\") pod \"kube-proxy-cbq8p\" (UID: \"1777a35a-8fff-4466-89d7-124f85cbfc98\") " pod="kube-system/kube-proxy-cbq8p" Oct 8 19:36:35.943476 kubelet[3384]: I1008 19:36:35.942380 3384 topology_manager.go:215] "Topology Admit Handler" podUID="8b0f3f19-8193-4a27-8e68-3d294071bd54" podNamespace="tigera-operator" podName="tigera-operator-5d56685c77-xd87h" Oct 8 19:36:35.963797 systemd[1]: Created slice 
kubepods-besteffort-pod8b0f3f19_8193_4a27_8e68_3d294071bd54.slice - libcontainer container kubepods-besteffort-pod8b0f3f19_8193_4a27_8e68_3d294071bd54.slice. Oct 8 19:36:35.969875 kubelet[3384]: I1008 19:36:35.969190 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6dph\" (UniqueName: \"kubernetes.io/projected/8b0f3f19-8193-4a27-8e68-3d294071bd54-kube-api-access-c6dph\") pod \"tigera-operator-5d56685c77-xd87h\" (UID: \"8b0f3f19-8193-4a27-8e68-3d294071bd54\") " pod="tigera-operator/tigera-operator-5d56685c77-xd87h" Oct 8 19:36:35.969875 kubelet[3384]: I1008 19:36:35.969300 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8b0f3f19-8193-4a27-8e68-3d294071bd54-var-lib-calico\") pod \"tigera-operator-5d56685c77-xd87h\" (UID: \"8b0f3f19-8193-4a27-8e68-3d294071bd54\") " pod="tigera-operator/tigera-operator-5d56685c77-xd87h" Oct 8 19:36:36.275259 containerd[1937]: time="2024-10-08T19:36:36.275155985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-xd87h,Uid:8b0f3f19-8193-4a27-8e68-3d294071bd54,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:36:36.318729 containerd[1937]: time="2024-10-08T19:36:36.318554477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:36.319039 containerd[1937]: time="2024-10-08T19:36:36.318749933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:36.319039 containerd[1937]: time="2024-10-08T19:36:36.318855989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:36.319352 containerd[1937]: time="2024-10-08T19:36:36.319130357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:36.358574 systemd[1]: Started cri-containerd-77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6.scope - libcontainer container 77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6. Oct 8 19:36:36.420396 containerd[1937]: time="2024-10-08T19:36:36.420327738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5d56685c77-xd87h,Uid:8b0f3f19-8193-4a27-8e68-3d294071bd54,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6\"" Oct 8 19:36:36.427269 containerd[1937]: time="2024-10-08T19:36:36.427172598Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:36:36.970695 kubelet[3384]: E1008 19:36:36.970628 3384 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:36.972451 kubelet[3384]: E1008 19:36:36.970764 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-proxy podName:1777a35a-8fff-4466-89d7-124f85cbfc98 nodeName:}" failed. No retries permitted until 2024-10-08 19:36:37.470726761 +0000 UTC m=+15.031132005 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-proxy") pod "kube-proxy-cbq8p" (UID: "1777a35a-8fff-4466-89d7-124f85cbfc98") : failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:36.995295 kubelet[3384]: E1008 19:36:36.995208 3384 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:36.995295 kubelet[3384]: E1008 19:36:36.995297 3384 projected.go:200] Error preparing data for projected volume kube-api-access-4xzc2 for pod kube-system/kube-proxy-cbq8p: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:36.995510 kubelet[3384]: E1008 19:36:36.995421 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-api-access-4xzc2 podName:1777a35a-8fff-4466-89d7-124f85cbfc98 nodeName:}" failed. No retries permitted until 2024-10-08 19:36:37.495371725 +0000 UTC m=+15.055776969 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4xzc2" (UniqueName: "kubernetes.io/projected/1777a35a-8fff-4466-89d7-124f85cbfc98-kube-api-access-4xzc2") pod "kube-proxy-cbq8p" (UID: "1777a35a-8fff-4466-89d7-124f85cbfc98") : failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:37.617757 containerd[1937]: time="2024-10-08T19:36:37.617168204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbq8p,Uid:1777a35a-8fff-4466-89d7-124f85cbfc98,Namespace:kube-system,Attempt:0,}" Oct 8 19:36:37.635161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166324264.mount: Deactivated successfully. Oct 8 19:36:37.690406 containerd[1937]: time="2024-10-08T19:36:37.689820644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:37.690406 containerd[1937]: time="2024-10-08T19:36:37.690045392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:37.690406 containerd[1937]: time="2024-10-08T19:36:37.690084764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:37.691183 containerd[1937]: time="2024-10-08T19:36:37.690600272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:37.731560 systemd[1]: Started cri-containerd-077fa9195fed962b3a04ae39140976b61b2922e2db2317f24d0867697f9916b1.scope - libcontainer container 077fa9195fed962b3a04ae39140976b61b2922e2db2317f24d0867697f9916b1. Oct 8 19:36:37.788822 containerd[1937]: time="2024-10-08T19:36:37.788760285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cbq8p,Uid:1777a35a-8fff-4466-89d7-124f85cbfc98,Namespace:kube-system,Attempt:0,} returns sandbox id \"077fa9195fed962b3a04ae39140976b61b2922e2db2317f24d0867697f9916b1\"" Oct 8 19:36:37.798209 containerd[1937]: time="2024-10-08T19:36:37.798072357Z" level=info msg="CreateContainer within sandbox \"077fa9195fed962b3a04ae39140976b61b2922e2db2317f24d0867697f9916b1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:36:38.167904 containerd[1937]: time="2024-10-08T19:36:38.167818591Z" level=info msg="CreateContainer within sandbox \"077fa9195fed962b3a04ae39140976b61b2922e2db2317f24d0867697f9916b1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e6343563227b6a2a4758a7abe3472af5f75d10451d69f947859ed88014719cb\"" Oct 8 19:36:38.170286 containerd[1937]: time="2024-10-08T19:36:38.170040487Z" level=info msg="StartContainer for \"4e6343563227b6a2a4758a7abe3472af5f75d10451d69f947859ed88014719cb\"" Oct 8 
19:36:38.250607 systemd[1]: Started cri-containerd-4e6343563227b6a2a4758a7abe3472af5f75d10451d69f947859ed88014719cb.scope - libcontainer container 4e6343563227b6a2a4758a7abe3472af5f75d10451d69f947859ed88014719cb. Oct 8 19:36:38.334285 containerd[1937]: time="2024-10-08T19:36:38.334086655Z" level=info msg="StartContainer for \"4e6343563227b6a2a4758a7abe3472af5f75d10451d69f947859ed88014719cb\" returns successfully" Oct 8 19:36:39.223437 containerd[1937]: time="2024-10-08T19:36:39.223355228Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.225619 containerd[1937]: time="2024-10-08T19:36:39.225554228Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485895" Oct 8 19:36:39.227911 containerd[1937]: time="2024-10-08T19:36:39.227795264Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.233892 containerd[1937]: time="2024-10-08T19:36:39.233779892Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:39.235552 containerd[1937]: time="2024-10-08T19:36:39.235492700Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.808236462s" Oct 8 19:36:39.235683 containerd[1937]: time="2024-10-08T19:36:39.235551536Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference 
\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:36:39.239492 containerd[1937]: time="2024-10-08T19:36:39.239423228Z" level=info msg="CreateContainer within sandbox \"77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:36:39.260288 containerd[1937]: time="2024-10-08T19:36:39.260111864Z" level=info msg="CreateContainer within sandbox \"77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82\"" Oct 8 19:36:39.263274 containerd[1937]: time="2024-10-08T19:36:39.261808940Z" level=info msg="StartContainer for \"bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82\"" Oct 8 19:36:39.325545 systemd[1]: Started cri-containerd-bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82.scope - libcontainer container bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82. Oct 8 19:36:39.374624 containerd[1937]: time="2024-10-08T19:36:39.373884177Z" level=info msg="StartContainer for \"bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82\" returns successfully" Oct 8 19:36:39.535449 systemd[1]: run-containerd-runc-k8s.io-bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82-runc.0H8z8T.mount: Deactivated successfully. 
Oct 8 19:36:39.883876 kubelet[3384]: I1008 19:36:39.883412 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-cbq8p" podStartSLOduration=4.8833151390000005 podStartE2EDuration="4.883315139s" podCreationTimestamp="2024-10-08 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:36:38.893747038 +0000 UTC m=+16.454152282" watchObservedRunningTime="2024-10-08 19:36:39.883315139 +0000 UTC m=+17.443720383" Oct 8 19:36:39.883876 kubelet[3384]: I1008 19:36:39.883633 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5d56685c77-xd87h" podStartSLOduration=2.073071713 podStartE2EDuration="4.883592303s" podCreationTimestamp="2024-10-08 19:36:35 +0000 UTC" firstStartedPulling="2024-10-08 19:36:36.425568402 +0000 UTC m=+13.985973658" lastFinishedPulling="2024-10-08 19:36:39.236089004 +0000 UTC m=+16.796494248" observedRunningTime="2024-10-08 19:36:39.882210743 +0000 UTC m=+17.442615987" watchObservedRunningTime="2024-10-08 19:36:39.883592303 +0000 UTC m=+17.443997583" Oct 8 19:36:44.548251 kubelet[3384]: I1008 19:36:44.546728 3384 topology_manager.go:215] "Topology Admit Handler" podUID="536c0ddd-ab88-451b-9828-cab6c38a8e7e" podNamespace="calico-system" podName="calico-typha-67689f86fb-vvlpb" Oct 8 19:36:44.572894 systemd[1]: Created slice kubepods-besteffort-pod536c0ddd_ab88_451b_9828_cab6c38a8e7e.slice - libcontainer container kubepods-besteffort-pod536c0ddd_ab88_451b_9828_cab6c38a8e7e.slice. 
Oct 8 19:36:44.584968 kubelet[3384]: W1008 19:36:44.584884 3384 reflector.go:539] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.584968 kubelet[3384]: E1008 19:36:44.584970 3384 reflector.go:147] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.585507 kubelet[3384]: W1008 19:36:44.585424 3384 reflector.go:539] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.585507 kubelet[3384]: E1008 19:36:44.585469 3384 reflector.go:147] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.585507 kubelet[3384]: W1008 19:36:44.585505 3384 reflector.go:539] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.585736 
kubelet[3384]: E1008 19:36:44.585536 3384 reflector.go:147] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-27-200" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-27-200' and this object Oct 8 19:36:44.629797 kubelet[3384]: I1008 19:36:44.629678 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle\") pod \"calico-typha-67689f86fb-vvlpb\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " pod="calico-system/calico-typha-67689f86fb-vvlpb" Oct 8 19:36:44.629797 kubelet[3384]: I1008 19:36:44.629798 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/536c0ddd-ab88-451b-9828-cab6c38a8e7e-typha-certs\") pod \"calico-typha-67689f86fb-vvlpb\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " pod="calico-system/calico-typha-67689f86fb-vvlpb" Oct 8 19:36:44.630376 kubelet[3384]: I1008 19:36:44.629861 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqrf5\" (UniqueName: \"kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5\") pod \"calico-typha-67689f86fb-vvlpb\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " pod="calico-system/calico-typha-67689f86fb-vvlpb" Oct 8 19:36:44.844807 kubelet[3384]: I1008 19:36:44.843069 3384 topology_manager.go:215] "Topology Admit Handler" podUID="f1ded46c-d09b-4ce9-80cd-36dce5274ecb" podNamespace="calico-system" podName="calico-node-9sfk7" Oct 8 19:36:44.863814 systemd[1]: Created slice kubepods-besteffort-podf1ded46c_d09b_4ce9_80cd_36dce5274ecb.slice - libcontainer container 
kubepods-besteffort-podf1ded46c_d09b_4ce9_80cd_36dce5274ecb.slice. Oct 8 19:36:44.931866 kubelet[3384]: I1008 19:36:44.931812 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-policysync\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932246 kubelet[3384]: I1008 19:36:44.932189 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztq9n\" (UniqueName: \"kubernetes.io/projected/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-kube-api-access-ztq9n\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932412 kubelet[3384]: I1008 19:36:44.932392 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-flexvol-driver-host\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932646 kubelet[3384]: I1008 19:36:44.932553 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-lib-calico\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932646 kubelet[3384]: I1008 19:36:44.932608 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-node-certs\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " 
pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932800 kubelet[3384]: I1008 19:36:44.932703 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-lib-modules\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932800 kubelet[3384]: I1008 19:36:44.932774 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-xtables-lock\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.932907 kubelet[3384]: I1008 19:36:44.932866 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-net-dir\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.933301 kubelet[3384]: I1008 19:36:44.932970 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-bin-dir\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.933301 kubelet[3384]: I1008 19:36:44.933065 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-log-dir\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.933301 kubelet[3384]: I1008 19:36:44.933165 
3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-tigera-ca-bundle\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.933301 kubelet[3384]: I1008 19:36:44.933210 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-run-calico\") pod \"calico-node-9sfk7\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " pod="calico-system/calico-node-9sfk7" Oct 8 19:36:44.999786 kubelet[3384]: I1008 19:36:44.999431 3384 topology_manager.go:215] "Topology Admit Handler" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" podNamespace="calico-system" podName="csi-node-driver-2xd4p" Oct 8 19:36:45.001258 kubelet[3384]: E1008 19:36:45.000418 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" Oct 8 19:36:45.033945 kubelet[3384]: I1008 19:36:45.033901 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6-kubelet-dir\") pod \"csi-node-driver-2xd4p\" (UID: \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\") " pod="calico-system/csi-node-driver-2xd4p" Oct 8 19:36:45.035256 kubelet[3384]: I1008 19:36:45.034206 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6-socket-dir\") pod 
\"csi-node-driver-2xd4p\" (UID: \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\") " pod="calico-system/csi-node-driver-2xd4p" Oct 8 19:36:45.035256 kubelet[3384]: I1008 19:36:45.034340 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6-varrun\") pod \"csi-node-driver-2xd4p\" (UID: \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\") " pod="calico-system/csi-node-driver-2xd4p" Oct 8 19:36:45.035256 kubelet[3384]: I1008 19:36:45.034394 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khhm2\" (UniqueName: \"kubernetes.io/projected/6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6-kube-api-access-khhm2\") pod \"csi-node-driver-2xd4p\" (UID: \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\") " pod="calico-system/csi-node-driver-2xd4p" Oct 8 19:36:45.036434 kubelet[3384]: E1008 19:36:45.036147 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.036434 kubelet[3384]: W1008 19:36:45.036180 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.036434 kubelet[3384]: E1008 19:36:45.036238 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.036814 kubelet[3384]: E1008 19:36:45.036792 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.037711 kubelet[3384]: W1008 19:36:45.036900 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.037711 kubelet[3384]: E1008 19:36:45.036945 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.038421 kubelet[3384]: E1008 19:36:45.038103 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.038421 kubelet[3384]: W1008 19:36:45.038138 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.038421 kubelet[3384]: E1008 19:36:45.038188 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.038998 kubelet[3384]: E1008 19:36:45.038776 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.038998 kubelet[3384]: W1008 19:36:45.038801 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.038998 kubelet[3384]: E1008 19:36:45.038833 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.039170 kubelet[3384]: E1008 19:36:45.039156 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.039636 kubelet[3384]: W1008 19:36:45.039171 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.039636 kubelet[3384]: E1008 19:36:45.039195 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.041078 kubelet[3384]: E1008 19:36:45.040551 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.041078 kubelet[3384]: W1008 19:36:45.040588 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.041078 kubelet[3384]: E1008 19:36:45.040941 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.041078 kubelet[3384]: W1008 19:36:45.040960 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.041078 kubelet[3384]: E1008 19:36:45.041005 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.042081 kubelet[3384]: E1008 19:36:45.041113 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.042081 kubelet[3384]: E1008 19:36:45.041900 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.042081 kubelet[3384]: W1008 19:36:45.041926 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.042882 kubelet[3384]: E1008 19:36:45.042761 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.043644 kubelet[3384]: E1008 19:36:45.043598 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.043644 kubelet[3384]: W1008 19:36:45.043633 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.044081 kubelet[3384]: E1008 19:36:45.043682 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.045334 kubelet[3384]: E1008 19:36:45.044908 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.045334 kubelet[3384]: W1008 19:36:45.044944 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.045334 kubelet[3384]: E1008 19:36:45.045045 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.045334 kubelet[3384]: E1008 19:36:45.045334 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.046010 kubelet[3384]: W1008 19:36:45.045353 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.046010 kubelet[3384]: E1008 19:36:45.045410 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.046010 kubelet[3384]: E1008 19:36:45.045670 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.046010 kubelet[3384]: W1008 19:36:45.045687 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.046010 kubelet[3384]: E1008 19:36:45.045976 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.046010 kubelet[3384]: W1008 19:36:45.045992 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.046354 kubelet[3384]: E1008 19:36:45.046260 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.046354 kubelet[3384]: W1008 19:36:45.046275 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.047670 kubelet[3384]: E1008 19:36:45.046516 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.047670 kubelet[3384]: W1008 19:36:45.046542 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.047670 kubelet[3384]: E1008 19:36:45.047028 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.047670 kubelet[3384]: E1008 19:36:45.047085 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.047670 kubelet[3384]: E1008 19:36:45.047115 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.047670 kubelet[3384]: E1008 19:36:45.047141 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.047670 kubelet[3384]: I1008 19:36:45.047190 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6-registration-dir\") pod \"csi-node-driver-2xd4p\" (UID: \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\") " pod="calico-system/csi-node-driver-2xd4p" Oct 8 19:36:45.048637 kubelet[3384]: E1008 19:36:45.047942 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.048637 kubelet[3384]: W1008 19:36:45.047967 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.048637 kubelet[3384]: E1008 19:36:45.048274 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.048637 kubelet[3384]: E1008 19:36:45.048419 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.048637 kubelet[3384]: W1008 19:36:45.048441 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.048891 kubelet[3384]: E1008 19:36:45.048723 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.048891 kubelet[3384]: W1008 19:36:45.048738 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.049005 kubelet[3384]: E1008 19:36:45.048983 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.049005 kubelet[3384]: W1008 19:36:45.048997 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049295 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.050593 kubelet[3384]: W1008 19:36:45.049322 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049592 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 
8 19:36:45.050593 kubelet[3384]: W1008 19:36:45.049608 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049879 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.050593 kubelet[3384]: W1008 19:36:45.049895 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049922 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049927 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049966 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.050593 kubelet[3384]: E1008 19:36:45.049990 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.050206 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.050242 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.050209 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.052600 kubelet[3384]: W1008 19:36:45.050494 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.050813 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.051807 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.052600 kubelet[3384]: W1008 19:36:45.051836 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.051883 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.052600 kubelet[3384]: E1008 19:36:45.052331 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.052600 kubelet[3384]: W1008 19:36:45.052351 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.052515 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.052726 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.054205 kubelet[3384]: W1008 19:36:45.052742 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.052957 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.053091 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.054205 kubelet[3384]: W1008 19:36:45.053107 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.053475 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.053722 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.054205 kubelet[3384]: W1008 19:36:45.053740 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.054205 kubelet[3384]: E1008 19:36:45.053906 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.055101 kubelet[3384]: E1008 19:36:45.054126 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.055101 kubelet[3384]: W1008 19:36:45.054141 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.055101 kubelet[3384]: E1008 19:36:45.054333 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.057885 kubelet[3384]: E1008 19:36:45.056990 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.057885 kubelet[3384]: W1008 19:36:45.057029 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.057885 kubelet[3384]: E1008 19:36:45.057445 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.057885 kubelet[3384]: W1008 19:36:45.057466 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.057885 kubelet[3384]: E1008 19:36:45.057497 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.057885 kubelet[3384]: E1008 19:36:45.057549 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.068767 kubelet[3384]: E1008 19:36:45.066600 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.068767 kubelet[3384]: W1008 19:36:45.066642 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.068767 kubelet[3384]: E1008 19:36:45.066679 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.149567 kubelet[3384]: E1008 19:36:45.149466 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.149567 kubelet[3384]: W1008 19:36:45.149544 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.150016 kubelet[3384]: E1008 19:36:45.149612 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.150262 kubelet[3384]: E1008 19:36:45.150234 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.150330 kubelet[3384]: W1008 19:36:45.150261 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.150330 kubelet[3384]: E1008 19:36:45.150291 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.150735 kubelet[3384]: E1008 19:36:45.150708 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.150801 kubelet[3384]: W1008 19:36:45.150734 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.150801 kubelet[3384]: E1008 19:36:45.150761 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.152294 kubelet[3384]: E1008 19:36:45.152160 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.152645 kubelet[3384]: W1008 19:36:45.152198 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.152645 kubelet[3384]: E1008 19:36:45.152507 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.153295 kubelet[3384]: E1008 19:36:45.153091 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.153295 kubelet[3384]: W1008 19:36:45.153113 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.153295 kubelet[3384]: E1008 19:36:45.153153 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.153955 kubelet[3384]: E1008 19:36:45.153751 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.153955 kubelet[3384]: W1008 19:36:45.153772 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.153955 kubelet[3384]: E1008 19:36:45.153814 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.155071 kubelet[3384]: E1008 19:36:45.154831 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.155071 kubelet[3384]: W1008 19:36:45.154867 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.155071 kubelet[3384]: E1008 19:36:45.154953 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.155541 kubelet[3384]: E1008 19:36:45.155438 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.155541 kubelet[3384]: W1008 19:36:45.155459 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.155541 kubelet[3384]: E1008 19:36:45.155515 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.156201 kubelet[3384]: E1008 19:36:45.156033 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.156201 kubelet[3384]: W1008 19:36:45.156055 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.156201 kubelet[3384]: E1008 19:36:45.156108 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.156767 kubelet[3384]: E1008 19:36:45.156612 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.156767 kubelet[3384]: W1008 19:36:45.156645 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.156767 kubelet[3384]: E1008 19:36:45.156714 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.157620 kubelet[3384]: E1008 19:36:45.157434 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.157620 kubelet[3384]: W1008 19:36:45.157466 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.157620 kubelet[3384]: E1008 19:36:45.157545 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.158034 kubelet[3384]: E1008 19:36:45.157930 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.158034 kubelet[3384]: W1008 19:36:45.157951 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.158034 kubelet[3384]: E1008 19:36:45.158002 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.158665 kubelet[3384]: E1008 19:36:45.158512 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.158665 kubelet[3384]: W1008 19:36:45.158533 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.158665 kubelet[3384]: E1008 19:36:45.158595 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.159506 kubelet[3384]: E1008 19:36:45.159076 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.159506 kubelet[3384]: W1008 19:36:45.159108 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.159506 kubelet[3384]: E1008 19:36:45.159177 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.159967 kubelet[3384]: E1008 19:36:45.159852 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.159967 kubelet[3384]: W1008 19:36:45.159873 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.159967 kubelet[3384]: E1008 19:36:45.159929 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.160706 kubelet[3384]: E1008 19:36:45.160525 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.160706 kubelet[3384]: W1008 19:36:45.160546 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.160706 kubelet[3384]: E1008 19:36:45.160599 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.161101 kubelet[3384]: E1008 19:36:45.161004 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.161101 kubelet[3384]: W1008 19:36:45.161023 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.161101 kubelet[3384]: E1008 19:36:45.161074 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.162081 kubelet[3384]: E1008 19:36:45.161803 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.162081 kubelet[3384]: W1008 19:36:45.161841 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.162081 kubelet[3384]: E1008 19:36:45.161917 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.162644 kubelet[3384]: E1008 19:36:45.162540 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.162644 kubelet[3384]: W1008 19:36:45.162562 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.162644 kubelet[3384]: E1008 19:36:45.162617 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.163336 kubelet[3384]: E1008 19:36:45.163108 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.163336 kubelet[3384]: W1008 19:36:45.163129 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.163336 kubelet[3384]: E1008 19:36:45.163181 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.163738 kubelet[3384]: E1008 19:36:45.163618 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.163738 kubelet[3384]: W1008 19:36:45.163639 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.164134 kubelet[3384]: E1008 19:36:45.163904 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.164625 kubelet[3384]: E1008 19:36:45.164535 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.164625 kubelet[3384]: W1008 19:36:45.164564 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.165155 kubelet[3384]: E1008 19:36:45.164971 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.165747 kubelet[3384]: E1008 19:36:45.165723 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.165944 kubelet[3384]: W1008 19:36:45.165837 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.166078 kubelet[3384]: E1008 19:36:45.166028 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.167292 kubelet[3384]: E1008 19:36:45.166928 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.167292 kubelet[3384]: W1008 19:36:45.166979 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.167575 kubelet[3384]: E1008 19:36:45.167553 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.167855 kubelet[3384]: E1008 19:36:45.167748 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.167855 kubelet[3384]: W1008 19:36:45.167766 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.167855 kubelet[3384]: E1008 19:36:45.167829 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.168535 kubelet[3384]: E1008 19:36:45.168422 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.168535 kubelet[3384]: W1008 19:36:45.168446 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.168535 kubelet[3384]: E1008 19:36:45.168485 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.169005 kubelet[3384]: E1008 19:36:45.168843 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.169005 kubelet[3384]: W1008 19:36:45.168868 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.169005 kubelet[3384]: E1008 19:36:45.168922 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.170494 kubelet[3384]: E1008 19:36:45.170450 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.170494 kubelet[3384]: W1008 19:36:45.170480 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.170904 kubelet[3384]: E1008 19:36:45.170522 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.170974 kubelet[3384]: E1008 19:36:45.170919 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.170974 kubelet[3384]: W1008 19:36:45.170937 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.170974 kubelet[3384]: E1008 19:36:45.170963 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.172612 kubelet[3384]: E1008 19:36:45.172538 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.172612 kubelet[3384]: W1008 19:36:45.172612 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.172790 kubelet[3384]: E1008 19:36:45.172649 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.264178 kubelet[3384]: E1008 19:36:45.264017 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.264178 kubelet[3384]: W1008 19:36:45.264048 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.264178 kubelet[3384]: E1008 19:36:45.264081 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.264964 kubelet[3384]: E1008 19:36:45.264764 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.264964 kubelet[3384]: W1008 19:36:45.264786 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.264964 kubelet[3384]: E1008 19:36:45.264810 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.265543 kubelet[3384]: E1008 19:36:45.265337 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.265543 kubelet[3384]: W1008 19:36:45.265357 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.265543 kubelet[3384]: E1008 19:36:45.265381 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.266103 kubelet[3384]: E1008 19:36:45.265908 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.266103 kubelet[3384]: W1008 19:36:45.265925 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.266103 kubelet[3384]: E1008 19:36:45.265948 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.266549 kubelet[3384]: E1008 19:36:45.266429 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.266549 kubelet[3384]: W1008 19:36:45.266450 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.266549 kubelet[3384]: E1008 19:36:45.266473 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.267324 kubelet[3384]: E1008 19:36:45.267110 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.267324 kubelet[3384]: W1008 19:36:45.267201 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.267324 kubelet[3384]: E1008 19:36:45.267267 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.368130 kubelet[3384]: E1008 19:36:45.368014 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.369378 kubelet[3384]: W1008 19:36:45.369072 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.369378 kubelet[3384]: E1008 19:36:45.369123 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.369651 kubelet[3384]: E1008 19:36:45.369612 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.369651 kubelet[3384]: W1008 19:36:45.369642 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.369765 kubelet[3384]: E1008 19:36:45.369673 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.370088 kubelet[3384]: E1008 19:36:45.370049 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.370088 kubelet[3384]: W1008 19:36:45.370077 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.370252 kubelet[3384]: E1008 19:36:45.370107 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.370741 kubelet[3384]: E1008 19:36:45.370688 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.370741 kubelet[3384]: W1008 19:36:45.370727 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.370909 kubelet[3384]: E1008 19:36:45.370760 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.371299 kubelet[3384]: E1008 19:36:45.371233 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.371299 kubelet[3384]: W1008 19:36:45.371293 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.371446 kubelet[3384]: E1008 19:36:45.371337 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.371739 kubelet[3384]: E1008 19:36:45.371700 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.371739 kubelet[3384]: W1008 19:36:45.371729 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.371861 kubelet[3384]: E1008 19:36:45.371757 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.472778 kubelet[3384]: E1008 19:36:45.472624 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.473042 kubelet[3384]: W1008 19:36:45.472860 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.473042 kubelet[3384]: E1008 19:36:45.472904 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.477660 kubelet[3384]: E1008 19:36:45.477607 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.477660 kubelet[3384]: W1008 19:36:45.477645 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.477973 kubelet[3384]: E1008 19:36:45.477684 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.481797 kubelet[3384]: E1008 19:36:45.480472 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.481969 kubelet[3384]: W1008 19:36:45.481789 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.481969 kubelet[3384]: E1008 19:36:45.481847 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.484122 kubelet[3384]: E1008 19:36:45.484080 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.484122 kubelet[3384]: W1008 19:36:45.484115 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.484340 kubelet[3384]: E1008 19:36:45.484152 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.486292 kubelet[3384]: E1008 19:36:45.486251 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.486292 kubelet[3384]: W1008 19:36:45.486285 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.486467 kubelet[3384]: E1008 19:36:45.486321 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.490836 kubelet[3384]: E1008 19:36:45.490778 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.490836 kubelet[3384]: W1008 19:36:45.490822 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.491039 kubelet[3384]: E1008 19:36:45.490861 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.515389 kubelet[3384]: E1008 19:36:45.514496 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.517872 kubelet[3384]: W1008 19:36:45.516374 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.517872 kubelet[3384]: E1008 19:36:45.517555 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.592313 kubelet[3384]: E1008 19:36:45.591926 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.592313 kubelet[3384]: W1008 19:36:45.591960 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.592313 kubelet[3384]: E1008 19:36:45.592014 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.593698 kubelet[3384]: E1008 19:36:45.593097 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.593698 kubelet[3384]: W1008 19:36:45.593121 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.593698 kubelet[3384]: E1008 19:36:45.593150 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.593698 kubelet[3384]: E1008 19:36:45.593522 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.593698 kubelet[3384]: W1008 19:36:45.593538 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.593698 kubelet[3384]: E1008 19:36:45.593561 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.594498 kubelet[3384]: E1008 19:36:45.594257 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.594498 kubelet[3384]: W1008 19:36:45.594273 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.594498 kubelet[3384]: E1008 19:36:45.594297 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.594984 kubelet[3384]: E1008 19:36:45.594889 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.594984 kubelet[3384]: W1008 19:36:45.594908 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.594984 kubelet[3384]: E1008 19:36:45.594932 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.695919 kubelet[3384]: E1008 19:36:45.695768 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.695919 kubelet[3384]: W1008 19:36:45.695798 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.695919 kubelet[3384]: E1008 19:36:45.695830 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.696793 kubelet[3384]: E1008 19:36:45.696511 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.696793 kubelet[3384]: W1008 19:36:45.696533 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.696793 kubelet[3384]: E1008 19:36:45.696558 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.697312 kubelet[3384]: E1008 19:36:45.697101 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.697312 kubelet[3384]: W1008 19:36:45.697121 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.697312 kubelet[3384]: E1008 19:36:45.697149 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.697910 kubelet[3384]: E1008 19:36:45.697719 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.697910 kubelet[3384]: W1008 19:36:45.697739 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.697910 kubelet[3384]: E1008 19:36:45.697763 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.698373 kubelet[3384]: E1008 19:36:45.698275 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.698373 kubelet[3384]: W1008 19:36:45.698294 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.698373 kubelet[3384]: E1008 19:36:45.698317 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.731773 kubelet[3384]: E1008 19:36:45.731543 3384 configmap.go:199] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:45.731773 kubelet[3384]: E1008 19:36:45.731636 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle podName:536c0ddd-ab88-451b-9828-cab6c38a8e7e nodeName:}" failed. No retries permitted until 2024-10-08 19:36:46.23160932 +0000 UTC m=+23.792014564 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle") pod "calico-typha-67689f86fb-vvlpb" (UID: "536c0ddd-ab88-451b-9828-cab6c38a8e7e") : failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:45.752474 kubelet[3384]: E1008 19:36:45.751963 3384 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:45.752474 kubelet[3384]: E1008 19:36:45.752023 3384 projected.go:200] Error preparing data for projected volume kube-api-access-nqrf5 for pod calico-system/calico-typha-67689f86fb-vvlpb: failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:45.752474 kubelet[3384]: E1008 19:36:45.752101 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5 podName:536c0ddd-ab88-451b-9828-cab6c38a8e7e nodeName:}" failed. No retries permitted until 2024-10-08 19:36:46.252073964 +0000 UTC m=+23.812479208 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nqrf5" (UniqueName: "kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5") pod "calico-typha-67689f86fb-vvlpb" (UID: "536c0ddd-ab88-451b-9828-cab6c38a8e7e") : failed to sync configmap cache: timed out waiting for the condition Oct 8 19:36:45.799666 kubelet[3384]: E1008 19:36:45.799503 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.799666 kubelet[3384]: W1008 19:36:45.799534 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.799666 kubelet[3384]: E1008 19:36:45.799566 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.801600 kubelet[3384]: E1008 19:36:45.801164 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.801600 kubelet[3384]: W1008 19:36:45.801195 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.801600 kubelet[3384]: E1008 19:36:45.801277 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.802242 kubelet[3384]: E1008 19:36:45.801955 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.802242 kubelet[3384]: W1008 19:36:45.801975 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.802242 kubelet[3384]: E1008 19:36:45.802002 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.802962 kubelet[3384]: E1008 19:36:45.802711 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.802962 kubelet[3384]: W1008 19:36:45.802732 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.802962 kubelet[3384]: E1008 19:36:45.802761 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.803511 kubelet[3384]: E1008 19:36:45.803441 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.803511 kubelet[3384]: W1008 19:36:45.803463 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.803701 kubelet[3384]: E1008 19:36:45.803639 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.843257 kubelet[3384]: E1008 19:36:45.841260 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.843995 kubelet[3384]: W1008 19:36:45.843546 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.843995 kubelet[3384]: E1008 19:36:45.843639 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.867844 kubelet[3384]: E1008 19:36:45.867549 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.867844 kubelet[3384]: W1008 19:36:45.867586 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.867844 kubelet[3384]: E1008 19:36:45.867624 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.904416 kubelet[3384]: E1008 19:36:45.904370 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.904614 kubelet[3384]: W1008 19:36:45.904590 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.904751 kubelet[3384]: E1008 19:36:45.904729 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.905441 kubelet[3384]: E1008 19:36:45.905303 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.905441 kubelet[3384]: W1008 19:36:45.905325 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.905441 kubelet[3384]: E1008 19:36:45.905353 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:45.906040 kubelet[3384]: E1008 19:36:45.905948 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.906040 kubelet[3384]: W1008 19:36:45.905969 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.906040 kubelet[3384]: E1008 19:36:45.905993 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:45.988391 kubelet[3384]: E1008 19:36:45.988266 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:45.988852 kubelet[3384]: W1008 19:36:45.988538 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:45.988852 kubelet[3384]: E1008 19:36:45.988581 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.007807 kubelet[3384]: E1008 19:36:46.007650 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.007807 kubelet[3384]: W1008 19:36:46.007679 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.007807 kubelet[3384]: E1008 19:36:46.007711 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.008575 kubelet[3384]: E1008 19:36:46.008454 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.008575 kubelet[3384]: W1008 19:36:46.008476 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.008575 kubelet[3384]: E1008 19:36:46.008503 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.076041 containerd[1937]: time="2024-10-08T19:36:46.075901994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sfk7,Uid:f1ded46c-d09b-4ce9-80cd-36dce5274ecb,Namespace:calico-system,Attempt:0,}" Oct 8 19:36:46.115448 kubelet[3384]: E1008 19:36:46.113475 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.115448 kubelet[3384]: W1008 19:36:46.113642 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.115448 kubelet[3384]: E1008 19:36:46.113685 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.115448 kubelet[3384]: E1008 19:36:46.115261 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.115448 kubelet[3384]: W1008 19:36:46.115290 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.116541 kubelet[3384]: E1008 19:36:46.116262 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.133241 containerd[1937]: time="2024-10-08T19:36:46.132750338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:46.133468 containerd[1937]: time="2024-10-08T19:36:46.133389446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:46.133570 containerd[1937]: time="2024-10-08T19:36:46.133508810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:46.133941 containerd[1937]: time="2024-10-08T19:36:46.133873130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:46.174664 systemd[1]: run-containerd-runc-k8s.io-ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38-runc.ZHGHX4.mount: Deactivated successfully. Oct 8 19:36:46.191551 systemd[1]: Started cri-containerd-ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38.scope - libcontainer container ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38. 
Oct 8 19:36:46.219569 kubelet[3384]: E1008 19:36:46.219419 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.219569 kubelet[3384]: W1008 19:36:46.219479 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.219569 kubelet[3384]: E1008 19:36:46.219518 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.220661 kubelet[3384]: E1008 19:36:46.220502 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.220661 kubelet[3384]: W1008 19:36:46.220532 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.220661 kubelet[3384]: E1008 19:36:46.220564 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.263574 containerd[1937]: time="2024-10-08T19:36:46.261451539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9sfk7,Uid:f1ded46c-d09b-4ce9-80cd-36dce5274ecb,Namespace:calico-system,Attempt:0,} returns sandbox id \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\"" Oct 8 19:36:46.267518 containerd[1937]: time="2024-10-08T19:36:46.267452703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\"" Oct 8 19:36:46.322045 kubelet[3384]: E1008 19:36:46.321998 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.322045 kubelet[3384]: W1008 19:36:46.322033 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.322297 kubelet[3384]: E1008 19:36:46.322069 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.322636 kubelet[3384]: E1008 19:36:46.322554 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.322636 kubelet[3384]: W1008 19:36:46.322606 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.323622 kubelet[3384]: E1008 19:36:46.322652 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.323622 kubelet[3384]: E1008 19:36:46.322994 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.323622 kubelet[3384]: W1008 19:36:46.323011 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.323622 kubelet[3384]: E1008 19:36:46.323036 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.323622 kubelet[3384]: E1008 19:36:46.323499 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.323622 kubelet[3384]: W1008 19:36:46.323541 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.323622 kubelet[3384]: E1008 19:36:46.323621 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.324123 kubelet[3384]: E1008 19:36:46.324034 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.324123 kubelet[3384]: W1008 19:36:46.324054 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.324123 kubelet[3384]: E1008 19:36:46.324080 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.324560 kubelet[3384]: E1008 19:36:46.324528 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.324640 kubelet[3384]: W1008 19:36:46.324574 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.324702 kubelet[3384]: E1008 19:36:46.324665 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.325185 kubelet[3384]: E1008 19:36:46.325146 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.325298 kubelet[3384]: W1008 19:36:46.325185 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.325298 kubelet[3384]: E1008 19:36:46.325214 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.327512 kubelet[3384]: E1008 19:36:46.327470 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.327512 kubelet[3384]: W1008 19:36:46.327504 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.327701 kubelet[3384]: E1008 19:36:46.327551 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.327926 kubelet[3384]: E1008 19:36:46.327893 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.327926 kubelet[3384]: W1008 19:36:46.327919 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.328052 kubelet[3384]: E1008 19:36:46.328031 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.328491 kubelet[3384]: E1008 19:36:46.328457 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.328491 kubelet[3384]: W1008 19:36:46.328483 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.328640 kubelet[3384]: E1008 19:36:46.328567 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:36:46.329106 kubelet[3384]: E1008 19:36:46.329073 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.329106 kubelet[3384]: W1008 19:36:46.329098 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.329256 kubelet[3384]: E1008 19:36:46.329147 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.339983 kubelet[3384]: E1008 19:36:46.339897 3384 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:36:46.343013 kubelet[3384]: W1008 19:36:46.340544 3384 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:36:46.343013 kubelet[3384]: E1008 19:36:46.340594 3384 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:36:46.384436 containerd[1937]: time="2024-10-08T19:36:46.383877231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67689f86fb-vvlpb,Uid:536c0ddd-ab88-451b-9828-cab6c38a8e7e,Namespace:calico-system,Attempt:0,}" Oct 8 19:36:46.431667 containerd[1937]: time="2024-10-08T19:36:46.431518420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:46.432859 containerd[1937]: time="2024-10-08T19:36:46.432746356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:46.433180 containerd[1937]: time="2024-10-08T19:36:46.433009348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:46.433933 containerd[1937]: time="2024-10-08T19:36:46.433583128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:46.468822 systemd[1]: Started cri-containerd-3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d.scope - libcontainer container 3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d. Oct 8 19:36:46.584934 containerd[1937]: time="2024-10-08T19:36:46.583565236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67689f86fb-vvlpb,Uid:536c0ddd-ab88-451b-9828-cab6c38a8e7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\"" Oct 8 19:36:46.743291 kubelet[3384]: E1008 19:36:46.743209 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" Oct 8 19:36:47.590686 containerd[1937]: time="2024-10-08T19:36:47.590609309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:47.593311 containerd[1937]: time="2024-10-08T19:36:47.593204609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957" Oct 8 19:36:47.595511 containerd[1937]: time="2024-10-08T19:36:47.595430645Z" level=info msg="ImageCreate event 
name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:47.601700 containerd[1937]: time="2024-10-08T19:36:47.601553850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:47.604537 containerd[1937]: time="2024-10-08T19:36:47.604459962Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.336929367s" Oct 8 19:36:47.604537 containerd[1937]: time="2024-10-08T19:36:47.604529274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\"" Oct 8 19:36:47.606581 containerd[1937]: time="2024-10-08T19:36:47.606464274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\"" Oct 8 19:36:47.613183 containerd[1937]: time="2024-10-08T19:36:47.612729366Z" level=info msg="CreateContainer within sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:36:47.656418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4086823293.mount: Deactivated successfully. 
Oct 8 19:36:47.705027 containerd[1937]: time="2024-10-08T19:36:47.704935938Z" level=info msg="CreateContainer within sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\"" Oct 8 19:36:47.706609 containerd[1937]: time="2024-10-08T19:36:47.706537830Z" level=info msg="StartContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\"" Oct 8 19:36:47.804645 systemd[1]: Started cri-containerd-77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1.scope - libcontainer container 77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1. Oct 8 19:36:47.970183 containerd[1937]: time="2024-10-08T19:36:47.970107811Z" level=info msg="StartContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" returns successfully" Oct 8 19:36:48.113888 systemd[1]: cri-containerd-77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1.scope: Deactivated successfully. Oct 8 19:36:48.222996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1-rootfs.mount: Deactivated successfully. 
Oct 8 19:36:48.745257 kubelet[3384]: E1008 19:36:48.743427 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" Oct 8 19:36:48.922109 containerd[1937]: time="2024-10-08T19:36:48.921982652Z" level=info msg="StopContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" with timeout 5 (s)" Oct 8 19:36:48.929379 containerd[1937]: time="2024-10-08T19:36:48.928817648Z" level=info msg="Stop container \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" with signal terminated" Oct 8 19:36:48.930842 containerd[1937]: time="2024-10-08T19:36:48.930705620Z" level=info msg="shim disconnected" id=77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1 namespace=k8s.io Oct 8 19:36:48.930842 containerd[1937]: time="2024-10-08T19:36:48.930790124Z" level=warning msg="cleaning up after shim disconnected" id=77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1 namespace=k8s.io Oct 8 19:36:48.930842 containerd[1937]: time="2024-10-08T19:36:48.930811652Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:36:48.962324 containerd[1937]: time="2024-10-08T19:36:48.962136488Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:36:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:36:48.968340 containerd[1937]: time="2024-10-08T19:36:48.968270192Z" level=info msg="StopContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" returns successfully" Oct 8 19:36:48.969763 containerd[1937]: time="2024-10-08T19:36:48.969635984Z" level=info msg="StopPodSandbox for 
\"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\"" Oct 8 19:36:48.970545 containerd[1937]: time="2024-10-08T19:36:48.970345424Z" level=info msg="Container to stop \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:36:48.978495 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38-shm.mount: Deactivated successfully. Oct 8 19:36:48.998914 systemd[1]: cri-containerd-ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38.scope: Deactivated successfully. Oct 8 19:36:49.077950 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38-rootfs.mount: Deactivated successfully. Oct 8 19:36:49.087390 containerd[1937]: time="2024-10-08T19:36:49.087305345Z" level=info msg="shim disconnected" id=ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38 namespace=k8s.io Oct 8 19:36:49.087390 containerd[1937]: time="2024-10-08T19:36:49.087380861Z" level=warning msg="cleaning up after shim disconnected" id=ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38 namespace=k8s.io Oct 8 19:36:49.088164 containerd[1937]: time="2024-10-08T19:36:49.087403169Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:36:49.129473 containerd[1937]: time="2024-10-08T19:36:49.129366749Z" level=info msg="TearDown network for sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" successfully" Oct 8 19:36:49.130394 containerd[1937]: time="2024-10-08T19:36:49.129870065Z" level=info msg="StopPodSandbox for \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" returns successfully" Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.263746 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-lib-calico\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.263878 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-xtables-lock\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.263945 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-flexvol-driver-host\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.264011 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-lib-modules\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.264080 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztq9n\" (UniqueName: \"kubernetes.io/projected/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-kube-api-access-ztq9n\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.264559 kubelet[3384]: I1008 19:36:49.264126 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-net-dir\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.265177 kubelet[3384]: I1008 19:36:49.264170 3384 
reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-log-dir\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.267359 kubelet[3384]: I1008 19:36:49.265348 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.267359 kubelet[3384]: I1008 19:36:49.265426 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.267359 kubelet[3384]: I1008 19:36:49.265466 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.267359 kubelet[3384]: I1008 19:36:49.265506 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.267359 kubelet[3384]: I1008 19:36:49.265555 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268134 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268212 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268297 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-bin-dir\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268359 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-tigera-ca-bundle\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268411 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-run-calico\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.269768 kubelet[3384]: I1008 19:36:49.268455 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-policysync\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268499 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-node-certs\") pod \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\" (UID: \"f1ded46c-d09b-4ce9-80cd-36dce5274ecb\") " Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268573 3384 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-lib-calico\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268609 3384 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-xtables-lock\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268633 3384 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-flexvol-driver-host\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268658 3384 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-lib-modules\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268682 3384 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-net-dir\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268707 3384 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-log-dir\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.270345 kubelet[3384]: I1008 19:36:49.268730 3384 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-cni-bin-dir\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.289354 kubelet[3384]: I1008 19:36:49.284955 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-run-calico" 
(OuterVolumeSpecName: "var-run-calico") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.289354 kubelet[3384]: I1008 19:36:49.284955 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-policysync" (OuterVolumeSpecName: "policysync") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 8 19:36:49.289354 kubelet[3384]: I1008 19:36:49.285725 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:36:49.295616 kubelet[3384]: I1008 19:36:49.295563 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-node-certs" (OuterVolumeSpecName: "node-certs") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:36:49.296072 systemd[1]: var-lib-kubelet-pods-f1ded46c\x2dd09b\x2d4ce9\x2d80cd\x2d36dce5274ecb-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Oct 8 19:36:49.309621 systemd[1]: var-lib-kubelet-pods-f1ded46c\x2dd09b\x2d4ce9\x2d80cd\x2d36dce5274ecb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztq9n.mount: Deactivated successfully. 
Oct 8 19:36:49.317575 kubelet[3384]: I1008 19:36:49.317488 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-kube-api-access-ztq9n" (OuterVolumeSpecName: "kube-api-access-ztq9n") pod "f1ded46c-d09b-4ce9-80cd-36dce5274ecb" (UID: "f1ded46c-d09b-4ce9-80cd-36dce5274ecb"). InnerVolumeSpecName "kube-api-access-ztq9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:36:49.369442 kubelet[3384]: I1008 19:36:49.369209 3384 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-var-run-calico\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.369442 kubelet[3384]: I1008 19:36:49.369329 3384 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-policysync\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.369442 kubelet[3384]: I1008 19:36:49.369357 3384 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-node-certs\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.369442 kubelet[3384]: I1008 19:36:49.369384 3384 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ztq9n\" (UniqueName: \"kubernetes.io/projected/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-kube-api-access-ztq9n\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.369442 kubelet[3384]: I1008 19:36:49.369408 3384 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ded46c-d09b-4ce9-80cd-36dce5274ecb-tigera-ca-bundle\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:49.926737 kubelet[3384]: I1008 19:36:49.926684 3384 scope.go:117] "RemoveContainer" 
containerID="77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1" Oct 8 19:36:49.938161 containerd[1937]: time="2024-10-08T19:36:49.938084085Z" level=info msg="RemoveContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\"" Oct 8 19:36:49.947924 containerd[1937]: time="2024-10-08T19:36:49.947868897Z" level=info msg="RemoveContainer for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" returns successfully" Oct 8 19:36:49.948983 kubelet[3384]: I1008 19:36:49.948831 3384 scope.go:117] "RemoveContainer" containerID="77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1" Oct 8 19:36:49.950163 containerd[1937]: time="2024-10-08T19:36:49.949494081Z" level=error msg="ContainerStatus for \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\": not found" Oct 8 19:36:49.950356 kubelet[3384]: E1008 19:36:49.949813 3384 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\": not found" containerID="77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1" Oct 8 19:36:49.950356 kubelet[3384]: I1008 19:36:49.949906 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1"} err="failed to get container status \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\": rpc error: code = NotFound desc = an error occurred when try to find container \"77fb043966a415fc03c9c2b2c1a5b85f7b6da172096ed246adefa98bcf3f64a1\": not found" Oct 8 19:36:49.961914 systemd[1]: Removed slice kubepods-besteffort-podf1ded46c_d09b_4ce9_80cd_36dce5274ecb.slice - 
libcontainer container kubepods-besteffort-podf1ded46c_d09b_4ce9_80cd_36dce5274ecb.slice. Oct 8 19:36:50.066348 kubelet[3384]: I1008 19:36:50.065185 3384 topology_manager.go:215] "Topology Admit Handler" podUID="d93bb137-6f93-49b2-81bd-8bd6c2912b2a" podNamespace="calico-system" podName="calico-node-89wvx" Oct 8 19:36:50.066348 kubelet[3384]: E1008 19:36:50.065523 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f1ded46c-d09b-4ce9-80cd-36dce5274ecb" containerName="flexvol-driver" Oct 8 19:36:50.066348 kubelet[3384]: I1008 19:36:50.065605 3384 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1ded46c-d09b-4ce9-80cd-36dce5274ecb" containerName="flexvol-driver" Oct 8 19:36:50.104760 systemd[1]: Created slice kubepods-besteffort-podd93bb137_6f93_49b2_81bd_8bd6c2912b2a.slice - libcontainer container kubepods-besteffort-podd93bb137_6f93_49b2_81bd_8bd6c2912b2a.slice. Oct 8 19:36:50.176322 kubelet[3384]: I1008 19:36:50.176159 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-node-certs\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176511 kubelet[3384]: I1008 19:36:50.176352 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-var-lib-calico\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176511 kubelet[3384]: I1008 19:36:50.176406 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-flexvol-driver-host\") pod \"calico-node-89wvx\" (UID: 
\"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176511 kubelet[3384]: I1008 19:36:50.176460 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-lib-modules\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176511 kubelet[3384]: I1008 19:36:50.176508 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-var-run-calico\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176881 kubelet[3384]: I1008 19:36:50.176555 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-tigera-ca-bundle\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176881 kubelet[3384]: I1008 19:36:50.176603 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-policysync\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176881 kubelet[3384]: I1008 19:36:50.176648 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-cni-net-dir\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 
8 19:36:50.176881 kubelet[3384]: I1008 19:36:50.176697 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-xtables-lock\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.176881 kubelet[3384]: I1008 19:36:50.176741 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpvsp\" (UniqueName: \"kubernetes.io/projected/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-kube-api-access-rpvsp\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.177171 kubelet[3384]: I1008 19:36:50.176783 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-cni-bin-dir\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.177171 kubelet[3384]: I1008 19:36:50.176827 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d93bb137-6f93-49b2-81bd-8bd6c2912b2a-cni-log-dir\") pod \"calico-node-89wvx\" (UID: \"d93bb137-6f93-49b2-81bd-8bd6c2912b2a\") " pod="calico-system/calico-node-89wvx" Oct 8 19:36:50.416744 containerd[1937]: time="2024-10-08T19:36:50.416510743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-89wvx,Uid:d93bb137-6f93-49b2-81bd-8bd6c2912b2a,Namespace:calico-system,Attempt:0,}" Oct 8 19:36:50.544182 containerd[1937]: time="2024-10-08T19:36:50.542920856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:50.544182 containerd[1937]: time="2024-10-08T19:36:50.543519524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:50.544182 containerd[1937]: time="2024-10-08T19:36:50.543696032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:50.548723 containerd[1937]: time="2024-10-08T19:36:50.545553332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:50.631567 systemd[1]: Started cri-containerd-64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e.scope - libcontainer container 64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e. Oct 8 19:36:50.747407 kubelet[3384]: E1008 19:36:50.746431 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" Oct 8 19:36:50.758570 kubelet[3384]: I1008 19:36:50.758511 3384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f1ded46c-d09b-4ce9-80cd-36dce5274ecb" path="/var/lib/kubelet/pods/f1ded46c-d09b-4ce9-80cd-36dce5274ecb/volumes" Oct 8 19:36:50.838825 containerd[1937]: time="2024-10-08T19:36:50.838147822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-89wvx,Uid:d93bb137-6f93-49b2-81bd-8bd6c2912b2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\"" Oct 8 19:36:50.849960 containerd[1937]: time="2024-10-08T19:36:50.849884146Z" level=info msg="CreateContainer within sandbox 
\"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 8 19:36:50.913276 containerd[1937]: time="2024-10-08T19:36:50.910962334Z" level=info msg="CreateContainer within sandbox \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd\"" Oct 8 19:36:50.913276 containerd[1937]: time="2024-10-08T19:36:50.912359254Z" level=info msg="StartContainer for \"f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd\"" Oct 8 19:36:51.088567 systemd[1]: Started cri-containerd-f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd.scope - libcontainer container f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd. Oct 8 19:36:51.126355 containerd[1937]: time="2024-10-08T19:36:51.125753719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:51.130788 containerd[1937]: time="2024-10-08T19:36:51.130707967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479" Oct 8 19:36:51.131436 containerd[1937]: time="2024-10-08T19:36:51.131354875Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:51.142290 containerd[1937]: time="2024-10-08T19:36:51.141407491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:36:51.144867 containerd[1937]: time="2024-10-08T19:36:51.144799735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id 
\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 3.538250213s" Oct 8 19:36:51.145446 containerd[1937]: time="2024-10-08T19:36:51.145400755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\"" Oct 8 19:36:51.175131 containerd[1937]: time="2024-10-08T19:36:51.175075783Z" level=info msg="CreateContainer within sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 8 19:36:51.210002 containerd[1937]: time="2024-10-08T19:36:51.208947343Z" level=info msg="CreateContainer within sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\"" Oct 8 19:36:51.212822 containerd[1937]: time="2024-10-08T19:36:51.212761147Z" level=info msg="StartContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\"" Oct 8 19:36:51.289984 systemd[1]: Started cri-containerd-9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133.scope - libcontainer container 9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133. Oct 8 19:36:51.389462 containerd[1937]: time="2024-10-08T19:36:51.388998068Z" level=info msg="StartContainer for \"f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd\" returns successfully" Oct 8 19:36:51.476528 systemd[1]: cri-containerd-f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd.scope: Deactivated successfully. 
Oct 8 19:36:51.480967 containerd[1937]: time="2024-10-08T19:36:51.477629625Z" level=info msg="StartContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" returns successfully" Oct 8 19:36:51.571641 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd-rootfs.mount: Deactivated successfully. Oct 8 19:36:51.987703 containerd[1937]: time="2024-10-08T19:36:51.987373847Z" level=info msg="StopContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" with timeout 300 (s)" Oct 8 19:36:51.993284 containerd[1937]: time="2024-10-08T19:36:51.991160663Z" level=info msg="Stop container \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" with signal terminated" Oct 8 19:36:52.064433 systemd[1]: cri-containerd-9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133.scope: Deactivated successfully. Oct 8 19:36:52.084130 kubelet[3384]: I1008 19:36:52.083877 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-67689f86fb-vvlpb" podStartSLOduration=3.526538369 podStartE2EDuration="8.083814368s" podCreationTimestamp="2024-10-08 19:36:44 +0000 UTC" firstStartedPulling="2024-10-08 19:36:46.588637276 +0000 UTC m=+24.149042520" lastFinishedPulling="2024-10-08 19:36:51.145913275 +0000 UTC m=+28.706318519" observedRunningTime="2024-10-08 19:36:52.021291355 +0000 UTC m=+29.581696647" watchObservedRunningTime="2024-10-08 19:36:52.083814368 +0000 UTC m=+29.644219948" Oct 8 19:36:52.151039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133-rootfs.mount: Deactivated successfully. 
Oct 8 19:36:52.235724 containerd[1937]: time="2024-10-08T19:36:52.235282389Z" level=info msg="shim disconnected" id=9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133 namespace=k8s.io Oct 8 19:36:52.237520 containerd[1937]: time="2024-10-08T19:36:52.235503273Z" level=info msg="shim disconnected" id=f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd namespace=k8s.io Oct 8 19:36:52.237520 containerd[1937]: time="2024-10-08T19:36:52.236369325Z" level=warning msg="cleaning up after shim disconnected" id=f1aeb7358f253e0564be775caefc6e94d44076bec928050f730f380612bc4acd namespace=k8s.io Oct 8 19:36:52.237520 containerd[1937]: time="2024-10-08T19:36:52.236430381Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:36:52.237520 containerd[1937]: time="2024-10-08T19:36:52.236513421Z" level=warning msg="cleaning up after shim disconnected" id=9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133 namespace=k8s.io Oct 8 19:36:52.237520 containerd[1937]: time="2024-10-08T19:36:52.236537217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:36:52.284011 containerd[1937]: time="2024-10-08T19:36:52.282984609Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:36:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:36:52.289013 containerd[1937]: time="2024-10-08T19:36:52.288705381Z" level=info msg="StopContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" returns successfully" Oct 8 19:36:52.289854 containerd[1937]: time="2024-10-08T19:36:52.289785897Z" level=info msg="StopPodSandbox for \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\"" Oct 8 19:36:52.293289 containerd[1937]: time="2024-10-08T19:36:52.289859817Z" level=info msg="Container to stop \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" must be in 
running or unknown state, current state \"CONTAINER_EXITED\"" Oct 8 19:36:52.295341 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d-shm.mount: Deactivated successfully. Oct 8 19:36:52.317485 systemd[1]: cri-containerd-3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d.scope: Deactivated successfully. Oct 8 19:36:52.386481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d-rootfs.mount: Deactivated successfully. Oct 8 19:36:52.396627 containerd[1937]: time="2024-10-08T19:36:52.396279285Z" level=info msg="shim disconnected" id=3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d namespace=k8s.io Oct 8 19:36:52.396627 containerd[1937]: time="2024-10-08T19:36:52.396357693Z" level=warning msg="cleaning up after shim disconnected" id=3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d namespace=k8s.io Oct 8 19:36:52.396627 containerd[1937]: time="2024-10-08T19:36:52.396379773Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:36:52.434851 containerd[1937]: time="2024-10-08T19:36:52.434647114Z" level=info msg="TearDown network for sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" successfully" Oct 8 19:36:52.434851 containerd[1937]: time="2024-10-08T19:36:52.434707102Z" level=info msg="StopPodSandbox for \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" returns successfully" Oct 8 19:36:52.486338 kubelet[3384]: I1008 19:36:52.486207 3384 topology_manager.go:215] "Topology Admit Handler" podUID="cfded20d-1526-493e-b70b-2d74c843cf5d" podNamespace="calico-system" podName="calico-typha-8d6cb5bdc-q8zsp" Oct 8 19:36:52.486755 kubelet[3384]: E1008 19:36:52.486383 3384 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="536c0ddd-ab88-451b-9828-cab6c38a8e7e" containerName="calico-typha" Oct 8 19:36:52.486755 
kubelet[3384]: I1008 19:36:52.486452 3384 memory_manager.go:354] "RemoveStaleState removing state" podUID="536c0ddd-ab88-451b-9828-cab6c38a8e7e" containerName="calico-typha" Oct 8 19:36:52.508511 systemd[1]: Created slice kubepods-besteffort-podcfded20d_1526_493e_b70b_2d74c843cf5d.slice - libcontainer container kubepods-besteffort-podcfded20d_1526_493e_b70b_2d74c843cf5d.slice. Oct 8 19:36:52.509667 kubelet[3384]: I1008 19:36:52.508582 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqrf5\" (UniqueName: \"kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5\") pod \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " Oct 8 19:36:52.509667 kubelet[3384]: I1008 19:36:52.508655 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle\") pod \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " Oct 8 19:36:52.509667 kubelet[3384]: I1008 19:36:52.508703 3384 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/536c0ddd-ab88-451b-9828-cab6c38a8e7e-typha-certs\") pod \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\" (UID: \"536c0ddd-ab88-451b-9828-cab6c38a8e7e\") " Oct 8 19:36:52.509667 kubelet[3384]: I1008 19:36:52.508806 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cfded20d-1526-493e-b70b-2d74c843cf5d-tigera-ca-bundle\") pod \"calico-typha-8d6cb5bdc-q8zsp\" (UID: \"cfded20d-1526-493e-b70b-2d74c843cf5d\") " pod="calico-system/calico-typha-8d6cb5bdc-q8zsp" Oct 8 19:36:52.509667 kubelet[3384]: I1008 19:36:52.508871 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-65mw7\" (UniqueName: \"kubernetes.io/projected/cfded20d-1526-493e-b70b-2d74c843cf5d-kube-api-access-65mw7\") pod \"calico-typha-8d6cb5bdc-q8zsp\" (UID: \"cfded20d-1526-493e-b70b-2d74c843cf5d\") " pod="calico-system/calico-typha-8d6cb5bdc-q8zsp" Oct 8 19:36:52.510771 kubelet[3384]: I1008 19:36:52.508922 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cfded20d-1526-493e-b70b-2d74c843cf5d-typha-certs\") pod \"calico-typha-8d6cb5bdc-q8zsp\" (UID: \"cfded20d-1526-493e-b70b-2d74c843cf5d\") " pod="calico-system/calico-typha-8d6cb5bdc-q8zsp" Oct 8 19:36:52.525092 systemd[1]: var-lib-kubelet-pods-536c0ddd\x2dab88\x2d451b\x2d9828\x2dcab6c38a8e7e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqrf5.mount: Deactivated successfully. Oct 8 19:36:52.535120 kubelet[3384]: I1008 19:36:52.534614 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5" (OuterVolumeSpecName: "kube-api-access-nqrf5") pod "536c0ddd-ab88-451b-9828-cab6c38a8e7e" (UID: "536c0ddd-ab88-451b-9828-cab6c38a8e7e"). InnerVolumeSpecName "kube-api-access-nqrf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 8 19:36:52.537589 systemd[1]: var-lib-kubelet-pods-536c0ddd\x2dab88\x2d451b\x2d9828\x2dcab6c38a8e7e-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Oct 8 19:36:52.548462 kubelet[3384]: I1008 19:36:52.548029 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/536c0ddd-ab88-451b-9828-cab6c38a8e7e-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "536c0ddd-ab88-451b-9828-cab6c38a8e7e" (UID: "536c0ddd-ab88-451b-9828-cab6c38a8e7e"). InnerVolumeSpecName "typha-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 8 19:36:52.553020 systemd[1]: var-lib-kubelet-pods-536c0ddd\x2dab88\x2d451b\x2d9828\x2dcab6c38a8e7e-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Oct 8 19:36:52.554189 kubelet[3384]: I1008 19:36:52.554096 3384 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "536c0ddd-ab88-451b-9828-cab6c38a8e7e" (UID: "536c0ddd-ab88-451b-9828-cab6c38a8e7e"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 8 19:36:52.610341 kubelet[3384]: I1008 19:36:52.609427 3384 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqrf5\" (UniqueName: \"kubernetes.io/projected/536c0ddd-ab88-451b-9828-cab6c38a8e7e-kube-api-access-nqrf5\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:52.610341 kubelet[3384]: I1008 19:36:52.609481 3384 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536c0ddd-ab88-451b-9828-cab6c38a8e7e-tigera-ca-bundle\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:52.610341 kubelet[3384]: I1008 19:36:52.609507 3384 reconciler_common.go:300] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/536c0ddd-ab88-451b-9828-cab6c38a8e7e-typha-certs\") on node \"ip-172-31-27-200\" DevicePath \"\"" Oct 8 19:36:52.743593 kubelet[3384]: E1008 19:36:52.743514 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6" Oct 8 19:36:52.768951 systemd[1]: Removed slice 
kubepods-besteffort-pod536c0ddd_ab88_451b_9828_cab6c38a8e7e.slice - libcontainer container kubepods-besteffort-pod536c0ddd_ab88_451b_9828_cab6c38a8e7e.slice. Oct 8 19:36:52.853088 containerd[1937]: time="2024-10-08T19:36:52.852931992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d6cb5bdc-q8zsp,Uid:cfded20d-1526-493e-b70b-2d74c843cf5d,Namespace:calico-system,Attempt:0,}" Oct 8 19:36:52.916259 containerd[1937]: time="2024-10-08T19:36:52.915047376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:36:52.916259 containerd[1937]: time="2024-10-08T19:36:52.915440388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:36:52.916259 containerd[1937]: time="2024-10-08T19:36:52.915519096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:52.916259 containerd[1937]: time="2024-10-08T19:36:52.915818736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:36:52.954589 systemd[1]: Started cri-containerd-78a53572817bcabec54708241b14d33fa7054af172fe2362be78951690c41c88.scope - libcontainer container 78a53572817bcabec54708241b14d33fa7054af172fe2362be78951690c41c88. 
Oct 8 19:36:53.024864 containerd[1937]: time="2024-10-08T19:36:53.024522860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 8 19:36:53.035647 kubelet[3384]: I1008 19:36:53.034390 3384 scope.go:117] "RemoveContainer" containerID="9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133"
Oct 8 19:36:53.047263 containerd[1937]: time="2024-10-08T19:36:53.044987961Z" level=info msg="RemoveContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\""
Oct 8 19:36:53.067282 containerd[1937]: time="2024-10-08T19:36:53.066672369Z" level=info msg="RemoveContainer for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" returns successfully"
Oct 8 19:36:53.067420 kubelet[3384]: I1008 19:36:53.067056 3384 scope.go:117] "RemoveContainer" containerID="9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133"
Oct 8 19:36:53.070758 containerd[1937]: time="2024-10-08T19:36:53.069961905Z" level=error msg="ContainerStatus for \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\": not found"
Oct 8 19:36:53.074401 kubelet[3384]: E1008 19:36:53.072555 3384 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\": not found" containerID="9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133"
Oct 8 19:36:53.074401 kubelet[3384]: I1008 19:36:53.072662 3384 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133"} err="failed to get container status \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\": rpc error: code = NotFound desc = an error occurred when try to find container \"9475a97b0da7c5ffd05f0f4b9457c5ca40177fdf9743de55524d48501ba3e133\": not found"
Oct 8 19:36:53.124206 containerd[1937]: time="2024-10-08T19:36:53.123831045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8d6cb5bdc-q8zsp,Uid:cfded20d-1526-493e-b70b-2d74c843cf5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"78a53572817bcabec54708241b14d33fa7054af172fe2362be78951690c41c88\""
Oct 8 19:36:53.169633 containerd[1937]: time="2024-10-08T19:36:53.168934257Z" level=info msg="CreateContainer within sandbox \"78a53572817bcabec54708241b14d33fa7054af172fe2362be78951690c41c88\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:36:53.196106 containerd[1937]: time="2024-10-08T19:36:53.195894225Z" level=info msg="CreateContainer within sandbox \"78a53572817bcabec54708241b14d33fa7054af172fe2362be78951690c41c88\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bde7fcec9a4a9080dca3d4c159195253777ea8b3a8ee3afc266fc86ffe5f217e\""
Oct 8 19:36:53.197084 containerd[1937]: time="2024-10-08T19:36:53.196927509Z" level=info msg="StartContainer for \"bde7fcec9a4a9080dca3d4c159195253777ea8b3a8ee3afc266fc86ffe5f217e\""
Oct 8 19:36:53.264610 systemd[1]: Started cri-containerd-bde7fcec9a4a9080dca3d4c159195253777ea8b3a8ee3afc266fc86ffe5f217e.scope - libcontainer container bde7fcec9a4a9080dca3d4c159195253777ea8b3a8ee3afc266fc86ffe5f217e.
Oct 8 19:36:53.536331 containerd[1937]: time="2024-10-08T19:36:53.536274191Z" level=info msg="StartContainer for \"bde7fcec9a4a9080dca3d4c159195253777ea8b3a8ee3afc266fc86ffe5f217e\" returns successfully"
Oct 8 19:36:54.114424 kubelet[3384]: I1008 19:36:54.114306 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8d6cb5bdc-q8zsp" podStartSLOduration=7.114204154 podStartE2EDuration="7.114204154s" podCreationTimestamp="2024-10-08 19:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:36:54.079865626 +0000 UTC m=+31.640270894" watchObservedRunningTime="2024-10-08 19:36:54.114204154 +0000 UTC m=+31.674609410"
Oct 8 19:36:54.744322 kubelet[3384]: E1008 19:36:54.743730 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6"
Oct 8 19:36:54.752091 kubelet[3384]: I1008 19:36:54.751150 3384 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="536c0ddd-ab88-451b-9828-cab6c38a8e7e" path="/var/lib/kubelet/pods/536c0ddd-ab88-451b-9828-cab6c38a8e7e/volumes"
Oct 8 19:36:56.743879 kubelet[3384]: E1008 19:36:56.743837 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6"
Oct 8 19:36:57.402361 containerd[1937]: time="2024-10-08T19:36:57.401872454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:57.403753 containerd[1937]: time="2024-10-08T19:36:57.403680230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Oct 8 19:36:57.405427 containerd[1937]: time="2024-10-08T19:36:57.405337670Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:57.409804 containerd[1937]: time="2024-10-08T19:36:57.409721678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:36:57.411530 containerd[1937]: time="2024-10-08T19:36:57.411264266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 4.386675862s"
Oct 8 19:36:57.411530 containerd[1937]: time="2024-10-08T19:36:57.411321410Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Oct 8 19:36:57.416640 containerd[1937]: time="2024-10-08T19:36:57.416567150Z" level=info msg="CreateContainer within sandbox \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 8 19:36:57.442904 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3524453222.mount: Deactivated successfully.
Oct 8 19:36:57.446657 containerd[1937]: time="2024-10-08T19:36:57.446574878Z" level=info msg="CreateContainer within sandbox \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab\""
Oct 8 19:36:57.448288 containerd[1937]: time="2024-10-08T19:36:57.448202318Z" level=info msg="StartContainer for \"03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab\""
Oct 8 19:36:57.511733 systemd[1]: Started cri-containerd-03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab.scope - libcontainer container 03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab.
Oct 8 19:36:57.580913 containerd[1937]: time="2024-10-08T19:36:57.580834455Z" level=info msg="StartContainer for \"03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab\" returns successfully"
Oct 8 19:36:58.745599 kubelet[3384]: E1008 19:36:58.743476 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6"
Oct 8 19:36:59.732429 systemd[1]: cri-containerd-03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab.scope: Deactivated successfully.
Oct 8 19:36:59.740710 kubelet[3384]: I1008 19:36:59.739854 3384 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Oct 8 19:36:59.805326 kubelet[3384]: I1008 19:36:59.805283 3384 topology_manager.go:215] "Topology Admit Handler" podUID="e9100c31-c40a-445f-a34a-7baba2b66f92" podNamespace="kube-system" podName="coredns-76f75df574-r94pj"
Oct 8 19:36:59.814599 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab-rootfs.mount: Deactivated successfully.
Oct 8 19:36:59.819670 kubelet[3384]: I1008 19:36:59.817404 3384 topology_manager.go:215] "Topology Admit Handler" podUID="4493d8e3-881f-490b-a947-afdd7a012606" podNamespace="kube-system" podName="coredns-76f75df574-d6zdk"
Oct 8 19:36:59.827265 kubelet[3384]: I1008 19:36:59.826193 3384 topology_manager.go:215] "Topology Admit Handler" podUID="703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6" podNamespace="calico-system" podName="calico-kube-controllers-b648c64d4-8r5k6"
Oct 8 19:36:59.843213 systemd[1]: Created slice kubepods-burstable-pode9100c31_c40a_445f_a34a_7baba2b66f92.slice - libcontainer container kubepods-burstable-pode9100c31_c40a_445f_a34a_7baba2b66f92.slice.
Oct 8 19:36:59.869442 kubelet[3384]: I1008 19:36:59.865994 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b88sq\" (UniqueName: \"kubernetes.io/projected/e9100c31-c40a-445f-a34a-7baba2b66f92-kube-api-access-b88sq\") pod \"coredns-76f75df574-r94pj\" (UID: \"e9100c31-c40a-445f-a34a-7baba2b66f92\") " pod="kube-system/coredns-76f75df574-r94pj"
Oct 8 19:36:59.869442 kubelet[3384]: I1008 19:36:59.866074 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6-tigera-ca-bundle\") pod \"calico-kube-controllers-b648c64d4-8r5k6\" (UID: \"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6\") " pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6"
Oct 8 19:36:59.869442 kubelet[3384]: I1008 19:36:59.866123 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9100c31-c40a-445f-a34a-7baba2b66f92-config-volume\") pod \"coredns-76f75df574-r94pj\" (UID: \"e9100c31-c40a-445f-a34a-7baba2b66f92\") " pod="kube-system/coredns-76f75df574-r94pj"
Oct 8 19:36:59.869442 kubelet[3384]: I1008 19:36:59.866175 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4493d8e3-881f-490b-a947-afdd7a012606-config-volume\") pod \"coredns-76f75df574-d6zdk\" (UID: \"4493d8e3-881f-490b-a947-afdd7a012606\") " pod="kube-system/coredns-76f75df574-d6zdk"
Oct 8 19:36:59.869442 kubelet[3384]: I1008 19:36:59.866247 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rh2t\" (UniqueName: \"kubernetes.io/projected/4493d8e3-881f-490b-a947-afdd7a012606-kube-api-access-4rh2t\") pod \"coredns-76f75df574-d6zdk\" (UID: \"4493d8e3-881f-490b-a947-afdd7a012606\") " pod="kube-system/coredns-76f75df574-d6zdk"
Oct 8 19:36:59.870111 kubelet[3384]: I1008 19:36:59.866350 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmzpk\" (UniqueName: \"kubernetes.io/projected/703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6-kube-api-access-pmzpk\") pod \"calico-kube-controllers-b648c64d4-8r5k6\" (UID: \"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6\") " pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6"
Oct 8 19:36:59.872381 systemd[1]: Created slice kubepods-burstable-pod4493d8e3_881f_490b_a947_afdd7a012606.slice - libcontainer container kubepods-burstable-pod4493d8e3_881f_490b_a947_afdd7a012606.slice.
Oct 8 19:36:59.895049 systemd[1]: Created slice kubepods-besteffort-pod703d7ab4_2f87_4a4b_af2a_4f1fa691c2b6.slice - libcontainer container kubepods-besteffort-pod703d7ab4_2f87_4a4b_af2a_4f1fa691c2b6.slice.
Oct 8 19:37:00.157169 containerd[1937]: time="2024-10-08T19:37:00.156944116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r94pj,Uid:e9100c31-c40a-445f-a34a-7baba2b66f92,Namespace:kube-system,Attempt:0,}"
Oct 8 19:37:00.185461 containerd[1937]: time="2024-10-08T19:37:00.185381440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d6zdk,Uid:4493d8e3-881f-490b-a947-afdd7a012606,Namespace:kube-system,Attempt:0,}"
Oct 8 19:37:00.209163 containerd[1937]: time="2024-10-08T19:37:00.209021512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b648c64d4-8r5k6,Uid:703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6,Namespace:calico-system,Attempt:0,}"
Oct 8 19:37:00.757968 systemd[1]: Created slice kubepods-besteffort-pod6d4c90a1_3e00_4c0b_8eae_414c05c0dfb6.slice - libcontainer container kubepods-besteffort-pod6d4c90a1_3e00_4c0b_8eae_414c05c0dfb6.slice.
Oct 8 19:37:00.765334 containerd[1937]: time="2024-10-08T19:37:00.765272431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xd4p,Uid:6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6,Namespace:calico-system,Attempt:0,}"
Oct 8 19:37:01.506507 containerd[1937]: time="2024-10-08T19:37:01.506418607Z" level=info msg="shim disconnected" id=03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab namespace=k8s.io
Oct 8 19:37:01.506507 containerd[1937]: time="2024-10-08T19:37:01.506499451Z" level=warning msg="cleaning up after shim disconnected" id=03b76a7240eee92b86462c89ca299f79c9cc9c7a087b2dd4a6ea0cbd6e4810ab namespace=k8s.io
Oct 8 19:37:01.509008 containerd[1937]: time="2024-10-08T19:37:01.506521819Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:37:01.590335 containerd[1937]: time="2024-10-08T19:37:01.589935091Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:37:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Oct 8 19:37:01.714728 containerd[1937]: time="2024-10-08T19:37:01.714639128Z" level=error msg="Failed to destroy network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.715713 containerd[1937]: time="2024-10-08T19:37:01.715465604Z" level=error msg="encountered an error cleaning up failed sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.715713 containerd[1937]: time="2024-10-08T19:37:01.715581836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d6zdk,Uid:4493d8e3-881f-490b-a947-afdd7a012606,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.717344 kubelet[3384]: E1008 19:37:01.716134 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.717344 kubelet[3384]: E1008 19:37:01.716245 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d6zdk"
Oct 8 19:37:01.717344 kubelet[3384]: E1008 19:37:01.716288 3384 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-d6zdk"
Oct 8 19:37:01.718002 kubelet[3384]: E1008 19:37:01.716383 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-d6zdk_kube-system(4493d8e3-881f-490b-a947-afdd7a012606)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-d6zdk_kube-system(4493d8e3-881f-490b-a947-afdd7a012606)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d6zdk" podUID="4493d8e3-881f-490b-a947-afdd7a012606"
Oct 8 19:37:01.763821 containerd[1937]: time="2024-10-08T19:37:01.762020144Z" level=error msg="Failed to destroy network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.763821 containerd[1937]: time="2024-10-08T19:37:01.762816128Z" level=error msg="encountered an error cleaning up failed sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.763821 containerd[1937]: time="2024-10-08T19:37:01.762977300Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xd4p,Uid:6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.764695 kubelet[3384]: E1008 19:37:01.764384 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.764695 kubelet[3384]: E1008 19:37:01.764477 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xd4p"
Oct 8 19:37:01.764695 kubelet[3384]: E1008 19:37:01.764516 3384 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2xd4p"
Oct 8 19:37:01.766713 kubelet[3384]: E1008 19:37:01.764613 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2xd4p_calico-system(6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2xd4p_calico-system(6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6"
Oct 8 19:37:01.778264 containerd[1937]: time="2024-10-08T19:37:01.777454688Z" level=error msg="Failed to destroy network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.779534 containerd[1937]: time="2024-10-08T19:37:01.779185616Z" level=error msg="encountered an error cleaning up failed sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.779534 containerd[1937]: time="2024-10-08T19:37:01.779377076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r94pj,Uid:e9100c31-c40a-445f-a34a-7baba2b66f92,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.781519 kubelet[3384]: E1008 19:37:01.780390 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.781519 kubelet[3384]: E1008 19:37:01.780509 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-r94pj"
Oct 8 19:37:01.781519 kubelet[3384]: E1008 19:37:01.780550 3384 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-r94pj"
Oct 8 19:37:01.781912 kubelet[3384]: E1008 19:37:01.780660 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-r94pj_kube-system(e9100c31-c40a-445f-a34a-7baba2b66f92)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-r94pj_kube-system(e9100c31-c40a-445f-a34a-7baba2b66f92)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-r94pj" podUID="e9100c31-c40a-445f-a34a-7baba2b66f92"
Oct 8 19:37:01.784341 containerd[1937]: time="2024-10-08T19:37:01.783861164Z" level=error msg="Failed to destroy network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.785913 containerd[1937]: time="2024-10-08T19:37:01.785795156Z" level=error msg="encountered an error cleaning up failed sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.786065 containerd[1937]: time="2024-10-08T19:37:01.785931704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b648c64d4-8r5k6,Uid:703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.786776 kubelet[3384]: E1008 19:37:01.786628 3384 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:01.786776 kubelet[3384]: E1008 19:37:01.786769 3384 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6"
Oct 8 19:37:01.787326 kubelet[3384]: E1008 19:37:01.786814 3384 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6"
Oct 8 19:37:01.787326 kubelet[3384]: E1008 19:37:01.786964 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b648c64d4-8r5k6_calico-system(703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b648c64d4-8r5k6_calico-system(703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6" podUID="703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6"
Oct 8 19:37:02.024609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d-shm.mount: Deactivated successfully.
Oct 8 19:37:02.024767 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87-shm.mount: Deactivated successfully.
Oct 8 19:37:02.024896 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a-shm.mount: Deactivated successfully.
Oct 8 19:37:02.076613 kubelet[3384]: I1008 19:37:02.075802 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:02.080039 containerd[1937]: time="2024-10-08T19:37:02.079098185Z" level=info msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\""
Oct 8 19:37:02.080039 containerd[1937]: time="2024-10-08T19:37:02.079458617Z" level=info msg="Ensure that sandbox 11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87 in task-service has been cleanup successfully"
Oct 8 19:37:02.085760 containerd[1937]: time="2024-10-08T19:37:02.085510817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 8 19:37:02.098324 kubelet[3384]: I1008 19:37:02.097081 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a"
Oct 8 19:37:02.100824 containerd[1937]: time="2024-10-08T19:37:02.100533594Z" level=info msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\""
Oct 8 19:37:02.100992 containerd[1937]: time="2024-10-08T19:37:02.100939902Z" level=info msg="Ensure that sandbox 1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a in task-service has been cleanup successfully"
Oct 8 19:37:02.113809 kubelet[3384]: I1008 19:37:02.110188 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d"
Oct 8 19:37:02.115152 containerd[1937]: time="2024-10-08T19:37:02.112194402Z" level=info msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\""
Oct 8 19:37:02.118054 containerd[1937]: time="2024-10-08T19:37:02.117992298Z" level=info msg="Ensure that sandbox 18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d in task-service has been cleanup successfully"
Oct 8 19:37:02.152960 kubelet[3384]: I1008 19:37:02.152920 3384 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:02.156165 containerd[1937]: time="2024-10-08T19:37:02.156037062Z" level=info msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\""
Oct 8 19:37:02.160168 containerd[1937]: time="2024-10-08T19:37:02.159077934Z" level=info msg="Ensure that sandbox 8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508 in task-service has been cleanup successfully"
Oct 8 19:37:02.244661 containerd[1937]: time="2024-10-08T19:37:02.244581162Z" level=error msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" failed" error="failed to destroy network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:02.246298 kubelet[3384]: E1008 19:37:02.246170 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:02.246520 kubelet[3384]: E1008 19:37:02.246339 3384 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"}
Oct 8 19:37:02.246520 kubelet[3384]: E1008 19:37:02.246469 3384 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9100c31-c40a-445f-a34a-7baba2b66f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:37:02.246721 kubelet[3384]: E1008 19:37:02.246563 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9100c31-c40a-445f-a34a-7baba2b66f92\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-r94pj" podUID="e9100c31-c40a-445f-a34a-7baba2b66f92"
Oct 8 19:37:02.259389 containerd[1937]: time="2024-10-08T19:37:02.259321110Z" level=error msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" failed" error="failed to destroy network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:02.260296 kubelet[3384]: E1008 19:37:02.260163 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:02.260536 kubelet[3384]: E1008 19:37:02.260325 3384 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"}
Oct 8 19:37:02.260536 kubelet[3384]: E1008 19:37:02.260410 3384 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Oct 8 19:37:02.260536 kubelet[3384]: E1008 19:37:02.260486 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2xd4p" podUID="6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6"
Oct 8 19:37:02.266779 containerd[1937]: time="2024-10-08T19:37:02.266711814Z" level=error msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" failed" error="failed to destroy network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:02.267253 containerd[1937]: time="2024-10-08T19:37:02.266997114Z" level=error msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" failed" error="failed to destroy network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:37:02.267601 kubelet[3384]: E1008 19:37:02.267555 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d"
Oct 8 19:37:02.267760 kubelet[3384]: E1008 19:37:02.267660 3384 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d"}
Oct 8 19:37:02.267760 kubelet[3384]: E1008 19:37:02.267736 3384 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy
network for sandbox \\\"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:02.267992 kubelet[3384]: E1008 19:37:02.267805 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6" podUID="703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6" Oct 8 19:37:02.267992 kubelet[3384]: E1008 19:37:02.267930 3384 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:02.267992 kubelet[3384]: E1008 19:37:02.267989 3384 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a"} Oct 8 19:37:02.268383 kubelet[3384]: E1008 19:37:02.268073 3384 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4493d8e3-881f-490b-a947-afdd7a012606\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:37:02.268383 kubelet[3384]: E1008 19:37:02.268151 3384 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4493d8e3-881f-490b-a947-afdd7a012606\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-d6zdk" podUID="4493d8e3-881f-490b-a947-afdd7a012606" Oct 8 19:37:07.263790 systemd[1]: Started sshd@9-172.31.27.200:22-139.178.68.195:60902.service - OpenSSH per-connection server daemon (139.178.68.195:60902). Oct 8 19:37:07.498835 sshd[4668]: Accepted publickey for core from 139.178.68.195 port 60902 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:07.508344 sshd[4668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:07.533707 systemd-logind[1903]: New session 10 of user core. Oct 8 19:37:07.541405 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 8 19:37:07.872565 sshd[4668]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:07.879823 systemd[1]: sshd@9-172.31.27.200:22-139.178.68.195:60902.service: Deactivated successfully. Oct 8 19:37:07.884615 systemd[1]: session-10.scope: Deactivated successfully. Oct 8 19:37:07.889804 systemd-logind[1903]: Session 10 logged out. Waiting for processes to exit. Oct 8 19:37:07.893872 systemd-logind[1903]: Removed session 10. 
Oct 8 19:37:09.312075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115314756.mount: Deactivated successfully. Oct 8 19:37:09.466128 containerd[1937]: time="2024-10-08T19:37:09.466030598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:09.479598 containerd[1937]: time="2024-10-08T19:37:09.479495882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:37:09.489571 containerd[1937]: time="2024-10-08T19:37:09.489501542Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:09.544703 containerd[1937]: time="2024-10-08T19:37:09.544575734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:09.549382 containerd[1937]: time="2024-10-08T19:37:09.548505423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 7.462921178s" Oct 8 19:37:09.549382 containerd[1937]: time="2024-10-08T19:37:09.549152055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:37:09.591433 containerd[1937]: time="2024-10-08T19:37:09.590492475Z" level=info msg="CreateContainer within sandbox \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:37:09.674374 containerd[1937]: time="2024-10-08T19:37:09.674119635Z" level=info msg="CreateContainer within sandbox \"64f2806e3a72dca526590e18d634d8942515c39d625f4693231b28d1ae70552e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef\"" Oct 8 19:37:09.675780 containerd[1937]: time="2024-10-08T19:37:09.675384123Z" level=info msg="StartContainer for \"c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef\"" Oct 8 19:37:09.729173 systemd[1]: Started cri-containerd-c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef.scope - libcontainer container c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef. Oct 8 19:37:09.825815 containerd[1937]: time="2024-10-08T19:37:09.825732532Z" level=info msg="StartContainer for \"c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef\" returns successfully" Oct 8 19:37:10.064244 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:37:10.064468 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 8 19:37:11.238467 systemd[1]: run-containerd-runc-k8s.io-c6d7881e7916814d216e5155dbd64a41d9825d7edbe3d2eef23502c70317aeef-runc.YEsdG8.mount: Deactivated successfully. Oct 8 19:37:12.549058 kernel: bpftool[4912]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:37:12.904053 systemd-networkd[1821]: vxlan.calico: Link UP Oct 8 19:37:12.904075 systemd-networkd[1821]: vxlan.calico: Gained carrier Oct 8 19:37:12.905522 (udev-worker)[4722]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:37:12.923596 systemd[1]: Started sshd@10-172.31.27.200:22-139.178.68.195:53302.service - OpenSSH per-connection server daemon (139.178.68.195:53302). Oct 8 19:37:12.973724 (udev-worker)[4723]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:37:12.985498 (udev-worker)[4945]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:37:13.174843 sshd[4938]: Accepted publickey for core from 139.178.68.195 port 53302 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:13.179205 sshd[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:13.187127 systemd-logind[1903]: New session 11 of user core. Oct 8 19:37:13.196473 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 8 19:37:13.519539 sshd[4938]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:13.526847 systemd[1]: sshd@10-172.31.27.200:22-139.178.68.195:53302.service: Deactivated successfully. Oct 8 19:37:13.532097 systemd[1]: session-11.scope: Deactivated successfully. Oct 8 19:37:13.533994 systemd-logind[1903]: Session 11 logged out. Waiting for processes to exit. Oct 8 19:37:13.538689 systemd-logind[1903]: Removed session 11. Oct 8 19:37:13.746470 containerd[1937]: time="2024-10-08T19:37:13.746407879Z" level=info msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\"" Oct 8 19:37:13.876609 kubelet[3384]: I1008 19:37:13.876446 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-89wvx" podStartSLOduration=7.350404513 podStartE2EDuration="23.87637604s" podCreationTimestamp="2024-10-08 19:36:50 +0000 UTC" firstStartedPulling="2024-10-08 19:36:53.024008132 +0000 UTC m=+30.584413364" lastFinishedPulling="2024-10-08 19:37:09.549979635 +0000 UTC m=+47.110384891" observedRunningTime="2024-10-08 19:37:10.231189902 +0000 UTC m=+47.791595158" watchObservedRunningTime="2024-10-08 19:37:13.87637604 +0000 UTC m=+51.436781284" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.880 [INFO][5008] k8s.go 608: Cleaning up netns ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 
19:37:13.881 [INFO][5008] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" iface="eth0" netns="/var/run/netns/cni-63aedbdc-8625-a2db-3b7d-0074f393c050" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.882 [INFO][5008] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" iface="eth0" netns="/var/run/netns/cni-63aedbdc-8625-a2db-3b7d-0074f393c050" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.883 [INFO][5008] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" iface="eth0" netns="/var/run/netns/cni-63aedbdc-8625-a2db-3b7d-0074f393c050" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.883 [INFO][5008] k8s.go 615: Releasing IP address(es) ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.883 [INFO][5008] utils.go 188: Calico CNI releasing IP address ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.951 [INFO][5016] ipam_plugin.go 417: Releasing address using handleID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.951 [INFO][5016] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.951 [INFO][5016] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.967 [WARNING][5016] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.968 [INFO][5016] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.971 [INFO][5016] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:13.979609 containerd[1937]: 2024-10-08 19:37:13.976 [INFO][5008] k8s.go 621: Teardown processing complete. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:13.982526 containerd[1937]: time="2024-10-08T19:37:13.982371957Z" level=info msg="TearDown network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" successfully" Oct 8 19:37:13.982526 containerd[1937]: time="2024-10-08T19:37:13.982420293Z" level=info msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" returns successfully" Oct 8 19:37:13.987011 systemd[1]: run-netns-cni\x2d63aedbdc\x2d8625\x2da2db\x2d3b7d\x2d0074f393c050.mount: Deactivated successfully. 
Oct 8 19:37:13.995428 containerd[1937]: time="2024-10-08T19:37:13.995309649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d6zdk,Uid:4493d8e3-881f-490b-a947-afdd7a012606,Namespace:kube-system,Attempt:1,}" Oct 8 19:37:14.254159 systemd-networkd[1821]: calicc8f67ad98f: Link UP Oct 8 19:37:14.256265 systemd-networkd[1821]: calicc8f67ad98f: Gained carrier Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.099 [INFO][5023] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0 coredns-76f75df574- kube-system 4493d8e3-881f-490b-a947-afdd7a012606 834 0 2024-10-08 19:36:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-200 coredns-76f75df574-d6zdk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicc8f67ad98f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.099 [INFO][5023] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.157 [INFO][5035] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" HandleID="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" 
Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.175 [INFO][5035] ipam_plugin.go 270: Auto assigning IP ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" HandleID="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400026c270), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-200", "pod":"coredns-76f75df574-d6zdk", "timestamp":"2024-10-08 19:37:14.157643297 +0000 UTC"}, Hostname:"ip-172-31-27-200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.175 [INFO][5035] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.175 [INFO][5035] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.175 [INFO][5035] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-200' Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.178 [INFO][5035] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.185 [INFO][5035] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.192 [INFO][5035] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.195 [INFO][5035] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.199 [INFO][5035] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.199 [INFO][5035] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.202 [INFO][5035] ipam.go 1685: Creating new handle: k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032 Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.216 [INFO][5035] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.231 [INFO][5035] ipam.go 1216: Successfully claimed IPs: [192.168.40.1/26] block=192.168.40.0/26 
handle="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.231 [INFO][5035] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.1/26] handle="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" host="ip-172-31-27-200" Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.231 [INFO][5035] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:14.279724 containerd[1937]: 2024-10-08 19:37:14.231 [INFO][5035] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.40.1/26] IPv6=[] ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" HandleID="k8s-pod-network.4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.235 [INFO][5023] k8s.go 386: Populated endpoint ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4493d8e3-881f-490b-a947-afdd7a012606", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"", Pod:"coredns-76f75df574-d6zdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc8f67ad98f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.235 [INFO][5023] k8s.go 387: Calico CNI using IPs: [192.168.40.1/32] ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.235 [INFO][5023] dataplane_linux.go 68: Setting the host side veth name to calicc8f67ad98f ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.243 [INFO][5023] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" 
WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.244 [INFO][5023] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4493d8e3-881f-490b-a947-afdd7a012606", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032", Pod:"coredns-76f75df574-d6zdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc8f67ad98f", MAC:"7e:bf:8f:3f:a3:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:14.284189 containerd[1937]: 2024-10-08 19:37:14.269 [INFO][5023] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032" Namespace="kube-system" Pod="coredns-76f75df574-d6zdk" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:14.331158 containerd[1937]: time="2024-10-08T19:37:14.330828438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:14.331727 containerd[1937]: time="2024-10-08T19:37:14.331104510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:14.332350 containerd[1937]: time="2024-10-08T19:37:14.331887930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:14.332504 containerd[1937]: time="2024-10-08T19:37:14.332146278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:14.387630 systemd[1]: Started cri-containerd-4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032.scope - libcontainer container 4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032. 
Oct 8 19:37:14.464049 containerd[1937]: time="2024-10-08T19:37:14.463597507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-d6zdk,Uid:4493d8e3-881f-490b-a947-afdd7a012606,Namespace:kube-system,Attempt:1,} returns sandbox id \"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032\"" Oct 8 19:37:14.470540 containerd[1937]: time="2024-10-08T19:37:14.470490715Z" level=info msg="CreateContainer within sandbox \"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:37:14.492380 containerd[1937]: time="2024-10-08T19:37:14.492239599Z" level=info msg="CreateContainer within sandbox \"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3f16c2bcf75360a03237f83dc30d34b47f1a2674763cbc75f5a4dc21453270d\"" Oct 8 19:37:14.493532 containerd[1937]: time="2024-10-08T19:37:14.493247743Z" level=info msg="StartContainer for \"c3f16c2bcf75360a03237f83dc30d34b47f1a2674763cbc75f5a4dc21453270d\"" Oct 8 19:37:14.540977 systemd[1]: Started cri-containerd-c3f16c2bcf75360a03237f83dc30d34b47f1a2674763cbc75f5a4dc21453270d.scope - libcontainer container c3f16c2bcf75360a03237f83dc30d34b47f1a2674763cbc75f5a4dc21453270d. 
Oct 8 19:37:14.598734 systemd-networkd[1821]: vxlan.calico: Gained IPv6LL
Oct 8 19:37:14.625656 containerd[1937]: time="2024-10-08T19:37:14.625551536Z" level=info msg="StartContainer for \"c3f16c2bcf75360a03237f83dc30d34b47f1a2674763cbc75f5a4dc21453270d\" returns successfully"
Oct 8 19:37:14.745353 containerd[1937]: time="2024-10-08T19:37:14.744995684Z" level=info msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\""
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.833 [INFO][5144] k8s.go 608: Cleaning up netns ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.834 [INFO][5144] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" iface="eth0" netns="/var/run/netns/cni-48cada76-1631-80c2-573f-debacf2382be"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.837 [INFO][5144] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" iface="eth0" netns="/var/run/netns/cni-48cada76-1631-80c2-573f-debacf2382be"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.838 [INFO][5144] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" iface="eth0" netns="/var/run/netns/cni-48cada76-1631-80c2-573f-debacf2382be"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.838 [INFO][5144] k8s.go 615: Releasing IP address(es) ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.838 [INFO][5144] utils.go 188: Calico CNI releasing IP address ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.878 [INFO][5150] ipam_plugin.go 417: Releasing address using handleID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.879 [INFO][5150] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.879 [INFO][5150] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.891 [WARNING][5150] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.891 [INFO][5150] ipam_plugin.go 445: Releasing address using workloadID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.894 [INFO][5150] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:37:14.901615 containerd[1937]: 2024-10-08 19:37:14.899 [INFO][5144] k8s.go 621: Teardown processing complete. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87"
Oct 8 19:37:14.904992 containerd[1937]: time="2024-10-08T19:37:14.901992081Z" level=info msg="TearDown network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" successfully"
Oct 8 19:37:14.904992 containerd[1937]: time="2024-10-08T19:37:14.902034333Z" level=info msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" returns successfully"
Oct 8 19:37:14.904992 containerd[1937]: time="2024-10-08T19:37:14.903008925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r94pj,Uid:e9100c31-c40a-445f-a34a-7baba2b66f92,Namespace:kube-system,Attempt:1,}"
Oct 8 19:37:14.994296 systemd[1]: run-netns-cni\x2d48cada76\x2d1631\x2d80c2\x2d573f\x2ddebacf2382be.mount: Deactivated successfully.
Oct 8 19:37:15.163621 systemd-networkd[1821]: cali1e8c926f5b8: Link UP
Oct 8 19:37:15.168411 systemd-networkd[1821]: cali1e8c926f5b8: Gained carrier
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:14.998 [INFO][5157] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0 coredns-76f75df574- kube-system e9100c31-c40a-445f-a34a-7baba2b66f92 848 0 2024-10-08 19:36:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-27-200 coredns-76f75df574-r94pj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e8c926f5b8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:14.999 [INFO][5157] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.050 [INFO][5167] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" HandleID="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.072 [INFO][5167] ipam_plugin.go 270: Auto assigning IP HandleID="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000289c20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-27-200", "pod":"coredns-76f75df574-r94pj", "timestamp":"2024-10-08 19:37:15.050132922 +0000 UTC"}, Hostname:"ip-172-31-27-200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.073 [INFO][5167] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.073 [INFO][5167] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.073 [INFO][5167] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-200'
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.078 [INFO][5167] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.103 [INFO][5167] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.119 [INFO][5167] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.122 [INFO][5167] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.129 [INFO][5167] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.129 [INFO][5167] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.133 [INFO][5167] ipam.go 1685: Creating new handle: k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.142 [INFO][5167] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.153 [INFO][5167] ipam.go 1216: Successfully claimed IPs: [192.168.40.2/26] block=192.168.40.0/26 handle="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.153 [INFO][5167] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.2/26] handle="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" host="ip-172-31-27-200"
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.153 [INFO][5167] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:37:15.204882 containerd[1937]: 2024-10-08 19:37:15.154 [INFO][5167] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.40.2/26] IPv6=[] ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" HandleID="k8s-pod-network.1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.157 [INFO][5157] k8s.go 386: Populated endpoint ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9100c31-c40a-445f-a34a-7baba2b66f92", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"", Pod:"coredns-76f75df574-r94pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e8c926f5b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.158 [INFO][5157] k8s.go 387: Calico CNI using IPs: [192.168.40.2/32] ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.158 [INFO][5157] dataplane_linux.go 68: Setting the host side veth name to cali1e8c926f5b8 ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.166 [INFO][5157] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.169 [INFO][5157] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9100c31-c40a-445f-a34a-7baba2b66f92", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be", Pod:"coredns-76f75df574-r94pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e8c926f5b8", MAC:"76:1e:9e:32:73:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:37:15.209686 containerd[1937]: 2024-10-08 19:37:15.199 [INFO][5157] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be" Namespace="kube-system" Pod="coredns-76f75df574-r94pj" WorkloadEndpoint="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0"
Oct 8 19:37:15.281498 containerd[1937]: time="2024-10-08T19:37:15.281288575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:37:15.285471 containerd[1937]: time="2024-10-08T19:37:15.283608991Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:37:15.285471 containerd[1937]: time="2024-10-08T19:37:15.283709083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:15.285471 containerd[1937]: time="2024-10-08T19:37:15.284057923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:15.338557 kubelet[3384]: I1008 19:37:15.336516 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-d6zdk" podStartSLOduration=40.336455671 podStartE2EDuration="40.336455671s" podCreationTimestamp="2024-10-08 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:15.271731211 +0000 UTC m=+52.832136551" watchObservedRunningTime="2024-10-08 19:37:15.336455671 +0000 UTC m=+52.896860939"
Oct 8 19:37:15.373568 systemd[1]: Started cri-containerd-1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be.scope - libcontainer container 1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be.
Oct 8 19:37:15.494658 systemd-networkd[1821]: calicc8f67ad98f: Gained IPv6LL
Oct 8 19:37:15.501802 containerd[1937]: time="2024-10-08T19:37:15.501154136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r94pj,Uid:e9100c31-c40a-445f-a34a-7baba2b66f92,Namespace:kube-system,Attempt:1,} returns sandbox id \"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be\""
Oct 8 19:37:15.515012 containerd[1937]: time="2024-10-08T19:37:15.514853528Z" level=info msg="CreateContainer within sandbox \"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Oct 8 19:37:15.573089 containerd[1937]: time="2024-10-08T19:37:15.571921688Z" level=info msg="CreateContainer within sandbox \"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0942a445d460255699d3505f2667f37fecbd099875597edcb760a009328d1f63\""
Oct 8 19:37:15.574132 containerd[1937]: time="2024-10-08T19:37:15.573576116Z" level=info msg="StartContainer for \"0942a445d460255699d3505f2667f37fecbd099875597edcb760a009328d1f63\""
Oct 8 19:37:15.644795 systemd[1]: Started cri-containerd-0942a445d460255699d3505f2667f37fecbd099875597edcb760a009328d1f63.scope - libcontainer container 0942a445d460255699d3505f2667f37fecbd099875597edcb760a009328d1f63.
Oct 8 19:37:15.706895 containerd[1937]: time="2024-10-08T19:37:15.706813401Z" level=info msg="StartContainer for \"0942a445d460255699d3505f2667f37fecbd099875597edcb760a009328d1f63\" returns successfully"
Oct 8 19:37:15.744161 containerd[1937]: time="2024-10-08T19:37:15.744084405Z" level=info msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\""
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.861 [INFO][5282] k8s.go 608: Cleaning up netns ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.862 [INFO][5282] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" iface="eth0" netns="/var/run/netns/cni-af2c56df-fc99-ab86-3f0e-16c7fb292092"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.863 [INFO][5282] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" iface="eth0" netns="/var/run/netns/cni-af2c56df-fc99-ab86-3f0e-16c7fb292092"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.863 [INFO][5282] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" iface="eth0" netns="/var/run/netns/cni-af2c56df-fc99-ab86-3f0e-16c7fb292092"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.865 [INFO][5282] k8s.go 615: Releasing IP address(es) ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.865 [INFO][5282] utils.go 188: Calico CNI releasing IP address ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.910 [INFO][5291] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.910 [INFO][5291] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.910 [INFO][5291] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.924 [WARNING][5291] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.924 [INFO][5291] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.928 [INFO][5291] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:37:15.933615 containerd[1937]: 2024-10-08 19:37:15.930 [INFO][5282] k8s.go 621: Teardown processing complete. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508"
Oct 8 19:37:15.935527 containerd[1937]: time="2024-10-08T19:37:15.934552594Z" level=info msg="TearDown network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" successfully"
Oct 8 19:37:15.935527 containerd[1937]: time="2024-10-08T19:37:15.934645138Z" level=info msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" returns successfully"
Oct 8 19:37:15.936190 containerd[1937]: time="2024-10-08T19:37:15.936133954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xd4p,Uid:6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6,Namespace:calico-system,Attempt:1,}"
Oct 8 19:37:15.995391 systemd[1]: run-netns-cni\x2daf2c56df\x2dfc99\x2dab86\x2d3f0e\x2d16c7fb292092.mount: Deactivated successfully.
Oct 8 19:37:16.209873 systemd-networkd[1821]: calif76c03924c2: Link UP
Oct 8 19:37:16.211702 systemd-networkd[1821]: calif76c03924c2: Gained carrier
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.036 [INFO][5299] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0 csi-node-driver- calico-system 6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6 866 0 2024-10-08 19:36:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78cd84fb8c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-27-200 csi-node-driver-2xd4p eth0 default [] [] [kns.calico-system ksa.calico-system.default] calif76c03924c2 [] []}} ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.037 [INFO][5299] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.098 [INFO][5310] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" HandleID="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.121 [INFO][5310] ipam_plugin.go 270: Auto assigning IP ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" HandleID="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030a100), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-200", "pod":"csi-node-driver-2xd4p", "timestamp":"2024-10-08 19:37:16.098525479 +0000 UTC"}, Hostname:"ip-172-31-27-200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.121 [INFO][5310] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.121 [INFO][5310] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.122 [INFO][5310] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-200'
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.125 [INFO][5310] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.140 [INFO][5310] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.148 [INFO][5310] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.151 [INFO][5310] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.156 [INFO][5310] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.157 [INFO][5310] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.160 [INFO][5310] ipam.go 1685: Creating new handle: k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.167 [INFO][5310] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.191 [INFO][5310] ipam.go 1216: Successfully claimed IPs: [192.168.40.3/26] block=192.168.40.0/26 handle="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.191 [INFO][5310] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.3/26] handle="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" host="ip-172-31-27-200"
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.191 [INFO][5310] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:37:16.283074 containerd[1937]: 2024-10-08 19:37:16.191 [INFO][5310] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.40.3/26] IPv6=[] ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" HandleID="k8s-pod-network.04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.196 [INFO][5299] k8s.go 386: Populated endpoint ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"", Pod:"csi-node-driver-2xd4p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif76c03924c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.197 [INFO][5299] k8s.go 387: Calico CNI using IPs: [192.168.40.3/32] ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.197 [INFO][5299] dataplane_linux.go 68: Setting the host side veth name to calif76c03924c2 ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.209 [INFO][5299] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.214 [INFO][5299] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9", Pod:"csi-node-driver-2xd4p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif76c03924c2", MAC:"ca:04:a6:f0:65:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:37:16.284610 containerd[1937]: 2024-10-08 19:37:16.276 [INFO][5299] k8s.go 500: Wrote updated endpoint to datastore ContainerID="04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9" Namespace="calico-system" Pod="csi-node-driver-2xd4p" WorkloadEndpoint="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0"
Oct 8 19:37:16.359263 containerd[1937]: time="2024-10-08T19:37:16.358773044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:37:16.359263 containerd[1937]: time="2024-10-08T19:37:16.358899632Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:37:16.359263 containerd[1937]: time="2024-10-08T19:37:16.358937180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:16.359263 containerd[1937]: time="2024-10-08T19:37:16.359136032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:37:16.365711 kubelet[3384]: I1008 19:37:16.365662 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r94pj" podStartSLOduration=41.365606756 podStartE2EDuration="41.365606756s" podCreationTimestamp="2024-10-08 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:37:16.300556364 +0000 UTC m=+53.860961620" watchObservedRunningTime="2024-10-08 19:37:16.365606756 +0000 UTC m=+53.926012012"
Oct 8 19:37:16.485214 systemd[1]: Started cri-containerd-04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9.scope - libcontainer container 04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9.
Oct 8 19:37:16.569458 containerd[1937]: time="2024-10-08T19:37:16.568971105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2xd4p,Uid:6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6,Namespace:calico-system,Attempt:1,} returns sandbox id \"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9\"" Oct 8 19:37:16.579613 containerd[1937]: time="2024-10-08T19:37:16.579209349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:37:16.747451 containerd[1937]: time="2024-10-08T19:37:16.745622830Z" level=info msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\"" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.831 [INFO][5383] k8s.go 608: Cleaning up netns ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.833 [INFO][5383] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" iface="eth0" netns="/var/run/netns/cni-51e892ab-302f-4ba7-a3a3-bbd3b24f4d55" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.833 [INFO][5383] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" iface="eth0" netns="/var/run/netns/cni-51e892ab-302f-4ba7-a3a3-bbd3b24f4d55" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.834 [INFO][5383] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" iface="eth0" netns="/var/run/netns/cni-51e892ab-302f-4ba7-a3a3-bbd3b24f4d55" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.835 [INFO][5383] k8s.go 615: Releasing IP address(es) ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.835 [INFO][5383] utils.go 188: Calico CNI releasing IP address ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.876 [INFO][5389] ipam_plugin.go 417: Releasing address using handleID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.876 [INFO][5389] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.876 [INFO][5389] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.890 [WARNING][5389] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.890 [INFO][5389] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.894 [INFO][5389] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:16.901795 containerd[1937]: 2024-10-08 19:37:16.897 [INFO][5383] k8s.go 621: Teardown processing complete. ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:16.906557 systemd-networkd[1821]: cali1e8c926f5b8: Gained IPv6LL Oct 8 19:37:16.910333 containerd[1937]: time="2024-10-08T19:37:16.907354871Z" level=info msg="TearDown network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" successfully" Oct 8 19:37:16.910333 containerd[1937]: time="2024-10-08T19:37:16.907416239Z" level=info msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" returns successfully" Oct 8 19:37:16.912900 systemd[1]: run-netns-cni\x2d51e892ab\x2d302f\x2d4ba7\x2da3a3\x2dbbd3b24f4d55.mount: Deactivated successfully. 
Oct 8 19:37:16.917241 containerd[1937]: time="2024-10-08T19:37:16.916288751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b648c64d4-8r5k6,Uid:703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6,Namespace:calico-system,Attempt:1,}" Oct 8 19:37:17.168751 systemd-networkd[1821]: cali449b831b70b: Link UP Oct 8 19:37:17.170688 systemd-networkd[1821]: cali449b831b70b: Gained carrier Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.023 [INFO][5395] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0 calico-kube-controllers-b648c64d4- calico-system 703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6 884 0 2024-10-08 19:36:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b648c64d4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-27-200 calico-kube-controllers-b648c64d4-8r5k6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali449b831b70b [] []}} ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.023 [INFO][5395] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.083 [INFO][5406] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" HandleID="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.102 [INFO][5406] ipam_plugin.go 270: Auto assigning IP ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" HandleID="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033e340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-27-200", "pod":"calico-kube-controllers-b648c64d4-8r5k6", "timestamp":"2024-10-08 19:37:17.083586848 +0000 UTC"}, Hostname:"ip-172-31-27-200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.103 [INFO][5406] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.103 [INFO][5406] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.103 [INFO][5406] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-200' Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.106 [INFO][5406] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.116 [INFO][5406] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.128 [INFO][5406] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.131 [INFO][5406] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.135 [INFO][5406] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.135 [INFO][5406] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.138 [INFO][5406] ipam.go 1685: Creating new handle: k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300 Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.146 [INFO][5406] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.157 [INFO][5406] ipam.go 1216: Successfully claimed IPs: [192.168.40.4/26] block=192.168.40.0/26 
handle="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.157 [INFO][5406] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.4/26] handle="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" host="ip-172-31-27-200" Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.158 [INFO][5406] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:17.194536 containerd[1937]: 2024-10-08 19:37:17.158 [INFO][5406] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.40.4/26] IPv6=[] ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" HandleID="k8s-pod-network.b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.163 [INFO][5395] k8s.go 386: Populated endpoint ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0", GenerateName:"calico-kube-controllers-b648c64d4-", Namespace:"calico-system", SelfLink:"", UID:"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b648c64d4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"", Pod:"calico-kube-controllers-b648c64d4-8r5k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali449b831b70b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.163 [INFO][5395] k8s.go 387: Calico CNI using IPs: [192.168.40.4/32] ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.164 [INFO][5395] dataplane_linux.go 68: Setting the host side veth name to cali449b831b70b ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.170 [INFO][5395] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.171 [INFO][5395] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0", GenerateName:"calico-kube-controllers-b648c64d4-", Namespace:"calico-system", SelfLink:"", UID:"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b648c64d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300", Pod:"calico-kube-controllers-b648c64d4-8r5k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali449b831b70b", MAC:"76:39:c1:34:69:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:17.200424 containerd[1937]: 2024-10-08 19:37:17.189 [INFO][5395] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300" Namespace="calico-system" Pod="calico-kube-controllers-b648c64d4-8r5k6" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:17.244010 containerd[1937]: time="2024-10-08T19:37:17.241909881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:17.244010 containerd[1937]: time="2024-10-08T19:37:17.243558969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:17.244010 containerd[1937]: time="2024-10-08T19:37:17.243659229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:17.247563 containerd[1937]: time="2024-10-08T19:37:17.244369485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:17.320531 systemd[1]: Started cri-containerd-b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300.scope - libcontainer container b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300. 
Oct 8 19:37:17.416466 containerd[1937]: time="2024-10-08T19:37:17.416385346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b648c64d4-8r5k6,Uid:703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300\"" Oct 8 19:37:17.606656 systemd-networkd[1821]: calif76c03924c2: Gained IPv6LL Oct 8 19:37:18.246628 systemd-networkd[1821]: cali449b831b70b: Gained IPv6LL Oct 8 19:37:18.427630 containerd[1937]: time="2024-10-08T19:37:18.426602939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:18.430264 containerd[1937]: time="2024-10-08T19:37:18.430155851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:37:18.433481 containerd[1937]: time="2024-10-08T19:37:18.433420883Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:18.443947 containerd[1937]: time="2024-10-08T19:37:18.443871167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:18.450774 containerd[1937]: time="2024-10-08T19:37:18.447679943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.868379958s" Oct 8 19:37:18.450774 containerd[1937]: time="2024-10-08T19:37:18.447772091Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:37:18.453286 containerd[1937]: time="2024-10-08T19:37:18.452733731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:37:18.460145 containerd[1937]: time="2024-10-08T19:37:18.459900359Z" level=info msg="CreateContainer within sandbox \"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:37:18.510117 containerd[1937]: time="2024-10-08T19:37:18.508443179Z" level=info msg="CreateContainer within sandbox \"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"380220e2545adb84d12aafbd7d5f7260457a54fabe921c3c775ec374cd519b65\"" Oct 8 19:37:18.510690 containerd[1937]: time="2024-10-08T19:37:18.510648815Z" level=info msg="StartContainer for \"380220e2545adb84d12aafbd7d5f7260457a54fabe921c3c775ec374cd519b65\"" Oct 8 19:37:18.592421 systemd[1]: Started sshd@11-172.31.27.200:22-139.178.68.195:53310.service - OpenSSH per-connection server daemon (139.178.68.195:53310). Oct 8 19:37:18.652679 systemd[1]: Started cri-containerd-380220e2545adb84d12aafbd7d5f7260457a54fabe921c3c775ec374cd519b65.scope - libcontainer container 380220e2545adb84d12aafbd7d5f7260457a54fabe921c3c775ec374cd519b65. Oct 8 19:37:18.806671 containerd[1937]: time="2024-10-08T19:37:18.806464777Z" level=info msg="StartContainer for \"380220e2545adb84d12aafbd7d5f7260457a54fabe921c3c775ec374cd519b65\" returns successfully" Oct 8 19:37:18.822598 sshd[5496]: Accepted publickey for core from 139.178.68.195 port 53310 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:18.827660 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:18.846148 systemd-logind[1903]: New session 12 of user core. 
Oct 8 19:37:18.850882 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 8 19:37:19.205696 sshd[5496]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:19.217889 systemd[1]: sshd@11-172.31.27.200:22-139.178.68.195:53310.service: Deactivated successfully. Oct 8 19:37:19.225670 systemd[1]: session-12.scope: Deactivated successfully. Oct 8 19:37:19.230992 systemd-logind[1903]: Session 12 logged out. Waiting for processes to exit. Oct 8 19:37:19.265425 systemd[1]: Started sshd@12-172.31.27.200:22-139.178.68.195:53326.service - OpenSSH per-connection server daemon (139.178.68.195:53326). Oct 8 19:37:19.268430 systemd-logind[1903]: Removed session 12. Oct 8 19:37:19.467368 sshd[5532]: Accepted publickey for core from 139.178.68.195 port 53326 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:19.470696 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:19.483138 systemd-logind[1903]: New session 13 of user core. Oct 8 19:37:19.490904 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 8 19:37:20.019691 sshd[5532]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:20.034390 systemd[1]: sshd@12-172.31.27.200:22-139.178.68.195:53326.service: Deactivated successfully. Oct 8 19:37:20.045577 systemd[1]: session-13.scope: Deactivated successfully. Oct 8 19:37:20.065602 systemd-logind[1903]: Session 13 logged out. Waiting for processes to exit. Oct 8 19:37:20.075712 systemd[1]: Started sshd@13-172.31.27.200:22-139.178.68.195:53340.service - OpenSSH per-connection server daemon (139.178.68.195:53340). Oct 8 19:37:20.080640 systemd-logind[1903]: Removed session 13. 
Oct 8 19:37:20.292015 sshd[5543]: Accepted publickey for core from 139.178.68.195 port 53340 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:20.297301 sshd[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:20.314370 systemd-logind[1903]: New session 14 of user core. Oct 8 19:37:20.318985 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 8 19:37:20.499561 ntpd[1895]: Listen normally on 8 vxlan.calico 192.168.40.0:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 8 vxlan.calico 192.168.40.0:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 9 vxlan.calico [fe80::641a:cdff:fe3f:b5c1%4]:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 10 calicc8f67ad98f [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 11 cali1e8c926f5b8 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 12 calif76c03924c2 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:37:20.503391 ntpd[1895]: 8 Oct 19:37:20 ntpd[1895]: Listen normally on 13 cali449b831b70b [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:37:20.499737 ntpd[1895]: Listen normally on 9 vxlan.calico [fe80::641a:cdff:fe3f:b5c1%4]:123 Oct 8 19:37:20.499860 ntpd[1895]: Listen normally on 10 calicc8f67ad98f [fe80::ecee:eeff:feee:eeee%7]:123 Oct 8 19:37:20.499937 ntpd[1895]: Listen normally on 11 cali1e8c926f5b8 [fe80::ecee:eeff:feee:eeee%8]:123 Oct 8 19:37:20.500016 ntpd[1895]: Listen normally on 12 calif76c03924c2 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:37:20.500086 ntpd[1895]: Listen normally on 13 cali449b831b70b [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:37:20.653635 sshd[5543]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:20.662023 systemd[1]: sshd@13-172.31.27.200:22-139.178.68.195:53340.service: Deactivated 
successfully. Oct 8 19:37:20.667266 systemd[1]: session-14.scope: Deactivated successfully. Oct 8 19:37:20.669031 systemd-logind[1903]: Session 14 logged out. Waiting for processes to exit. Oct 8 19:37:20.672883 systemd-logind[1903]: Removed session 14. Oct 8 19:37:22.209421 containerd[1937]: time="2024-10-08T19:37:22.209182921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:22.212083 containerd[1937]: time="2024-10-08T19:37:22.211548133Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:37:22.214748 containerd[1937]: time="2024-10-08T19:37:22.214586641Z" level=info msg="ImageCreate event name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:22.224282 containerd[1937]: time="2024-10-08T19:37:22.222867097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:22.225015 containerd[1937]: time="2024-10-08T19:37:22.224929009Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 3.77211639s" Oct 8 19:37:22.225291 containerd[1937]: time="2024-10-08T19:37:22.225207085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:37:22.227536 
containerd[1937]: time="2024-10-08T19:37:22.227457457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:37:22.262446 containerd[1937]: time="2024-10-08T19:37:22.262379726Z" level=info msg="CreateContainer within sandbox \"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:37:22.303856 containerd[1937]: time="2024-10-08T19:37:22.299681342Z" level=info msg="CreateContainer within sandbox \"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a\"" Oct 8 19:37:22.304808 containerd[1937]: time="2024-10-08T19:37:22.304597658Z" level=info msg="StartContainer for \"f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a\"" Oct 8 19:37:22.400728 systemd[1]: Started cri-containerd-f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a.scope - libcontainer container f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a. Oct 8 19:37:22.540801 containerd[1937]: time="2024-10-08T19:37:22.540598455Z" level=info msg="StartContainer for \"f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a\" returns successfully" Oct 8 19:37:22.701071 containerd[1937]: time="2024-10-08T19:37:22.700342420Z" level=info msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\"" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.821 [WARNING][5631] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0", GenerateName:"calico-kube-controllers-b648c64d4-", Namespace:"calico-system", SelfLink:"", UID:"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b648c64d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300", Pod:"calico-kube-controllers-b648c64d4-8r5k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali449b831b70b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.822 [INFO][5631] k8s.go 608: Cleaning up netns ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.822 [INFO][5631] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" iface="eth0" netns="" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.822 [INFO][5631] k8s.go 615: Releasing IP address(es) ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.822 [INFO][5631] utils.go 188: Calico CNI releasing IP address ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.891 [INFO][5639] ipam_plugin.go 417: Releasing address using handleID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.891 [INFO][5639] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.891 [INFO][5639] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.909 [WARNING][5639] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.910 [INFO][5639] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.912 [INFO][5639] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:22.922128 containerd[1937]: 2024-10-08 19:37:22.916 [INFO][5631] k8s.go 621: Teardown processing complete. ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:22.923957 containerd[1937]: time="2024-10-08T19:37:22.922174817Z" level=info msg="TearDown network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" successfully" Oct 8 19:37:22.923957 containerd[1937]: time="2024-10-08T19:37:22.922211621Z" level=info msg="StopPodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" returns successfully" Oct 8 19:37:22.926469 containerd[1937]: time="2024-10-08T19:37:22.926407901Z" level=info msg="RemovePodSandbox for \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\"" Oct 8 19:37:22.926469 containerd[1937]: time="2024-10-08T19:37:22.926489237Z" level=info msg="Forcibly stopping sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\"" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.039 [WARNING][5658] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0", GenerateName:"calico-kube-controllers-b648c64d4-", Namespace:"calico-system", SelfLink:"", UID:"703d7ab4-2f87-4a4b-af2a-4f1fa691c2b6", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b648c64d4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"b0c155577496a98424d7311a534a0993e737bbc50246998e074b5b6dcac18300", Pod:"calico-kube-controllers-b648c64d4-8r5k6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.40.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali449b831b70b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.040 [INFO][5658] k8s.go 608: Cleaning up netns ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.040 [INFO][5658] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" iface="eth0" netns="" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.040 [INFO][5658] k8s.go 615: Releasing IP address(es) ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.040 [INFO][5658] utils.go 188: Calico CNI releasing IP address ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.096 [INFO][5664] ipam_plugin.go 417: Releasing address using handleID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.097 [INFO][5664] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.098 [INFO][5664] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.119 [WARNING][5664] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.119 [INFO][5664] ipam_plugin.go 445: Releasing address using workloadID ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" HandleID="k8s-pod-network.18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Workload="ip--172--31--27--200-k8s-calico--kube--controllers--b648c64d4--8r5k6-eth0" Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.125 [INFO][5664] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:23.133517 containerd[1937]: 2024-10-08 19:37:23.129 [INFO][5658] k8s.go 621: Teardown processing complete. ContainerID="18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d" Oct 8 19:37:23.135745 containerd[1937]: time="2024-10-08T19:37:23.134340434Z" level=info msg="TearDown network for sandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" successfully" Oct 8 19:37:23.150364 containerd[1937]: time="2024-10-08T19:37:23.149937050Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:23.150364 containerd[1937]: time="2024-10-08T19:37:23.150189062Z" level=info msg="RemovePodSandbox \"18d5c4db429a8634cb14b3c4ef11598cd2c58575e4cff08cdb0fa93480f1786d\" returns successfully" Oct 8 19:37:23.154561 containerd[1937]: time="2024-10-08T19:37:23.154479998Z" level=info msg="StopPodSandbox for \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\"" Oct 8 19:37:23.155125 containerd[1937]: time="2024-10-08T19:37:23.154692158Z" level=info msg="TearDown network for sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" successfully" Oct 8 19:37:23.155385 containerd[1937]: time="2024-10-08T19:37:23.155107250Z" level=info msg="StopPodSandbox for \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" returns successfully" Oct 8 19:37:23.158688 containerd[1937]: time="2024-10-08T19:37:23.158611022Z" level=info msg="RemovePodSandbox for \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\"" Oct 8 19:37:23.158688 containerd[1937]: time="2024-10-08T19:37:23.158681894Z" level=info msg="Forcibly stopping sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\"" Oct 8 19:37:23.160151 containerd[1937]: time="2024-10-08T19:37:23.159465074Z" level=info msg="TearDown network for sandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" successfully" Oct 8 19:37:23.183579 containerd[1937]: time="2024-10-08T19:37:23.183411674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:23.183579 containerd[1937]: time="2024-10-08T19:37:23.183512426Z" level=info msg="RemovePodSandbox \"3cca80d9b5999db20a649e6703f8dd13fd35cb9acb7b2b13ab214c6868ac9e3d\" returns successfully" Oct 8 19:37:23.186745 containerd[1937]: time="2024-10-08T19:37:23.186689318Z" level=info msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\"" Oct 8 19:37:23.508895 systemd[1]: run-containerd-runc-k8s.io-f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a-runc.yMe0gK.mount: Deactivated successfully. Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.305 [WARNING][5684] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4493d8e3-881f-490b-a947-afdd7a012606", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032", Pod:"coredns-76f75df574-d6zdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc8f67ad98f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.308 [INFO][5684] k8s.go 608: Cleaning up netns ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.309 [INFO][5684] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" iface="eth0" netns="" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.310 [INFO][5684] k8s.go 615: Releasing IP address(es) ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.310 [INFO][5684] utils.go 188: Calico CNI releasing IP address ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.519 [INFO][5691] ipam_plugin.go 417: Releasing address using handleID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.519 [INFO][5691] ipam_plugin.go 358: About to acquire host-wide IPAM lock. 
Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.519 [INFO][5691] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.548 [WARNING][5691] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.548 [INFO][5691] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.553 [INFO][5691] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:23.566632 containerd[1937]: 2024-10-08 19:37:23.564 [INFO][5684] k8s.go 621: Teardown processing complete. 
ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.568092 containerd[1937]: time="2024-10-08T19:37:23.566680204Z" level=info msg="TearDown network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" successfully" Oct 8 19:37:23.568092 containerd[1937]: time="2024-10-08T19:37:23.566721664Z" level=info msg="StopPodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" returns successfully" Oct 8 19:37:23.568092 containerd[1937]: time="2024-10-08T19:37:23.567599068Z" level=info msg="RemovePodSandbox for \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\"" Oct 8 19:37:23.568092 containerd[1937]: time="2024-10-08T19:37:23.567653224Z" level=info msg="Forcibly stopping sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\"" Oct 8 19:37:23.709952 kubelet[3384]: I1008 19:37:23.709893 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-b648c64d4-8r5k6" podStartSLOduration=30.902233886 podStartE2EDuration="35.709827449s" podCreationTimestamp="2024-10-08 19:36:48 +0000 UTC" firstStartedPulling="2024-10-08 19:37:17.419055514 +0000 UTC m=+54.979460758" lastFinishedPulling="2024-10-08 19:37:22.226649017 +0000 UTC m=+59.787054321" observedRunningTime="2024-10-08 19:37:23.413683575 +0000 UTC m=+60.974088843" watchObservedRunningTime="2024-10-08 19:37:23.709827449 +0000 UTC m=+61.270232705" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.688 [WARNING][5731] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"4493d8e3-881f-490b-a947-afdd7a012606", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"4fdbbcf39d60a2c8e10f83e796694dd5775a9755264f4461826d9531f7ad5032", Pod:"coredns-76f75df574-d6zdk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicc8f67ad98f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.689 [INFO][5731] k8s.go 608: Cleaning up 
netns ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.689 [INFO][5731] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" iface="eth0" netns="" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.689 [INFO][5731] k8s.go 615: Releasing IP address(es) ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.689 [INFO][5731] utils.go 188: Calico CNI releasing IP address ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.761 [INFO][5741] ipam_plugin.go 417: Releasing address using handleID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.762 [INFO][5741] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.762 [INFO][5741] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.781 [WARNING][5741] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.781 [INFO][5741] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" HandleID="k8s-pod-network.1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--d6zdk-eth0" Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.784 [INFO][5741] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:23.790824 containerd[1937]: 2024-10-08 19:37:23.788 [INFO][5731] k8s.go 621: Teardown processing complete. ContainerID="1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a" Oct 8 19:37:23.790824 containerd[1937]: time="2024-10-08T19:37:23.790763033Z" level=info msg="TearDown network for sandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" successfully" Oct 8 19:37:23.798366 containerd[1937]: time="2024-10-08T19:37:23.798148493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:23.798535 containerd[1937]: time="2024-10-08T19:37:23.798396701Z" level=info msg="RemovePodSandbox \"1537b7f1f3369e811c830e26ab740aba1a811bd9400f30d8944b125848be681a\" returns successfully" Oct 8 19:37:23.799889 containerd[1937]: time="2024-10-08T19:37:23.799525337Z" level=info msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\"" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.894 [WARNING][5764] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9", Pod:"csi-node-driver-2xd4p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"calif76c03924c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.896 [INFO][5764] k8s.go 608: Cleaning up netns ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.896 [INFO][5764] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" iface="eth0" netns="" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.896 [INFO][5764] k8s.go 615: Releasing IP address(es) ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.896 [INFO][5764] utils.go 188: Calico CNI releasing IP address ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.966 [INFO][5771] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.966 [INFO][5771] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.967 [INFO][5771] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.988 [WARNING][5771] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.988 [INFO][5771] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.995 [INFO][5771] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:24.001117 containerd[1937]: 2024-10-08 19:37:23.998 [INFO][5764] k8s.go 621: Teardown processing complete. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.005016 containerd[1937]: time="2024-10-08T19:37:24.001208306Z" level=info msg="TearDown network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" successfully" Oct 8 19:37:24.005016 containerd[1937]: time="2024-10-08T19:37:24.001302014Z" level=info msg="StopPodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" returns successfully" Oct 8 19:37:24.005715 containerd[1937]: time="2024-10-08T19:37:24.005641142Z" level=info msg="RemovePodSandbox for \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\"" Oct 8 19:37:24.005715 containerd[1937]: time="2024-10-08T19:37:24.005704946Z" level=info msg="Forcibly stopping sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\"" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.103 [WARNING][5789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6d4c90a1-3e00-4c0b-8eae-414c05c0dfb6", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78cd84fb8c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9", Pod:"csi-node-driver-2xd4p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.40.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calif76c03924c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.104 [INFO][5789] k8s.go 608: Cleaning up netns ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.104 [INFO][5789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" iface="eth0" netns="" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.104 [INFO][5789] k8s.go 615: Releasing IP address(es) ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.104 [INFO][5789] utils.go 188: Calico CNI releasing IP address ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.176 [INFO][5795] ipam_plugin.go 417: Releasing address using handleID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.177 [INFO][5795] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.177 [INFO][5795] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.192 [WARNING][5795] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.192 [INFO][5795] ipam_plugin.go 445: Releasing address using workloadID ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" HandleID="k8s-pod-network.8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Workload="ip--172--31--27--200-k8s-csi--node--driver--2xd4p-eth0" Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.195 [INFO][5795] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:24.201526 containerd[1937]: 2024-10-08 19:37:24.197 [INFO][5789] k8s.go 621: Teardown processing complete. ContainerID="8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508" Oct 8 19:37:24.203012 containerd[1937]: time="2024-10-08T19:37:24.201715083Z" level=info msg="TearDown network for sandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" successfully" Oct 8 19:37:24.209244 containerd[1937]: time="2024-10-08T19:37:24.209074371Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:24.209244 containerd[1937]: time="2024-10-08T19:37:24.209164335Z" level=info msg="RemovePodSandbox \"8eb98c6d6e50729edd213dfdd6acfa898c3f45a7604e30cb98f2fdd983665508\" returns successfully" Oct 8 19:37:24.210461 containerd[1937]: time="2024-10-08T19:37:24.209965131Z" level=info msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\"" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.302 [WARNING][5813] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9100c31-c40a-445f-a34a-7baba2b66f92", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be", Pod:"coredns-76f75df574-r94pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e8c926f5b8", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.302 [INFO][5813] k8s.go 608: Cleaning up netns ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.303 [INFO][5813] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" iface="eth0" netns="" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.303 [INFO][5813] k8s.go 615: Releasing IP address(es) ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.303 [INFO][5813] utils.go 188: Calico CNI releasing IP address ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.368 [INFO][5819] ipam_plugin.go 417: Releasing address using handleID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.368 [INFO][5819] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.369 [INFO][5819] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.397 [WARNING][5819] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.398 [INFO][5819] ipam_plugin.go 445: Releasing address using workloadID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.401 [INFO][5819] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:24.408086 containerd[1937]: 2024-10-08 19:37:24.405 [INFO][5813] k8s.go 621: Teardown processing complete. 
ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.409407 containerd[1937]: time="2024-10-08T19:37:24.408142312Z" level=info msg="TearDown network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" successfully" Oct 8 19:37:24.409407 containerd[1937]: time="2024-10-08T19:37:24.408181168Z" level=info msg="StopPodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" returns successfully" Oct 8 19:37:24.409407 containerd[1937]: time="2024-10-08T19:37:24.408957316Z" level=info msg="RemovePodSandbox for \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\"" Oct 8 19:37:24.409407 containerd[1937]: time="2024-10-08T19:37:24.409115524Z" level=info msg="Forcibly stopping sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\"" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.509 [WARNING][5837] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"e9100c31-c40a-445f-a34a-7baba2b66f92", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 36, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"1579dbe796c50f53598f5ee3a250948016a552fd7e0af99c0e3a7018a29844be", Pod:"coredns-76f75df574-r94pj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.40.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e8c926f5b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.509 [INFO][5837] k8s.go 608: Cleaning up 
netns ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.510 [INFO][5837] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" iface="eth0" netns="" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.510 [INFO][5837] k8s.go 615: Releasing IP address(es) ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.510 [INFO][5837] utils.go 188: Calico CNI releasing IP address ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.570 [INFO][5843] ipam_plugin.go 417: Releasing address using handleID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.571 [INFO][5843] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.571 [INFO][5843] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.589 [WARNING][5843] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.589 [INFO][5843] ipam_plugin.go 445: Releasing address using workloadID ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" HandleID="k8s-pod-network.11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Workload="ip--172--31--27--200-k8s-coredns--76f75df574--r94pj-eth0" Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.596 [INFO][5843] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:37:24.606831 containerd[1937]: 2024-10-08 19:37:24.601 [INFO][5837] k8s.go 621: Teardown processing complete. ContainerID="11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87" Oct 8 19:37:24.606831 containerd[1937]: time="2024-10-08T19:37:24.606754361Z" level=info msg="TearDown network for sandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" successfully" Oct 8 19:37:24.617855 containerd[1937]: time="2024-10-08T19:37:24.616605005Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:24.617855 containerd[1937]: time="2024-10-08T19:37:24.616863353Z" level=info msg="RemovePodSandbox \"11a16286d4d78a9775a8e068a71cc1a247802d207566463bc95db3485fd54f87\" returns successfully" Oct 8 19:37:24.618417 containerd[1937]: time="2024-10-08T19:37:24.618345881Z" level=info msg="StopPodSandbox for \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\"" Oct 8 19:37:24.618539 containerd[1937]: time="2024-10-08T19:37:24.618511913Z" level=info msg="TearDown network for sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" successfully" Oct 8 19:37:24.618627 containerd[1937]: time="2024-10-08T19:37:24.618537389Z" level=info msg="StopPodSandbox for \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" returns successfully" Oct 8 19:37:24.622399 containerd[1937]: time="2024-10-08T19:37:24.620323853Z" level=info msg="RemovePodSandbox for \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\"" Oct 8 19:37:24.622399 containerd[1937]: time="2024-10-08T19:37:24.620386169Z" level=info msg="Forcibly stopping sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\"" Oct 8 19:37:24.622399 containerd[1937]: time="2024-10-08T19:37:24.620513069Z" level=info msg="TearDown network for sandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" successfully" Oct 8 19:37:24.629672 containerd[1937]: time="2024-10-08T19:37:24.629424881Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:37:24.629672 containerd[1937]: time="2024-10-08T19:37:24.629623397Z" level=info msg="RemovePodSandbox \"ee3913cf9a3eca2b21de5a85c613e879ec97fbbe97e99b2d2624e034d33d8e38\" returns successfully" Oct 8 19:37:25.719057 systemd[1]: Started sshd@14-172.31.27.200:22-139.178.68.195:45852.service - OpenSSH per-connection server daemon (139.178.68.195:45852). Oct 8 19:37:25.958858 containerd[1937]: time="2024-10-08T19:37:25.958788752Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:25.961917 containerd[1937]: time="2024-10-08T19:37:25.961782824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:37:25.963110 containerd[1937]: time="2024-10-08T19:37:25.962374856Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:25.963772 sshd[5855]: Accepted publickey for core from 139.178.68.195 port 45852 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:25.967984 sshd[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:25.970316 containerd[1937]: time="2024-10-08T19:37:25.969505604Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:25.971613 containerd[1937]: time="2024-10-08T19:37:25.971001524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 3.742290307s" Oct 8 19:37:25.971613 containerd[1937]: time="2024-10-08T19:37:25.971066504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:37:25.979160 containerd[1937]: time="2024-10-08T19:37:25.979106780Z" level=info msg="CreateContainer within sandbox \"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:37:25.985266 systemd-logind[1903]: New session 15 of user core. Oct 8 19:37:25.992926 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 8 19:37:26.024712 containerd[1937]: time="2024-10-08T19:37:26.024546724Z" level=info msg="CreateContainer within sandbox \"04555845d15a3ea2a965ea2d67dc6c3ae8ff61f8c2cab6565bb9490dc9f53dc9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"156f66c66555cde9303c1cc7347d7d01a5658dd642384cb4b95365e5bae76d74\"" Oct 8 19:37:26.026784 containerd[1937]: time="2024-10-08T19:37:26.026719168Z" level=info msg="StartContainer for \"156f66c66555cde9303c1cc7347d7d01a5658dd642384cb4b95365e5bae76d74\"" Oct 8 19:37:26.093640 systemd[1]: Started cri-containerd-156f66c66555cde9303c1cc7347d7d01a5658dd642384cb4b95365e5bae76d74.scope - libcontainer container 156f66c66555cde9303c1cc7347d7d01a5658dd642384cb4b95365e5bae76d74. Oct 8 19:37:26.175251 containerd[1937]: time="2024-10-08T19:37:26.174029405Z" level=info msg="StartContainer for \"156f66c66555cde9303c1cc7347d7d01a5658dd642384cb4b95365e5bae76d74\" returns successfully" Oct 8 19:37:26.310517 sshd[5855]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:26.316616 systemd[1]: session-15.scope: Deactivated successfully. 
Oct 8 19:37:26.317421 systemd-logind[1903]: Session 15 logged out. Waiting for processes to exit. Oct 8 19:37:26.318998 systemd[1]: sshd@14-172.31.27.200:22-139.178.68.195:45852.service: Deactivated successfully. Oct 8 19:37:26.327416 systemd-logind[1903]: Removed session 15. Oct 8 19:37:26.430291 kubelet[3384]: I1008 19:37:26.428696 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-2xd4p" podStartSLOduration=33.032047399 podStartE2EDuration="42.428606982s" podCreationTimestamp="2024-10-08 19:36:44 +0000 UTC" firstStartedPulling="2024-10-08 19:37:16.575130357 +0000 UTC m=+54.135535601" lastFinishedPulling="2024-10-08 19:37:25.971689928 +0000 UTC m=+63.532095184" observedRunningTime="2024-10-08 19:37:26.427750806 +0000 UTC m=+63.988156086" watchObservedRunningTime="2024-10-08 19:37:26.428606982 +0000 UTC m=+63.989012238" Oct 8 19:37:26.945456 kubelet[3384]: I1008 19:37:26.945390 3384 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:37:26.945456 kubelet[3384]: I1008 19:37:26.945464 3384 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:37:30.248242 systemd[1]: run-containerd-runc-k8s.io-f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a-runc.iXNloo.mount: Deactivated successfully. Oct 8 19:37:31.350825 systemd[1]: Started sshd@15-172.31.27.200:22-139.178.68.195:45056.service - OpenSSH per-connection server daemon (139.178.68.195:45056). 
Oct 8 19:37:31.535816 sshd[5931]: Accepted publickey for core from 139.178.68.195 port 45056 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:31.539435 sshd[5931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:31.550347 systemd-logind[1903]: New session 16 of user core. Oct 8 19:37:31.559369 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 8 19:37:31.819122 sshd[5931]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:31.825872 systemd[1]: sshd@15-172.31.27.200:22-139.178.68.195:45056.service: Deactivated successfully. Oct 8 19:37:31.830595 systemd[1]: session-16.scope: Deactivated successfully. Oct 8 19:37:31.832667 systemd-logind[1903]: Session 16 logged out. Waiting for processes to exit. Oct 8 19:37:31.836330 systemd-logind[1903]: Removed session 16. Oct 8 19:37:36.859775 systemd[1]: Started sshd@16-172.31.27.200:22-139.178.68.195:45068.service - OpenSSH per-connection server daemon (139.178.68.195:45068). Oct 8 19:37:37.046193 sshd[5955]: Accepted publickey for core from 139.178.68.195 port 45068 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:37.048889 sshd[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:37.058328 systemd-logind[1903]: New session 17 of user core. Oct 8 19:37:37.066534 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 8 19:37:37.337059 sshd[5955]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:37.343862 systemd[1]: sshd@16-172.31.27.200:22-139.178.68.195:45068.service: Deactivated successfully. Oct 8 19:37:37.347791 systemd[1]: session-17.scope: Deactivated successfully. Oct 8 19:37:37.349620 systemd-logind[1903]: Session 17 logged out. Waiting for processes to exit. Oct 8 19:37:37.352156 systemd-logind[1903]: Removed session 17. 
Oct 8 19:37:42.378767 systemd[1]: Started sshd@17-172.31.27.200:22-139.178.68.195:40894.service - OpenSSH per-connection server daemon (139.178.68.195:40894). Oct 8 19:37:42.571149 sshd[5971]: Accepted publickey for core from 139.178.68.195 port 40894 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:42.574584 sshd[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:42.589376 systemd-logind[1903]: New session 18 of user core. Oct 8 19:37:42.595493 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 8 19:37:42.868576 sshd[5971]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:42.875971 systemd[1]: sshd@17-172.31.27.200:22-139.178.68.195:40894.service: Deactivated successfully. Oct 8 19:37:42.881970 systemd[1]: session-18.scope: Deactivated successfully. Oct 8 19:37:42.883916 systemd-logind[1903]: Session 18 logged out. Waiting for processes to exit. Oct 8 19:37:42.886136 systemd-logind[1903]: Removed session 18. Oct 8 19:37:42.916554 systemd[1]: Started sshd@18-172.31.27.200:22-139.178.68.195:40898.service - OpenSSH per-connection server daemon (139.178.68.195:40898). Oct 8 19:37:43.096358 sshd[5984]: Accepted publickey for core from 139.178.68.195 port 40898 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:43.099262 sshd[5984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:43.106775 systemd-logind[1903]: New session 19 of user core. Oct 8 19:37:43.115506 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 8 19:37:43.630814 sshd[5984]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:43.637929 systemd[1]: sshd@18-172.31.27.200:22-139.178.68.195:40898.service: Deactivated successfully. Oct 8 19:37:43.644647 systemd[1]: session-19.scope: Deactivated successfully. Oct 8 19:37:43.646598 systemd-logind[1903]: Session 19 logged out. Waiting for processes to exit. 
Oct 8 19:37:43.648787 systemd-logind[1903]: Removed session 19. Oct 8 19:37:43.673775 systemd[1]: Started sshd@19-172.31.27.200:22-139.178.68.195:40902.service - OpenSSH per-connection server daemon (139.178.68.195:40902). Oct 8 19:37:43.857898 sshd[5996]: Accepted publickey for core from 139.178.68.195 port 40902 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:43.860589 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:43.870490 systemd-logind[1903]: New session 20 of user core. Oct 8 19:37:43.875531 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 8 19:37:47.158546 sshd[5996]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:47.167168 systemd-logind[1903]: Session 20 logged out. Waiting for processes to exit. Oct 8 19:37:47.171083 systemd[1]: sshd@19-172.31.27.200:22-139.178.68.195:40902.service: Deactivated successfully. Oct 8 19:37:47.177096 systemd[1]: session-20.scope: Deactivated successfully. Oct 8 19:37:47.177565 systemd[1]: session-20.scope: Consumed 1.042s CPU time. Oct 8 19:37:47.200630 systemd-logind[1903]: Removed session 20. Oct 8 19:37:47.206766 systemd[1]: Started sshd@20-172.31.27.200:22-139.178.68.195:40914.service - OpenSSH per-connection server daemon (139.178.68.195:40914). Oct 8 19:37:47.412860 sshd[6022]: Accepted publickey for core from 139.178.68.195 port 40914 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:47.417650 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:47.427846 systemd-logind[1903]: New session 21 of user core. Oct 8 19:37:47.433767 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 8 19:37:47.960952 sshd[6022]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:47.970185 systemd[1]: sshd@20-172.31.27.200:22-139.178.68.195:40914.service: Deactivated successfully. 
Oct 8 19:37:47.976019 systemd[1]: session-21.scope: Deactivated successfully. Oct 8 19:37:47.978649 systemd-logind[1903]: Session 21 logged out. Waiting for processes to exit. Oct 8 19:37:47.980649 systemd-logind[1903]: Removed session 21. Oct 8 19:37:48.003779 systemd[1]: Started sshd@21-172.31.27.200:22-139.178.68.195:40920.service - OpenSSH per-connection server daemon (139.178.68.195:40920). Oct 8 19:37:48.190448 sshd[6034]: Accepted publickey for core from 139.178.68.195 port 40920 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:48.193378 sshd[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:48.201206 systemd-logind[1903]: New session 22 of user core. Oct 8 19:37:48.209512 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 8 19:37:48.506754 sshd[6034]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:48.521743 systemd[1]: sshd@21-172.31.27.200:22-139.178.68.195:40920.service: Deactivated successfully. Oct 8 19:37:48.534849 systemd[1]: session-22.scope: Deactivated successfully. Oct 8 19:37:48.537586 systemd-logind[1903]: Session 22 logged out. Waiting for processes to exit. Oct 8 19:37:48.544779 systemd-logind[1903]: Removed session 22. Oct 8 19:37:53.609087 systemd[1]: Started sshd@22-172.31.27.200:22-139.178.68.195:60520.service - OpenSSH per-connection server daemon (139.178.68.195:60520). Oct 8 19:37:53.716109 kubelet[3384]: I1008 19:37:53.713817 3384 topology_manager.go:215] "Topology Admit Handler" podUID="9389724f-e6a9-4d95-8309-0e39af51c027" podNamespace="calico-apiserver" podName="calico-apiserver-7c9cb85b-bmslv" Oct 8 19:37:53.734710 systemd[1]: Created slice kubepods-besteffort-pod9389724f_e6a9_4d95_8309_0e39af51c027.slice - libcontainer container kubepods-besteffort-pod9389724f_e6a9_4d95_8309_0e39af51c027.slice. 
Oct 8 19:37:53.851813 sshd[6094]: Accepted publickey for core from 139.178.68.195 port 60520 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:53.855448 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:53.869744 systemd-logind[1903]: New session 23 of user core. Oct 8 19:37:53.876640 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 8 19:37:53.897579 kubelet[3384]: I1008 19:37:53.897464 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6p69\" (UniqueName: \"kubernetes.io/projected/9389724f-e6a9-4d95-8309-0e39af51c027-kube-api-access-q6p69\") pod \"calico-apiserver-7c9cb85b-bmslv\" (UID: \"9389724f-e6a9-4d95-8309-0e39af51c027\") " pod="calico-apiserver/calico-apiserver-7c9cb85b-bmslv" Oct 8 19:37:53.897579 kubelet[3384]: I1008 19:37:53.897597 3384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9389724f-e6a9-4d95-8309-0e39af51c027-calico-apiserver-certs\") pod \"calico-apiserver-7c9cb85b-bmslv\" (UID: \"9389724f-e6a9-4d95-8309-0e39af51c027\") " pod="calico-apiserver/calico-apiserver-7c9cb85b-bmslv" Oct 8 19:37:53.999803 kubelet[3384]: E1008 19:37:53.999733 3384 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:37:53.999966 kubelet[3384]: E1008 19:37:53.999896 3384 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9389724f-e6a9-4d95-8309-0e39af51c027-calico-apiserver-certs podName:9389724f-e6a9-4d95-8309-0e39af51c027 nodeName:}" failed. No retries permitted until 2024-10-08 19:37:54.499867203 +0000 UTC m=+92.060272447 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9389724f-e6a9-4d95-8309-0e39af51c027-calico-apiserver-certs") pod "calico-apiserver-7c9cb85b-bmslv" (UID: "9389724f-e6a9-4d95-8309-0e39af51c027") : secret "calico-apiserver-certs" not found Oct 8 19:37:54.238474 sshd[6094]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:54.246858 systemd[1]: sshd@22-172.31.27.200:22-139.178.68.195:60520.service: Deactivated successfully. Oct 8 19:37:54.258144 systemd[1]: session-23.scope: Deactivated successfully. Oct 8 19:37:54.260627 systemd-logind[1903]: Session 23 logged out. Waiting for processes to exit. Oct 8 19:37:54.264349 systemd-logind[1903]: Removed session 23. Oct 8 19:37:54.649777 containerd[1937]: time="2024-10-08T19:37:54.649637687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9cb85b-bmslv,Uid:9389724f-e6a9-4d95-8309-0e39af51c027,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:37:54.936866 systemd-networkd[1821]: cali41ca59b2a5d: Link UP Oct 8 19:37:54.938583 systemd-networkd[1821]: cali41ca59b2a5d: Gained carrier Oct 8 19:37:54.947789 (udev-worker)[6136]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.760 [INFO][6113] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0 calico-apiserver-7c9cb85b- calico-apiserver 9389724f-e6a9-4d95-8309-0e39af51c027 1140 0 2024-10-08 19:37:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7c9cb85b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-27-200 calico-apiserver-7c9cb85b-bmslv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41ca59b2a5d [] []}} ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.761 [INFO][6113] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.827 [INFO][6124] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" HandleID="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Workload="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.846 [INFO][6124] ipam_plugin.go 270: Auto assigning IP ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" 
HandleID="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Workload="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003171a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-27-200", "pod":"calico-apiserver-7c9cb85b-bmslv", "timestamp":"2024-10-08 19:37:54.827053199 +0000 UTC"}, Hostname:"ip-172-31-27-200", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.846 [INFO][6124] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.847 [INFO][6124] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.847 [INFO][6124] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-27-200' Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.853 [INFO][6124] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.869 [INFO][6124] ipam.go 372: Looking up existing affinities for host host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.887 [INFO][6124] ipam.go 489: Trying affinity for 192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.892 [INFO][6124] ipam.go 155: Attempting to load block cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.898 [INFO][6124] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.40.0/26 host="ip-172-31-27-200" Oct 8 19:37:54.982355 
containerd[1937]: 2024-10-08 19:37:54.898 [INFO][6124] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.40.0/26 handle="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.900 [INFO][6124] ipam.go 1685: Creating new handle: k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.907 [INFO][6124] ipam.go 1203: Writing block in order to claim IPs block=192.168.40.0/26 handle="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.923 [INFO][6124] ipam.go 1216: Successfully claimed IPs: [192.168.40.5/26] block=192.168.40.0/26 handle="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.923 [INFO][6124] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.40.5/26] handle="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" host="ip-172-31-27-200" Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.923 [INFO][6124] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:37:54.982355 containerd[1937]: 2024-10-08 19:37:54.923 [INFO][6124] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.40.5/26] IPv6=[] ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" HandleID="k8s-pod-network.796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Workload="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.928 [INFO][6113] k8s.go 386: Populated endpoint ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0", GenerateName:"calico-apiserver-7c9cb85b-", Namespace:"calico-apiserver", SelfLink:"", UID:"9389724f-e6a9-4d95-8309-0e39af51c027", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9cb85b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"", Pod:"calico-apiserver-7c9cb85b-bmslv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41ca59b2a5d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.930 [INFO][6113] k8s.go 387: Calico CNI using IPs: [192.168.40.5/32] ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.930 [INFO][6113] dataplane_linux.go 68: Setting the host side veth name to cali41ca59b2a5d ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.937 [INFO][6113] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.940 [INFO][6113] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0", GenerateName:"calico-apiserver-7c9cb85b-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"9389724f-e6a9-4d95-8309-0e39af51c027", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 37, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7c9cb85b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-27-200", ContainerID:"796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f", Pod:"calico-apiserver-7c9cb85b-bmslv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.40.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41ca59b2a5d", MAC:"8a:d4:be:e3:10:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:37:54.983666 containerd[1937]: 2024-10-08 19:37:54.976 [INFO][6113] k8s.go 500: Wrote updated endpoint to datastore ContainerID="796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f" Namespace="calico-apiserver" Pod="calico-apiserver-7c9cb85b-bmslv" WorkloadEndpoint="ip--172--31--27--200-k8s-calico--apiserver--7c9cb85b--bmslv-eth0" Oct 8 19:37:55.048043 containerd[1937]: time="2024-10-08T19:37:55.047677605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:37:55.048043 containerd[1937]: time="2024-10-08T19:37:55.047819277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:37:55.048043 containerd[1937]: time="2024-10-08T19:37:55.047862345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:55.049568 containerd[1937]: time="2024-10-08T19:37:55.048623973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:37:55.132552 systemd[1]: Started cri-containerd-796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f.scope - libcontainer container 796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f. Oct 8 19:37:55.243778 containerd[1937]: time="2024-10-08T19:37:55.243333705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7c9cb85b-bmslv,Uid:9389724f-e6a9-4d95-8309-0e39af51c027,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f\"" Oct 8 19:37:55.250562 containerd[1937]: time="2024-10-08T19:37:55.250275658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:37:56.966421 systemd-networkd[1821]: cali41ca59b2a5d: Gained IPv6LL Oct 8 19:37:58.165264 containerd[1937]: time="2024-10-08T19:37:58.165106044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.167072 containerd[1937]: time="2024-10-08T19:37:58.166858776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:37:58.169065 containerd[1937]: time="2024-10-08T19:37:58.168445584Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.172887 containerd[1937]: time="2024-10-08T19:37:58.172775292Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:37:58.174812 containerd[1937]: time="2024-10-08T19:37:58.174757956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 2.924172206s" Oct 8 19:37:58.174992 containerd[1937]: time="2024-10-08T19:37:58.174960120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:37:58.181209 containerd[1937]: time="2024-10-08T19:37:58.180769884Z" level=info msg="CreateContainer within sandbox \"796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:37:58.209310 containerd[1937]: time="2024-10-08T19:37:58.209065776Z" level=info msg="CreateContainer within sandbox \"796467afe63f8cd07d8f13db51d8936b695dad9cbe6e0e4af6b9526b14ef7e7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b\"" Oct 8 19:37:58.211925 containerd[1937]: time="2024-10-08T19:37:58.211856544Z" level=info msg="StartContainer for \"bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b\"" Oct 8 19:37:58.214400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount778348014.mount: Deactivated successfully. Oct 8 19:37:58.290079 systemd[1]: run-containerd-runc-k8s.io-bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b-runc.SM7Ta8.mount: Deactivated successfully. 
Oct 8 19:37:58.307941 systemd[1]: Started cri-containerd-bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b.scope - libcontainer container bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b. Oct 8 19:37:58.443528 containerd[1937]: time="2024-10-08T19:37:58.442312033Z" level=info msg="StartContainer for \"bd41760e11d6fafab2c2ea61d107d900db3a58d49f8396016eca7b9495cb0e5b\" returns successfully" Oct 8 19:37:59.282146 systemd[1]: Started sshd@23-172.31.27.200:22-139.178.68.195:60526.service - OpenSSH per-connection server daemon (139.178.68.195:60526). Oct 8 19:37:59.482254 sshd[6247]: Accepted publickey for core from 139.178.68.195 port 60526 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:37:59.485352 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:37:59.499415 ntpd[1895]: Listen normally on 14 cali41ca59b2a5d [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:37:59.500087 ntpd[1895]: 8 Oct 19:37:59 ntpd[1895]: Listen normally on 14 cali41ca59b2a5d [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:37:59.503304 systemd-logind[1903]: New session 24 of user core. Oct 8 19:37:59.509575 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 8 19:37:59.840011 sshd[6247]: pam_unix(sshd:session): session closed for user core Oct 8 19:37:59.847262 systemd[1]: session-24.scope: Deactivated successfully. Oct 8 19:37:59.849173 systemd[1]: sshd@23-172.31.27.200:22-139.178.68.195:60526.service: Deactivated successfully. Oct 8 19:37:59.860030 systemd-logind[1903]: Session 24 logged out. Waiting for processes to exit. Oct 8 19:37:59.866276 systemd-logind[1903]: Removed session 24. 
Oct 8 19:38:01.614256 kubelet[3384]: I1008 19:38:01.613212 3384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7c9cb85b-bmslv" podStartSLOduration=5.685674399 podStartE2EDuration="8.613149113s" podCreationTimestamp="2024-10-08 19:37:53 +0000 UTC" firstStartedPulling="2024-10-08 19:37:55.248004106 +0000 UTC m=+92.808409350" lastFinishedPulling="2024-10-08 19:37:58.17547882 +0000 UTC m=+95.735884064" observedRunningTime="2024-10-08 19:37:58.550866326 +0000 UTC m=+96.111271594" watchObservedRunningTime="2024-10-08 19:38:01.613149113 +0000 UTC m=+99.173554369" Oct 8 19:38:04.883780 systemd[1]: Started sshd@24-172.31.27.200:22-139.178.68.195:34334.service - OpenSSH per-connection server daemon (139.178.68.195:34334). Oct 8 19:38:05.069387 sshd[6288]: Accepted publickey for core from 139.178.68.195 port 34334 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:05.072462 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:05.080363 systemd-logind[1903]: New session 25 of user core. Oct 8 19:38:05.088514 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 8 19:38:05.362050 sshd[6288]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:05.372238 systemd[1]: sshd@24-172.31.27.200:22-139.178.68.195:34334.service: Deactivated successfully. Oct 8 19:38:05.384887 systemd[1]: session-25.scope: Deactivated successfully. Oct 8 19:38:05.397341 systemd-logind[1903]: Session 25 logged out. Waiting for processes to exit. Oct 8 19:38:05.402446 systemd-logind[1903]: Removed session 25. Oct 8 19:38:10.401626 systemd[1]: Started sshd@25-172.31.27.200:22-139.178.68.195:34344.service - OpenSSH per-connection server daemon (139.178.68.195:34344). 
Oct 8 19:38:10.595299 sshd[6312]: Accepted publickey for core from 139.178.68.195 port 34344 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:10.597961 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:10.606454 systemd-logind[1903]: New session 26 of user core. Oct 8 19:38:10.610506 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 8 19:38:10.864805 sshd[6312]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:10.871319 systemd[1]: sshd@25-172.31.27.200:22-139.178.68.195:34344.service: Deactivated successfully. Oct 8 19:38:10.874874 systemd[1]: session-26.scope: Deactivated successfully. Oct 8 19:38:10.877036 systemd-logind[1903]: Session 26 logged out. Waiting for processes to exit. Oct 8 19:38:10.879450 systemd-logind[1903]: Removed session 26. Oct 8 19:38:15.905114 systemd[1]: Started sshd@26-172.31.27.200:22-139.178.68.195:50092.service - OpenSSH per-connection server daemon (139.178.68.195:50092). Oct 8 19:38:16.089345 sshd[6329]: Accepted publickey for core from 139.178.68.195 port 50092 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:16.092251 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:16.100843 systemd-logind[1903]: New session 27 of user core. Oct 8 19:38:16.106484 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 8 19:38:16.422116 sshd[6329]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:16.430157 systemd[1]: session-27.scope: Deactivated successfully. Oct 8 19:38:16.431944 systemd[1]: sshd@26-172.31.27.200:22-139.178.68.195:50092.service: Deactivated successfully. Oct 8 19:38:16.443208 systemd-logind[1903]: Session 27 logged out. Waiting for processes to exit. Oct 8 19:38:16.447807 systemd-logind[1903]: Removed session 27. 
Oct 8 19:38:21.465834 systemd[1]: Started sshd@27-172.31.27.200:22-139.178.68.195:46802.service - OpenSSH per-connection server daemon (139.178.68.195:46802). Oct 8 19:38:21.674324 sshd[6347]: Accepted publickey for core from 139.178.68.195 port 46802 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:21.678834 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:21.688823 systemd-logind[1903]: New session 28 of user core. Oct 8 19:38:21.692529 systemd[1]: Started session-28.scope - Session 28 of User core. Oct 8 19:38:21.952595 sshd[6347]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:21.959199 systemd[1]: sshd@27-172.31.27.200:22-139.178.68.195:46802.service: Deactivated successfully. Oct 8 19:38:21.963595 systemd[1]: session-28.scope: Deactivated successfully. Oct 8 19:38:21.965928 systemd-logind[1903]: Session 28 logged out. Waiting for processes to exit. Oct 8 19:38:21.968039 systemd-logind[1903]: Removed session 28. Oct 8 19:38:26.997748 systemd[1]: Started sshd@28-172.31.27.200:22-139.178.68.195:46808.service - OpenSSH per-connection server daemon (139.178.68.195:46808). Oct 8 19:38:27.190427 sshd[6389]: Accepted publickey for core from 139.178.68.195 port 46808 ssh2: RSA SHA256:Mk9S5TnwRn/Nvp9hJQCsIZR4kjDFrRbnnuGA/cRmM/Q Oct 8 19:38:27.193095 sshd[6389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 8 19:38:27.202520 systemd-logind[1903]: New session 29 of user core. Oct 8 19:38:27.207575 systemd[1]: Started session-29.scope - Session 29 of User core. Oct 8 19:38:27.458716 sshd[6389]: pam_unix(sshd:session): session closed for user core Oct 8 19:38:27.465462 systemd[1]: sshd@28-172.31.27.200:22-139.178.68.195:46808.service: Deactivated successfully. Oct 8 19:38:27.468941 systemd[1]: session-29.scope: Deactivated successfully. Oct 8 19:38:27.471190 systemd-logind[1903]: Session 29 logged out. Waiting for processes to exit. 
Oct 8 19:38:27.473340 systemd-logind[1903]: Removed session 29. Oct 8 19:38:41.479438 systemd[1]: cri-containerd-741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5.scope: Deactivated successfully. Oct 8 19:38:41.482645 systemd[1]: cri-containerd-741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5.scope: Consumed 5.901s CPU time, 20.3M memory peak, 0B memory swap peak. Oct 8 19:38:41.528342 containerd[1937]: time="2024-10-08T19:38:41.528094555Z" level=info msg="shim disconnected" id=741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5 namespace=k8s.io Oct 8 19:38:41.530304 containerd[1937]: time="2024-10-08T19:38:41.529485043Z" level=warning msg="cleaning up after shim disconnected" id=741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5 namespace=k8s.io Oct 8 19:38:41.530304 containerd[1937]: time="2024-10-08T19:38:41.529528327Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:38:41.533381 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5-rootfs.mount: Deactivated successfully. 
Oct 8 19:38:41.594797 containerd[1937]: time="2024-10-08T19:38:41.594049160Z" level=warning msg="cleanup warnings time=\"2024-10-08T19:38:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Oct 8 19:38:41.673292 kubelet[3384]: I1008 19:38:41.672926 3384 scope.go:117] "RemoveContainer" containerID="741ecd6ea078cd2d795751c454bb38fe86d6ae0d239b39c6a8e08f11a998a7c5" Oct 8 19:38:41.681203 containerd[1937]: time="2024-10-08T19:38:41.681086504Z" level=info msg="CreateContainer within sandbox \"518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Oct 8 19:38:41.709261 containerd[1937]: time="2024-10-08T19:38:41.708177560Z" level=info msg="CreateContainer within sandbox \"518b140b5636df0dd6b8980fe5650eeb8ee6bf527d2cfab49a7368a721b1fed0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"28b7b7a99e10613038f32bfd468731033f816dab0f3e86c479b7fa9cd1c0b9e7\"" Oct 8 19:38:41.710572 containerd[1937]: time="2024-10-08T19:38:41.710128736Z" level=info msg="StartContainer for \"28b7b7a99e10613038f32bfd468731033f816dab0f3e86c479b7fa9cd1c0b9e7\"" Oct 8 19:38:41.769554 systemd[1]: Started cri-containerd-28b7b7a99e10613038f32bfd468731033f816dab0f3e86c479b7fa9cd1c0b9e7.scope - libcontainer container 28b7b7a99e10613038f32bfd468731033f816dab0f3e86c479b7fa9cd1c0b9e7. Oct 8 19:38:41.880350 containerd[1937]: time="2024-10-08T19:38:41.880279941Z" level=info msg="StartContainer for \"28b7b7a99e10613038f32bfd468731033f816dab0f3e86c479b7fa9cd1c0b9e7\" returns successfully" Oct 8 19:38:42.589791 systemd[1]: cri-containerd-bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82.scope: Deactivated successfully. 
Oct 8 19:38:42.590866 systemd[1]: cri-containerd-bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82.scope: Consumed 13.406s CPU time. Oct 8 19:38:42.649530 containerd[1937]: time="2024-10-08T19:38:42.648779733Z" level=info msg="shim disconnected" id=bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82 namespace=k8s.io Oct 8 19:38:42.649530 containerd[1937]: time="2024-10-08T19:38:42.648880545Z" level=warning msg="cleaning up after shim disconnected" id=bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82 namespace=k8s.io Oct 8 19:38:42.649530 containerd[1937]: time="2024-10-08T19:38:42.649180977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:38:42.651891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82-rootfs.mount: Deactivated successfully. Oct 8 19:38:43.699910 kubelet[3384]: I1008 19:38:43.699345 3384 scope.go:117] "RemoveContainer" containerID="bac6544513c398537c296c9fa1807917373bea3fe34e72e20766ecb243270b82" Oct 8 19:38:43.703519 containerd[1937]: time="2024-10-08T19:38:43.703209022Z" level=info msg="CreateContainer within sandbox \"77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Oct 8 19:38:43.728758 containerd[1937]: time="2024-10-08T19:38:43.725631862Z" level=info msg="CreateContainer within sandbox \"77c4a08f6f933c12b41e2c38e79f9f123f7583494a1e8c0adc728963dd5780b6\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"1c7502844a476f739399516aab0861470c4f957d0b4f203193a2ac7f81f603ba\"" Oct 8 19:38:43.728758 containerd[1937]: time="2024-10-08T19:38:43.728864854Z" level=info msg="StartContainer for \"1c7502844a476f739399516aab0861470c4f957d0b4f203193a2ac7f81f603ba\"" Oct 8 19:38:43.801560 systemd[1]: Started cri-containerd-1c7502844a476f739399516aab0861470c4f957d0b4f203193a2ac7f81f603ba.scope - libcontainer 
container 1c7502844a476f739399516aab0861470c4f957d0b4f203193a2ac7f81f603ba. Oct 8 19:38:43.867549 containerd[1937]: time="2024-10-08T19:38:43.867473411Z" level=info msg="StartContainer for \"1c7502844a476f739399516aab0861470c4f957d0b4f203193a2ac7f81f603ba\" returns successfully" Oct 8 19:38:45.830595 kubelet[3384]: E1008 19:38:45.829907 3384 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 8 19:38:45.842721 systemd[1]: cri-containerd-7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801.scope: Deactivated successfully. Oct 8 19:38:45.845378 systemd[1]: cri-containerd-7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801.scope: Consumed 2.631s CPU time, 16.0M memory peak, 0B memory swap peak. Oct 8 19:38:45.882017 containerd[1937]: time="2024-10-08T19:38:45.881893621Z" level=info msg="shim disconnected" id=7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801 namespace=k8s.io Oct 8 19:38:45.882017 containerd[1937]: time="2024-10-08T19:38:45.881974609Z" level=warning msg="cleaning up after shim disconnected" id=7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801 namespace=k8s.io Oct 8 19:38:45.882017 containerd[1937]: time="2024-10-08T19:38:45.881996341Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 8 19:38:45.887717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801-rootfs.mount: Deactivated successfully. 
Oct 8 19:38:46.715893 kubelet[3384]: I1008 19:38:46.715835 3384 scope.go:117] "RemoveContainer" containerID="7c830ec7f3bc0fa8b476866352b53efddddb48c83e8d939618bde47fb33b3801" Oct 8 19:38:46.719731 containerd[1937]: time="2024-10-08T19:38:46.719638909Z" level=info msg="CreateContainer within sandbox \"1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Oct 8 19:38:46.742391 containerd[1937]: time="2024-10-08T19:38:46.742316125Z" level=info msg="CreateContainer within sandbox \"1fa43e43987767f4aa8f058d9cc8f416c32068223154263bc0eb99b163320dfa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8554151584233ffc50e3633a56712eac5bfcaf9ff4948f39b31f63b51ab45b9e\"" Oct 8 19:38:46.746251 containerd[1937]: time="2024-10-08T19:38:46.744661441Z" level=info msg="StartContainer for \"8554151584233ffc50e3633a56712eac5bfcaf9ff4948f39b31f63b51ab45b9e\"" Oct 8 19:38:46.810559 systemd[1]: Started cri-containerd-8554151584233ffc50e3633a56712eac5bfcaf9ff4948f39b31f63b51ab45b9e.scope - libcontainer container 8554151584233ffc50e3633a56712eac5bfcaf9ff4948f39b31f63b51ab45b9e. Oct 8 19:38:46.889753 containerd[1937]: time="2024-10-08T19:38:46.889658438Z" level=info msg="StartContainer for \"8554151584233ffc50e3633a56712eac5bfcaf9ff4948f39b31f63b51ab45b9e\" returns successfully" Oct 8 19:38:51.428656 systemd[1]: run-containerd-runc-k8s.io-f6fd653375566bb6196cd0ee236c572a3860cb5bc68bf0d2434dd4f49017c34a-runc.Got6I7.mount: Deactivated successfully. Oct 8 19:38:55.830836 kubelet[3384]: E1008 19:38:55.830733 3384 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.200:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-200?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"