Oct 8 19:31:16.265995 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Oct 8 19:31:16.266073 kernel: Linux version 6.6.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Tue Oct 8 18:22:02 -00 2024
Oct 8 19:31:16.266104 kernel: KASLR disabled due to lack of seed
Oct 8 19:31:16.266122 kernel: efi: EFI v2.7 by EDK II
Oct 8 19:31:16.266139 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Oct 8 19:31:16.266156 kernel: ACPI: Early table checksum verification disabled
Oct 8 19:31:16.266175 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Oct 8 19:31:16.266194 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Oct 8 19:31:16.266211 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Oct 8 19:31:16.266228 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Oct 8 19:31:16.266259 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Oct 8 19:31:16.266276 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Oct 8 19:31:16.266294 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Oct 8 19:31:16.266312 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Oct 8 19:31:16.266333 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Oct 8 19:31:16.266361 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Oct 8 19:31:16.266381 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Oct 8 19:31:16.266400 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Oct 8 19:31:16.266418 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Oct 8 19:31:16.266437 kernel: printk: bootconsole [uart0] enabled
Oct 8 19:31:16.266455 kernel: NUMA: Failed to initialise from firmware
Oct 8 19:31:16.266474 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:31:16.271184 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Oct 8 19:31:16.271229 kernel: Zone ranges:
Oct 8 19:31:16.271249 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Oct 8 19:31:16.271267 kernel: DMA32 empty
Oct 8 19:31:16.271300 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Oct 8 19:31:16.271318 kernel: Movable zone start for each node
Oct 8 19:31:16.271335 kernel: Early memory node ranges
Oct 8 19:31:16.271352 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Oct 8 19:31:16.271370 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Oct 8 19:31:16.271387 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Oct 8 19:31:16.271404 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Oct 8 19:31:16.271424 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Oct 8 19:31:16.271442 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Oct 8 19:31:16.271459 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Oct 8 19:31:16.271477 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Oct 8 19:31:16.271549 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Oct 8 19:31:16.271578 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Oct 8 19:31:16.271597 kernel: psci: probing for conduit method from ACPI.
Oct 8 19:31:16.271622 kernel: psci: PSCIv1.0 detected in firmware.
Oct 8 19:31:16.271641 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 8 19:31:16.271661 kernel: psci: Trusted OS migration not required
Oct 8 19:31:16.271685 kernel: psci: SMC Calling Convention v1.1
Oct 8 19:31:16.271704 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Oct 8 19:31:16.271722 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Oct 8 19:31:16.271769 kernel: pcpu-alloc: [0] 0 [0] 1
Oct 8 19:31:16.271790 kernel: Detected PIPT I-cache on CPU0
Oct 8 19:31:16.271809 kernel: CPU features: detected: GIC system register CPU interface
Oct 8 19:31:16.271828 kernel: CPU features: detected: Spectre-v2
Oct 8 19:31:16.271847 kernel: CPU features: detected: Spectre-v3a
Oct 8 19:31:16.271865 kernel: CPU features: detected: Spectre-BHB
Oct 8 19:31:16.271884 kernel: CPU features: detected: ARM erratum 1742098
Oct 8 19:31:16.271903 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Oct 8 19:31:16.271933 kernel: alternatives: applying boot alternatives
Oct 8 19:31:16.271955 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:31:16.271977 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 8 19:31:16.271996 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 8 19:31:16.272014 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 8 19:31:16.272033 kernel: Fallback order for Node 0: 0
Oct 8 19:31:16.272052 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Oct 8 19:31:16.272071 kernel: Policy zone: Normal
Oct 8 19:31:16.272089 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 8 19:31:16.272107 kernel: software IO TLB: area num 2.
Oct 8 19:31:16.272126 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Oct 8 19:31:16.272154 kernel: Memory: 3820472K/4030464K available (10240K kernel code, 2184K rwdata, 8080K rodata, 39104K init, 897K bss, 209992K reserved, 0K cma-reserved)
Oct 8 19:31:16.272175 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 8 19:31:16.272196 kernel: trace event string verifier disabled
Oct 8 19:31:16.272216 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 8 19:31:16.272237 kernel: rcu: RCU event tracing is enabled.
Oct 8 19:31:16.272259 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 8 19:31:16.272279 kernel: Trampoline variant of Tasks RCU enabled.
Oct 8 19:31:16.272299 kernel: Tracing variant of Tasks RCU enabled.
Oct 8 19:31:16.272319 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 8 19:31:16.272339 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 8 19:31:16.272358 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 8 19:31:16.272392 kernel: GICv3: 96 SPIs implemented
Oct 8 19:31:16.272413 kernel: GICv3: 0 Extended SPIs implemented
Oct 8 19:31:16.272434 kernel: Root IRQ handler: gic_handle_irq
Oct 8 19:31:16.272453 kernel: GICv3: GICv3 features: 16 PPIs
Oct 8 19:31:16.272472 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Oct 8 19:31:16.272555 kernel: ITS [mem 0x10080000-0x1009ffff]
Oct 8 19:31:16.272585 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1)
Oct 8 19:31:16.272605 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1)
Oct 8 19:31:16.272625 kernel: GICv3: using LPI property table @0x00000004000e0000
Oct 8 19:31:16.272645 kernel: ITS: Using hypervisor restricted LPI range [128]
Oct 8 19:31:16.272664 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000
Oct 8 19:31:16.272684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 8 19:31:16.272724 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Oct 8 19:31:16.272744 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Oct 8 19:31:16.272765 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Oct 8 19:31:16.272785 kernel: Console: colour dummy device 80x25
Oct 8 19:31:16.272806 kernel: printk: console [tty1] enabled
Oct 8 19:31:16.272828 kernel: ACPI: Core revision 20230628
Oct 8 19:31:16.272850 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Oct 8 19:31:16.272872 kernel: pid_max: default: 32768 minimum: 301
Oct 8 19:31:16.272893 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Oct 8 19:31:16.272914 kernel: SELinux: Initializing.
Oct 8 19:31:16.272949 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:31:16.272971 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 8 19:31:16.272991 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:31:16.273014 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Oct 8 19:31:16.273036 kernel: rcu: Hierarchical SRCU implementation.
Oct 8 19:31:16.273061 kernel: rcu: Max phase no-delay instances is 400.
Oct 8 19:31:16.273083 kernel: Platform MSI: ITS@0x10080000 domain created
Oct 8 19:31:16.273105 kernel: PCI/MSI: ITS@0x10080000 domain created
Oct 8 19:31:16.273126 kernel: Remapping and enabling EFI services.
Oct 8 19:31:16.273162 kernel: smp: Bringing up secondary CPUs ...
Oct 8 19:31:16.273184 kernel: Detected PIPT I-cache on CPU1
Oct 8 19:31:16.273206 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Oct 8 19:31:16.273227 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000
Oct 8 19:31:16.273247 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Oct 8 19:31:16.273266 kernel: smp: Brought up 1 node, 2 CPUs
Oct 8 19:31:16.273286 kernel: SMP: Total of 2 processors activated.
Oct 8 19:31:16.273306 kernel: CPU features: detected: 32-bit EL0 Support
Oct 8 19:31:16.273325 kernel: CPU features: detected: 32-bit EL1 Support
Oct 8 19:31:16.273360 kernel: CPU features: detected: CRC32 instructions
Oct 8 19:31:16.273381 kernel: CPU: All CPU(s) started at EL1
Oct 8 19:31:16.273415 kernel: alternatives: applying system-wide alternatives
Oct 8 19:31:16.273442 kernel: devtmpfs: initialized
Oct 8 19:31:16.273463 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 8 19:31:16.275552 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 8 19:31:16.275644 kernel: pinctrl core: initialized pinctrl subsystem
Oct 8 19:31:16.275665 kernel: SMBIOS 3.0.0 present.
Oct 8 19:31:16.275686 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Oct 8 19:31:16.275723 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 8 19:31:16.275765 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 8 19:31:16.275788 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 8 19:31:16.275808 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 8 19:31:16.275830 kernel: audit: initializing netlink subsys (disabled)
Oct 8 19:31:16.275851 kernel: audit: type=2000 audit(0.302:1): state=initialized audit_enabled=0 res=1
Oct 8 19:31:16.275871 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 8 19:31:16.275899 kernel: cpuidle: using governor menu
Oct 8 19:31:16.275921 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 8 19:31:16.275940 kernel: ASID allocator initialised with 65536 entries
Oct 8 19:31:16.275959 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 8 19:31:16.275978 kernel: Serial: AMBA PL011 UART driver
Oct 8 19:31:16.276002 kernel: Modules: 17584 pages in range for non-PLT usage
Oct 8 19:31:16.276022 kernel: Modules: 509104 pages in range for PLT usage
Oct 8 19:31:16.276041 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 8 19:31:16.276060 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Oct 8 19:31:16.276085 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Oct 8 19:31:16.276105 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Oct 8 19:31:16.276124 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 8 19:31:16.276144 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Oct 8 19:31:16.276164 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Oct 8 19:31:16.276184 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Oct 8 19:31:16.276204 kernel: ACPI: Added _OSI(Module Device)
Oct 8 19:31:16.276224 kernel: ACPI: Added _OSI(Processor Device)
Oct 8 19:31:16.276245 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 8 19:31:16.276278 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 8 19:31:16.276298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 8 19:31:16.276318 kernel: ACPI: Interpreter enabled
Oct 8 19:31:16.276337 kernel: ACPI: Using GIC for interrupt routing
Oct 8 19:31:16.276358 kernel: ACPI: MCFG table detected, 1 entries
Oct 8 19:31:16.276377 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Oct 8 19:31:16.279231 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 8 19:31:16.283879 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 8 19:31:16.284254 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 8 19:31:16.284540 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Oct 8 19:31:16.284804 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Oct 8 19:31:16.284837 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Oct 8 19:31:16.284857 kernel: acpiphp: Slot [1] registered
Oct 8 19:31:16.284877 kernel: acpiphp: Slot [2] registered
Oct 8 19:31:16.284898 kernel: acpiphp: Slot [3] registered
Oct 8 19:31:16.284919 kernel: acpiphp: Slot [4] registered
Oct 8 19:31:16.284941 kernel: acpiphp: Slot [5] registered
Oct 8 19:31:16.284988 kernel: acpiphp: Slot [6] registered
Oct 8 19:31:16.285009 kernel: acpiphp: Slot [7] registered
Oct 8 19:31:16.285029 kernel: acpiphp: Slot [8] registered
Oct 8 19:31:16.285050 kernel: acpiphp: Slot [9] registered
Oct 8 19:31:16.285070 kernel: acpiphp: Slot [10] registered
Oct 8 19:31:16.285091 kernel: acpiphp: Slot [11] registered
Oct 8 19:31:16.285112 kernel: acpiphp: Slot [12] registered
Oct 8 19:31:16.285134 kernel: acpiphp: Slot [13] registered
Oct 8 19:31:16.285154 kernel: acpiphp: Slot [14] registered
Oct 8 19:31:16.285188 kernel: acpiphp: Slot [15] registered
Oct 8 19:31:16.285211 kernel: acpiphp: Slot [16] registered
Oct 8 19:31:16.285232 kernel: acpiphp: Slot [17] registered
Oct 8 19:31:16.285252 kernel: acpiphp: Slot [18] registered
Oct 8 19:31:16.285273 kernel: acpiphp: Slot [19] registered
Oct 8 19:31:16.285295 kernel: acpiphp: Slot [20] registered
Oct 8 19:31:16.285316 kernel: acpiphp: Slot [21] registered
Oct 8 19:31:16.285339 kernel: acpiphp: Slot [22] registered
Oct 8 19:31:16.285359 kernel: acpiphp: Slot [23] registered
Oct 8 19:31:16.285382 kernel: acpiphp: Slot [24] registered
Oct 8 19:31:16.285415 kernel: acpiphp: Slot [25] registered
Oct 8 19:31:16.285436 kernel: acpiphp: Slot [26] registered
Oct 8 19:31:16.285456 kernel: acpiphp: Slot [27] registered
Oct 8 19:31:16.285477 kernel: acpiphp: Slot [28] registered
Oct 8 19:31:16.287725 kernel: acpiphp: Slot [29] registered
Oct 8 19:31:16.287788 kernel: acpiphp: Slot [30] registered
Oct 8 19:31:16.287812 kernel: acpiphp: Slot [31] registered
Oct 8 19:31:16.287835 kernel: PCI host bridge to bus 0000:00
Oct 8 19:31:16.288239 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Oct 8 19:31:16.288578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 8 19:31:16.288903 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:31:16.289169 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Oct 8 19:31:16.291722 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Oct 8 19:31:16.292199 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Oct 8 19:31:16.292448 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Oct 8 19:31:16.292780 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Oct 8 19:31:16.293038 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Oct 8 19:31:16.293310 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:31:16.295365 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Oct 8 19:31:16.295901 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Oct 8 19:31:16.296151 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Oct 8 19:31:16.296371 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Oct 8 19:31:16.298756 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Oct 8 19:31:16.299066 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Oct 8 19:31:16.299349 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Oct 8 19:31:16.299877 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Oct 8 19:31:16.300204 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Oct 8 19:31:16.300449 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Oct 8 19:31:16.300701 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Oct 8 19:31:16.300915 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 8 19:31:16.301110 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Oct 8 19:31:16.301140 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 8 19:31:16.301160 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 8 19:31:16.301183 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 8 19:31:16.301205 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 8 19:31:16.301225 kernel: iommu: Default domain type: Translated
Oct 8 19:31:16.301246 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 8 19:31:16.301287 kernel: efivars: Registered efivars operations
Oct 8 19:31:16.301307 kernel: vgaarb: loaded
Oct 8 19:31:16.301327 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 8 19:31:16.301349 kernel: VFS: Disk quotas dquot_6.6.0
Oct 8 19:31:16.301370 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 8 19:31:16.301391 kernel: pnp: PnP ACPI init
Oct 8 19:31:16.304081 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Oct 8 19:31:16.304174 kernel: pnp: PnP ACPI: found 1 devices
Oct 8 19:31:16.304213 kernel: NET: Registered PF_INET protocol family
Oct 8 19:31:16.304233 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 8 19:31:16.304254 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 8 19:31:16.304274 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 8 19:31:16.304294 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 8 19:31:16.304316 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 8 19:31:16.304335 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 8 19:31:16.304354 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:31:16.304374 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 8 19:31:16.304398 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 8 19:31:16.304417 kernel: PCI: CLS 0 bytes, default 64
Oct 8 19:31:16.304437 kernel: kvm [1]: HYP mode not available
Oct 8 19:31:16.304457 kernel: Initialise system trusted keyrings
Oct 8 19:31:16.304477 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 8 19:31:16.304585 kernel: Key type asymmetric registered
Oct 8 19:31:16.304606 kernel: Asymmetric key parser 'x509' registered
Oct 8 19:31:16.304625 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 8 19:31:16.304645 kernel: io scheduler mq-deadline registered
Oct 8 19:31:16.304672 kernel: io scheduler kyber registered
Oct 8 19:31:16.304692 kernel: io scheduler bfq registered
Oct 8 19:31:16.304983 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Oct 8 19:31:16.305017 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 8 19:31:16.305038 kernel: ACPI: button: Power Button [PWRB]
Oct 8 19:31:16.305057 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Oct 8 19:31:16.305076 kernel: ACPI: button: Sleep Button [SLPB]
Oct 8 19:31:16.305097 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 8 19:31:16.305130 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Oct 8 19:31:16.305427 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Oct 8 19:31:16.305469 kernel: printk: console [ttyS0] disabled
Oct 8 19:31:16.307591 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Oct 8 19:31:16.307637 kernel: printk: console [ttyS0] enabled
Oct 8 19:31:16.307657 kernel: printk: bootconsole [uart0] disabled
Oct 8 19:31:16.307677 kernel: thunder_xcv, ver 1.0
Oct 8 19:31:16.307697 kernel: thunder_bgx, ver 1.0
Oct 8 19:31:16.307717 kernel: nicpf, ver 1.0
Oct 8 19:31:16.307767 kernel: nicvf, ver 1.0
Oct 8 19:31:16.308192 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 8 19:31:16.308475 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-10-08T19:31:15 UTC (1728415875)
Oct 8 19:31:16.308579 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 8 19:31:16.308601 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Oct 8 19:31:16.308624 kernel: watchdog: Delayed init of the lockup detector failed: -19
Oct 8 19:31:16.308644 kernel: watchdog: Hard watchdog permanently disabled
Oct 8 19:31:16.308664 kernel: NET: Registered PF_INET6 protocol family
Oct 8 19:31:16.308684 kernel: Segment Routing with IPv6
Oct 8 19:31:16.308723 kernel: In-situ OAM (IOAM) with IPv6
Oct 8 19:31:16.308743 kernel: NET: Registered PF_PACKET protocol family
Oct 8 19:31:16.308763 kernel: Key type dns_resolver registered
Oct 8 19:31:16.308784 kernel: registered taskstats version 1
Oct 8 19:31:16.308804 kernel: Loading compiled-in X.509 certificates
Oct 8 19:31:16.308824 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.54-flatcar: e5b54c43c129014ce5ace0e8cd7b641a0fcb136e'
Oct 8 19:31:16.308845 kernel: Key type .fscrypt registered
Oct 8 19:31:16.308865 kernel: Key type fscrypt-provisioning registered
Oct 8 19:31:16.308884 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 8 19:31:16.308914 kernel: ima: Allocated hash algorithm: sha1
Oct 8 19:31:16.308934 kernel: ima: No architecture policies found
Oct 8 19:31:16.308954 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 8 19:31:16.308974 kernel: clk: Disabling unused clocks
Oct 8 19:31:16.308994 kernel: Freeing unused kernel memory: 39104K
Oct 8 19:31:16.309014 kernel: Run /init as init process
Oct 8 19:31:16.309034 kernel: with arguments:
Oct 8 19:31:16.309054 kernel: /init
Oct 8 19:31:16.309072 kernel: with environment:
Oct 8 19:31:16.309100 kernel: HOME=/
Oct 8 19:31:16.309120 kernel: TERM=linux
Oct 8 19:31:16.309139 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 8 19:31:16.309168 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:31:16.309195 systemd[1]: Detected virtualization amazon.
Oct 8 19:31:16.309216 systemd[1]: Detected architecture arm64.
Oct 8 19:31:16.309236 systemd[1]: Running in initrd.
Oct 8 19:31:16.309256 systemd[1]: No hostname configured, using default hostname.
Oct 8 19:31:16.309285 systemd[1]: Hostname set to .
Oct 8 19:31:16.309308 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:31:16.309328 systemd[1]: Queued start job for default target initrd.target.
Oct 8 19:31:16.309352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:31:16.309375 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:31:16.309401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 8 19:31:16.309422 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:31:16.309465 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 8 19:31:16.311651 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 8 19:31:16.311767 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 8 19:31:16.311801 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 8 19:31:16.311825 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:31:16.311848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:31:16.311871 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:31:16.311911 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:31:16.311935 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:31:16.311957 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:31:16.311980 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:31:16.312003 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:31:16.312026 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 8 19:31:16.312048 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Oct 8 19:31:16.312070 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:31:16.312093 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:31:16.312131 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:31:16.312154 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:31:16.312176 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 8 19:31:16.312198 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:31:16.312220 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 8 19:31:16.312244 systemd[1]: Starting systemd-fsck-usr.service...
Oct 8 19:31:16.312266 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:31:16.312287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:31:16.312320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:31:16.312343 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 8 19:31:16.312364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:31:16.312385 systemd[1]: Finished systemd-fsck-usr.service.
Oct 8 19:31:16.312610 systemd-journald[251]: Collecting audit messages is disabled.
Oct 8 19:31:16.312707 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 8 19:31:16.312739 systemd-journald[251]: Journal started
Oct 8 19:31:16.312802 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2ae716018237f6470479d515d94e32) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:31:16.285673 systemd-modules-load[252]: Inserted module 'overlay'
Oct 8 19:31:16.322733 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:31:16.322790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 8 19:31:16.332563 kernel: Bridge firewalling registered
Oct 8 19:31:16.328009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:31:16.334606 systemd-modules-load[252]: Inserted module 'br_netfilter'
Oct 8 19:31:16.340986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:31:16.345823 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 8 19:31:16.362923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:31:16.376121 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:31:16.385333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:31:16.389808 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:31:16.423015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:31:16.437896 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:31:16.457411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:31:16.460064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:31:16.472580 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:31:16.482735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 8 19:31:16.546981 dracut-cmdline[288]: dracut-dracut-053
Oct 8 19:31:16.553681 systemd-resolved[283]: Positive Trust Anchors:
Oct 8 19:31:16.553721 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:31:16.553782 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:31:16.574085 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=c838587f25bc3913a152d0e9ed071e943b77b8dea81b67c254bbd10c29051fd2
Oct 8 19:31:16.809530 kernel: SCSI subsystem initialized
Oct 8 19:31:16.817597 kernel: Loading iSCSI transport class v2.0-870.
Oct 8 19:31:16.833401 kernel: iscsi: registered transport (tcp)
Oct 8 19:31:16.833479 kernel: random: crng init done
Oct 8 19:31:16.833891 systemd-resolved[283]: Defaulting to hostname 'linux'.
Oct 8 19:31:16.837881 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:31:16.842766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:31:16.869920 kernel: iscsi: registered transport (qla4xxx)
Oct 8 19:31:16.870045 kernel: QLogic iSCSI HBA Driver
Oct 8 19:31:16.970107 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:31:16.984901 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 8 19:31:17.022012 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 8 19:31:17.022089 kernel: device-mapper: uevent: version 1.0.3
Oct 8 19:31:17.023885 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Oct 8 19:31:17.093553 kernel: raid6: neonx8 gen() 6767 MB/s
Oct 8 19:31:17.110558 kernel: raid6: neonx4 gen() 6558 MB/s
Oct 8 19:31:17.127555 kernel: raid6: neonx2 gen() 5436 MB/s
Oct 8 19:31:17.144579 kernel: raid6: neonx1 gen() 3824 MB/s
Oct 8 19:31:17.161572 kernel: raid6: int64x8 gen() 3691 MB/s
Oct 8 19:31:17.178560 kernel: raid6: int64x4 gen() 3588 MB/s
Oct 8 19:31:17.195546 kernel: raid6: int64x2 gen() 3569 MB/s
Oct 8 19:31:17.213382 kernel: raid6: int64x1 gen() 2737 MB/s
Oct 8 19:31:17.213457 kernel: raid6: using algorithm neonx8 gen() 6767 MB/s
Oct 8 19:31:17.231384 kernel: raid6: .... xor() 4805 MB/s, rmw enabled
Oct 8 19:31:17.231524 kernel: raid6: using neon recovery algorithm
Oct 8 19:31:17.241297 kernel: xor: measuring software checksum speed
Oct 8 19:31:17.241402 kernel: 8regs : 11014 MB/sec
Oct 8 19:31:17.242526 kernel: 32regs : 10633 MB/sec
Oct 8 19:31:17.244537 kernel: arm64_neon : 8791 MB/sec
Oct 8 19:31:17.244590 kernel: xor: using function: 8regs (11014 MB/sec)
Oct 8 19:31:17.338543 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 8 19:31:17.362558 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:31:17.373823 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:31:17.416183 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Oct 8 19:31:17.425907 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:31:17.439896 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 8 19:31:17.480971 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Oct 8 19:31:17.557643 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:31:17.569077 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:31:17.738717 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:31:17.754314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 8 19:31:17.821762 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:31:17.831057 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:31:17.836157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:31:17.840913 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:31:17.857537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 8 19:31:17.907148 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:31:17.984339 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 8 19:31:17.984421 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Oct 8 19:31:18.006057 kernel: ena 0000:00:05.0: ENA device version: 0.10
Oct 8 19:31:18.006561 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Oct 8 19:31:18.016529 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:68:c3:a8:20:83
Oct 8 19:31:18.019424 (udev-worker)[537]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:31:18.025379 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:31:18.025801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:31:18.031557 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:31:18.036350 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:31:18.036783 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:31:18.041832 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:31:18.078006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:31:18.100541 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Oct 8 19:31:18.100620 kernel: nvme nvme0: pci function 0000:00:04.0
Oct 8 19:31:18.107701 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:31:18.112717 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Oct 8 19:31:18.124021 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 8 19:31:18.134333 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 8 19:31:18.134455 kernel: GPT:9289727 != 16777215
Oct 8 19:31:18.136058 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 8 19:31:18.137408 kernel: GPT:9289727 != 16777215
Oct 8 19:31:18.138622 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 8 19:31:18.139969 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:31:18.167170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:31:18.237533 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (526)
Oct 8 19:31:18.288546 kernel: BTRFS: device fsid a2a78d47-736b-4018-a518-3cfb16920575 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (515)
Oct 8 19:31:18.385146 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Oct 8 19:31:18.403854 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:31:18.445036 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Oct 8 19:31:18.461130 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Oct 8 19:31:18.463855 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Oct 8 19:31:18.486946 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 8 19:31:18.502466 disk-uuid[660]: Primary Header is updated.
Oct 8 19:31:18.502466 disk-uuid[660]: Secondary Entries is updated.
Oct 8 19:31:18.502466 disk-uuid[660]: Secondary Header is updated.
Oct 8 19:31:18.517553 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:31:18.527530 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:31:18.539531 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:31:19.546534 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Oct 8 19:31:19.546623 disk-uuid[661]: The operation has completed successfully.
Oct 8 19:31:19.786456 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 8 19:31:19.788254 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 8 19:31:19.857245 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 8 19:31:19.883889 sh[1008]: Success
Oct 8 19:31:19.913607 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 8 19:31:20.056619 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 8 19:31:20.081809 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 8 19:31:20.095223 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 8 19:31:20.125128 kernel: BTRFS info (device dm-0): first mount of filesystem a2a78d47-736b-4018-a518-3cfb16920575
Oct 8 19:31:20.125252 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:31:20.125311 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Oct 8 19:31:20.126889 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 8 19:31:20.129248 kernel: BTRFS info (device dm-0): using free space tree
Oct 8 19:31:20.198586 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Oct 8 19:31:20.210102 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 8 19:31:20.215021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 8 19:31:20.223811 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 8 19:31:20.238675 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 8 19:31:20.287243 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:31:20.287344 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:31:20.288598 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:31:20.297572 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:31:20.318715 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 8 19:31:20.321680 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:31:20.333819 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 8 19:31:20.345877 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 8 19:31:20.442094 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:31:20.460802 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:31:20.519045 systemd-networkd[1201]: lo: Link UP
Oct 8 19:31:20.519838 systemd-networkd[1201]: lo: Gained carrier
Oct 8 19:31:20.522884 systemd-networkd[1201]: Enumeration completed
Oct 8 19:31:20.523781 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:31:20.525669 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:31:20.525676 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:31:20.530965 systemd[1]: Reached target network.target - Network.
Oct 8 19:31:20.535261 systemd-networkd[1201]: eth0: Link UP
Oct 8 19:31:20.535269 systemd-networkd[1201]: eth0: Gained carrier
Oct 8 19:31:20.535287 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:31:20.569645 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.19.2/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:31:20.760146 ignition[1133]: Ignition 2.18.0
Oct 8 19:31:20.760899 ignition[1133]: Stage: fetch-offline
Oct 8 19:31:20.763893 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:20.763934 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:20.766059 ignition[1133]: Ignition finished successfully
Oct 8 19:31:20.771909 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:31:20.781850 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Oct 8 19:31:20.814388 ignition[1212]: Ignition 2.18.0
Oct 8 19:31:20.814417 ignition[1212]: Stage: fetch
Oct 8 19:31:20.816011 ignition[1212]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:20.816039 ignition[1212]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:20.816196 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:20.827430 ignition[1212]: PUT result: OK
Oct 8 19:31:20.830754 ignition[1212]: parsed url from cmdline: ""
Oct 8 19:31:20.830776 ignition[1212]: no config URL provided
Oct 8 19:31:20.830793 ignition[1212]: reading system config file "/usr/lib/ignition/user.ign"
Oct 8 19:31:20.830820 ignition[1212]: no config at "/usr/lib/ignition/user.ign"
Oct 8 19:31:20.830854 ignition[1212]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:20.832944 ignition[1212]: PUT result: OK
Oct 8 19:31:20.833025 ignition[1212]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Oct 8 19:31:20.840201 ignition[1212]: GET result: OK
Oct 8 19:31:20.843628 ignition[1212]: parsing config with SHA512: dd55ed606460a77df0304246423f3c46b7a463caf89e38102698fba0c51f9cc49d6f635af9f59b42879f508628ca6bd3d986bb1d2b6db42dcf432b9130e70e65
Oct 8 19:31:20.853046 unknown[1212]: fetched base config from "system"
Oct 8 19:31:20.854385 ignition[1212]: fetch: fetch complete
Oct 8 19:31:20.853088 unknown[1212]: fetched base config from "system"
Oct 8 19:31:20.854414 ignition[1212]: fetch: fetch passed
Oct 8 19:31:20.853111 unknown[1212]: fetched user config from "aws"
Oct 8 19:31:20.860164 ignition[1212]: Ignition finished successfully
Oct 8 19:31:20.869053 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Oct 8 19:31:20.885061 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 8 19:31:20.929245 ignition[1220]: Ignition 2.18.0
Oct 8 19:31:20.929306 ignition[1220]: Stage: kargs
Oct 8 19:31:20.930855 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:20.930884 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:20.931042 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:20.933595 ignition[1220]: PUT result: OK
Oct 8 19:31:20.945232 ignition[1220]: kargs: kargs passed
Oct 8 19:31:20.946075 ignition[1220]: Ignition finished successfully
Oct 8 19:31:20.952594 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 8 19:31:20.964173 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 8 19:31:21.004263 ignition[1227]: Ignition 2.18.0
Oct 8 19:31:21.004291 ignition[1227]: Stage: disks
Oct 8 19:31:21.005921 ignition[1227]: no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:21.005956 ignition[1227]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:21.006126 ignition[1227]: PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:21.008034 ignition[1227]: PUT result: OK
Oct 8 19:31:21.020098 ignition[1227]: disks: disks passed
Oct 8 19:31:21.021672 ignition[1227]: Ignition finished successfully
Oct 8 19:31:21.026293 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 8 19:31:21.030079 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 8 19:31:21.035678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 8 19:31:21.039293 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:31:21.047682 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:31:21.050044 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:31:21.067807 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 8 19:31:21.121377 systemd-fsck[1237]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Oct 8 19:31:21.128125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 8 19:31:21.140678 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 8 19:31:21.256558 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fbf53fb2-c32f-44fa-a235-3100e56d8882 r/w with ordered data mode. Quota mode: none.
Oct 8 19:31:21.258479 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 8 19:31:21.263632 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:31:21.274813 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:31:21.290719 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 8 19:31:21.298627 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 8 19:31:21.298867 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 8 19:31:21.298975 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:31:21.327996 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 8 19:31:21.338632 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1256)
Oct 8 19:31:21.343074 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:31:21.343155 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:31:21.343897 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 8 19:31:21.347304 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:31:21.356582 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:31:21.359633 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:31:21.798413 initrd-setup-root[1280]: cut: /sysroot/etc/passwd: No such file or directory
Oct 8 19:31:21.818316 initrd-setup-root[1287]: cut: /sysroot/etc/group: No such file or directory
Oct 8 19:31:21.829780 initrd-setup-root[1294]: cut: /sysroot/etc/shadow: No such file or directory
Oct 8 19:31:21.848950 initrd-setup-root[1301]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 8 19:31:22.182697 systemd-networkd[1201]: eth0: Gained IPv6LL
Oct 8 19:31:22.255459 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 8 19:31:22.268928 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 8 19:31:22.281159 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 8 19:31:22.304826 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:31:22.306167 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 8 19:31:22.373882 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 8 19:31:22.379138 ignition[1369]: INFO : Ignition 2.18.0
Oct 8 19:31:22.379138 ignition[1369]: INFO : Stage: mount
Oct 8 19:31:22.384303 ignition[1369]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:22.384303 ignition[1369]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:22.384303 ignition[1369]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:22.392957 ignition[1369]: INFO : PUT result: OK
Oct 8 19:31:22.398219 ignition[1369]: INFO : mount: mount passed
Oct 8 19:31:22.398219 ignition[1369]: INFO : Ignition finished successfully
Oct 8 19:31:22.403980 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 8 19:31:22.416050 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 8 19:31:22.457287 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 8 19:31:22.505534 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1381)
Oct 8 19:31:22.510261 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 95ed8f66-d8c4-4374-b329-28c20748d95f
Oct 8 19:31:22.510377 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Oct 8 19:31:22.510407 kernel: BTRFS info (device nvme0n1p6): using free space tree
Oct 8 19:31:22.517560 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Oct 8 19:31:22.522212 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 8 19:31:22.566639 ignition[1398]: INFO : Ignition 2.18.0
Oct 8 19:31:22.566639 ignition[1398]: INFO : Stage: files
Oct 8 19:31:22.569846 ignition[1398]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:22.569846 ignition[1398]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:22.569846 ignition[1398]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:22.576757 ignition[1398]: INFO : PUT result: OK
Oct 8 19:31:22.581933 ignition[1398]: DEBUG : files: compiled without relabeling support, skipping
Oct 8 19:31:22.585318 ignition[1398]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 8 19:31:22.585318 ignition[1398]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 8 19:31:22.621898 ignition[1398]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 8 19:31:22.625836 ignition[1398]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 8 19:31:22.629438 unknown[1398]: wrote ssh authorized keys file for user: core
Oct 8 19:31:22.632799 ignition[1398]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 8 19:31:22.637684 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:31:22.641956 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Oct 8 19:31:22.750025 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 8 19:31:22.915322 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Oct 8 19:31:22.919311 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 8 19:31:22.922667 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 8 19:31:22.922667 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:31:22.929597 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 8 19:31:22.929597 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:31:22.936939 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 8 19:31:22.936939 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:31:22.944013 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 8 19:31:22.948043 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:31:22.951931 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 8 19:31:22.955539 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:31:22.961626 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:31:22.961626 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:31:22.961626 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Oct 8 19:31:23.454476 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 8 19:31:23.879225 ignition[1398]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Oct 8 19:31:23.879225 ignition[1398]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 8 19:31:23.886279 ignition[1398]: INFO : files: files passed
Oct 8 19:31:23.886279 ignition[1398]: INFO : Ignition finished successfully
Oct 8 19:31:23.897531 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 8 19:31:23.940753 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 8 19:31:23.949888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 8 19:31:23.956169 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 8 19:31:23.958127 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 8 19:31:23.994984 initrd-setup-root-after-ignition[1427]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:31:23.994984 initrd-setup-root-after-ignition[1427]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:31:24.002316 initrd-setup-root-after-ignition[1431]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 8 19:31:24.008195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:31:24.012197 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 8 19:31:24.025997 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 8 19:31:24.123580 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 8 19:31:24.125789 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 8 19:31:24.131223 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 8 19:31:24.135647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 8 19:31:24.140286 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 8 19:31:24.150182 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 8 19:31:24.207067 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:31:24.230860 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 8 19:31:24.263401 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:31:24.269667 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:31:24.273784 systemd[1]: Stopped target timers.target - Timer Units.
Oct 8 19:31:24.277794 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 8 19:31:24.278370 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 8 19:31:24.291121 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 8 19:31:24.293878 systemd[1]: Stopped target basic.target - Basic System.
Oct 8 19:31:24.296354 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 8 19:31:24.303447 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 8 19:31:24.303864 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 8 19:31:24.306918 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 8 19:31:24.319138 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 8 19:31:24.322657 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 8 19:31:24.327305 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 8 19:31:24.330842 systemd[1]: Stopped target swap.target - Swaps.
Oct 8 19:31:24.334029 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 8 19:31:24.334333 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 8 19:31:24.340258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:31:24.342673 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:31:24.345184 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 8 19:31:24.345581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:31:24.348437 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 8 19:31:24.348779 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 8 19:31:24.357183 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 8 19:31:24.357845 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 8 19:31:24.364765 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 8 19:31:24.365098 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 8 19:31:24.386126 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 8 19:31:24.388172 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 8 19:31:24.388540 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:31:24.397181 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 8 19:31:24.401590 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 8 19:31:24.402168 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:31:24.410634 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 8 19:31:24.411528 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 8 19:31:24.435454 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 8 19:31:24.439729 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 8 19:31:24.460453 ignition[1451]: INFO : Ignition 2.18.0
Oct 8 19:31:24.460453 ignition[1451]: INFO : Stage: umount
Oct 8 19:31:24.465049 ignition[1451]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 8 19:31:24.465049 ignition[1451]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Oct 8 19:31:24.465049 ignition[1451]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Oct 8 19:31:24.475182 ignition[1451]: INFO : PUT result: OK
Oct 8 19:31:24.480371 ignition[1451]: INFO : umount: umount passed
Oct 8 19:31:24.480371 ignition[1451]: INFO : Ignition finished successfully
Oct 8 19:31:24.483158 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 8 19:31:24.489759 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 8 19:31:24.491636 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 8 19:31:24.499006 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 8 19:31:24.500725 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 8 19:31:24.508038 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 8 19:31:24.508195 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 8 19:31:24.510580 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 8 19:31:24.510817 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 8 19:31:24.514954 systemd[1]: ignition-fetch.service: Deactivated successfully.
Oct 8 19:31:24.515096 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Oct 8 19:31:24.521043 systemd[1]: Stopped target network.target - Network.
Oct 8 19:31:24.522917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 8 19:31:24.523141 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 8 19:31:24.526069 systemd[1]: Stopped target paths.target - Path Units.
Oct 8 19:31:24.529333 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 8 19:31:24.533634 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:31:24.544286 systemd[1]: Stopped target slices.target - Slice Units.
Oct 8 19:31:24.552949 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 8 19:31:24.555621 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 8 19:31:24.556319 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 8 19:31:24.559439 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 8 19:31:24.559552 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 8 19:31:24.569725 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 8 19:31:24.569854 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 8 19:31:24.571950 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 8 19:31:24.572072 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 8 19:31:24.574321 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 8 19:31:24.574440 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 8 19:31:24.577561 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 8 19:31:24.585107 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 8 19:31:24.594629 systemd-networkd[1201]: eth0: DHCPv6 lease lost
Oct 8 19:31:24.599105 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 8 19:31:24.599414 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 8 19:31:24.608929 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 8 19:31:24.609446 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 8 19:31:24.617290 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 8 19:31:24.617442 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:31:24.638215 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 8 19:31:24.641579 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 8 19:31:24.641738 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 8 19:31:24.646319 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 8 19:31:24.646442 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:31:24.648877 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 8 19:31:24.649014 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:31:24.651709 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 8 19:31:24.651904 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:31:24.658683 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:31:24.702662 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 8 19:31:24.704567 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:31:24.712025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 8 19:31:24.712211 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:31:24.719908 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 8 19:31:24.719996 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:31:24.722063 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 8 19:31:24.722163 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 8 19:31:24.734437 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 8 19:31:24.734627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 8 19:31:24.739933 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 8 19:31:24.740045 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 8 19:31:24.753797 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 8 19:31:24.758829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 8 19:31:24.760249 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:31:24.767373 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 8 19:31:24.767819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:31:24.778665 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 8 19:31:24.780332 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 8 19:31:24.804157 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 8 19:31:24.805008 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 8 19:31:24.814446 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 8 19:31:24.832218 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 8 19:31:24.860300 systemd[1]: Switching root.
Oct 8 19:31:24.907971 systemd-journald[251]: Journal stopped
Oct 8 19:31:27.725540 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Oct 8 19:31:27.725686 kernel: SELinux: policy capability network_peer_controls=1
Oct 8 19:31:27.725730 kernel: SELinux: policy capability open_perms=1
Oct 8 19:31:27.725768 kernel: SELinux: policy capability extended_socket_class=1
Oct 8 19:31:27.725799 kernel: SELinux: policy capability always_check_network=0
Oct 8 19:31:27.725838 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 8 19:31:27.725869 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 8 19:31:27.725899 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 8 19:31:27.725928 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 8 19:31:27.725959 kernel: audit: type=1403 audit(1728415885.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 8 19:31:27.725991 systemd[1]: Successfully loaded SELinux policy in 73.892ms.
Oct 8 19:31:27.726040 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.036ms.
Oct 8 19:31:27.726075 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Oct 8 19:31:27.726106 systemd[1]: Detected virtualization amazon.
Oct 8 19:31:27.726138 systemd[1]: Detected architecture arm64.
Oct 8 19:31:27.726169 systemd[1]: Detected first boot.
Oct 8 19:31:27.726203 systemd[1]: Initializing machine ID from VM UUID.
Oct 8 19:31:27.726234 zram_generator::config[1494]: No configuration found.
Oct 8 19:31:27.726270 systemd[1]: Populated /etc with preset unit settings.
Oct 8 19:31:27.726302 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 8 19:31:27.726337 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 8 19:31:27.726371 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 8 19:31:27.726408 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 8 19:31:27.726441 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 8 19:31:27.726475 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 8 19:31:27.735595 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 8 19:31:27.735632 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 8 19:31:27.735666 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 8 19:31:27.735705 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 8 19:31:27.735737 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 8 19:31:27.735786 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 8 19:31:27.735820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 8 19:31:27.735851 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 8 19:31:27.735883 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 8 19:31:27.735918 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 8 19:31:27.735952 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 8 19:31:27.735982 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 8 19:31:27.736020 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 8 19:31:27.736052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 8 19:31:27.736082 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 8 19:31:27.736115 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 8 19:31:27.736151 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 8 19:31:27.736183 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 8 19:31:27.736219 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 8 19:31:27.736252 systemd[1]: Reached target slices.target - Slice Units.
Oct 8 19:31:27.736287 systemd[1]: Reached target swap.target - Swaps.
Oct 8 19:31:27.736317 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 8 19:31:27.736362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 8 19:31:27.736393 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 8 19:31:27.736422 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 8 19:31:27.736452 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 8 19:31:27.736500 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 8 19:31:27.736535 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 8 19:31:27.736569 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 8 19:31:27.736604 systemd[1]: Mounting media.mount - External Media Directory...
Oct 8 19:31:27.736634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 8 19:31:27.736669 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 8 19:31:27.736701 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 8 19:31:27.736731 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 8 19:31:27.736763 systemd[1]: Reached target machines.target - Containers.
Oct 8 19:31:27.736793 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 8 19:31:27.736822 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:31:27.736856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 8 19:31:27.736888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 8 19:31:27.736918 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:31:27.736947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:31:27.736977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:31:27.737006 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 8 19:31:27.737036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:31:27.737066 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 8 19:31:27.737099 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 8 19:31:27.737133 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 8 19:31:27.737163 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 8 19:31:27.737192 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 8 19:31:27.737224 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 8 19:31:27.737253 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 8 19:31:27.737285 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 8 19:31:27.737317 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 8 19:31:27.737347 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 8 19:31:27.737379 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 8 19:31:27.737413 systemd[1]: Stopped verity-setup.service.
Oct 8 19:31:27.737443 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 8 19:31:27.737472 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 8 19:31:27.741582 systemd[1]: Mounted media.mount - External Media Directory.
Oct 8 19:31:27.741626 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 8 19:31:27.741655 kernel: loop: module loaded
Oct 8 19:31:27.741685 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 8 19:31:27.741719 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 8 19:31:27.741750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 8 19:31:27.741790 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 8 19:31:27.741820 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 8 19:31:27.741852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:31:27.741882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:31:27.741912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:31:27.741950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:31:27.741983 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:31:27.742014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:31:27.742043 kernel: fuse: init (API version 7.39)
Oct 8 19:31:27.742076 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 8 19:31:27.742161 systemd-journald[1575]: Collecting audit messages is disabled.
Oct 8 19:31:27.742230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 8 19:31:27.742268 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 8 19:31:27.742299 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 8 19:31:27.742334 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 8 19:31:27.742365 systemd-journald[1575]: Journal started
Oct 8 19:31:27.742423 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec2ae716018237f6470479d515d94e32) is 8.0M, max 75.3M, 67.3M free.
Oct 8 19:31:26.958967 systemd[1]: Queued start job for default target multi-user.target.
Oct 8 19:31:27.023966 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Oct 8 19:31:27.025090 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 8 19:31:27.748423 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 8 19:31:27.771672 kernel: ACPI: bus type drm_connector registered
Oct 8 19:31:27.773672 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:31:27.774220 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:31:27.790231 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 8 19:31:27.805820 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 8 19:31:27.820790 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 8 19:31:27.826742 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 8 19:31:27.826821 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 8 19:31:27.834514 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 8 19:31:27.845894 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 8 19:31:27.865691 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 8 19:31:27.868190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:31:27.874002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 8 19:31:27.894060 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 8 19:31:27.896838 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:31:27.905282 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 8 19:31:27.910223 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:31:27.921304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 8 19:31:27.932017 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 8 19:31:27.937547 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 8 19:31:27.940749 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 8 19:31:27.944301 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 8 19:31:27.947525 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 8 19:31:27.979002 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 8 19:31:28.028402 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec2ae716018237f6470479d515d94e32 is 74.285ms for 906 entries.
Oct 8 19:31:28.028402 systemd-journald[1575]: System Journal (/var/log/journal/ec2ae716018237f6470479d515d94e32) is 8.0M, max 195.6M, 187.6M free.
Oct 8 19:31:28.133218 systemd-journald[1575]: Received client request to flush runtime journal.
Oct 8 19:31:28.133328 kernel: loop0: detected capacity change from 0 to 59688
Oct 8 19:31:28.133365 kernel: block loop0: the capability attribute has been deprecated.
Oct 8 19:31:28.057254 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 8 19:31:28.077959 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 8 19:31:28.091620 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 8 19:31:28.095090 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 8 19:31:28.110884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 8 19:31:28.141269 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 8 19:31:28.166069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 8 19:31:28.222834 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 8 19:31:28.235480 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 8 19:31:28.250793 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 8 19:31:28.266322 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 8 19:31:28.270743 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 8 19:31:28.278654 udevadm[1630]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Oct 8 19:31:28.286611 kernel: loop1: detected capacity change from 0 to 189592
Oct 8 19:31:28.399036 kernel: loop2: detected capacity change from 0 to 51896
Oct 8 19:31:28.409644 systemd-tmpfiles[1640]: ACLs are not supported, ignoring.
Oct 8 19:31:28.409701 systemd-tmpfiles[1640]: ACLs are not supported, ignoring.
Oct 8 19:31:28.455620 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 8 19:31:28.483936 kernel: loop3: detected capacity change from 0 to 113672
Oct 8 19:31:28.587080 kernel: loop4: detected capacity change from 0 to 59688
Oct 8 19:31:28.614828 kernel: loop5: detected capacity change from 0 to 189592
Oct 8 19:31:28.662349 kernel: loop6: detected capacity change from 0 to 51896
Oct 8 19:31:28.682556 kernel: loop7: detected capacity change from 0 to 113672
Oct 8 19:31:28.703196 (sd-merge)[1647]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Oct 8 19:31:28.706565 (sd-merge)[1647]: Merged extensions into '/usr'.
Oct 8 19:31:28.722934 systemd[1]: Reloading requested from client PID 1621 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 8 19:31:28.722986 systemd[1]: Reloading...
Oct 8 19:31:28.913600 zram_generator::config[1668]: No configuration found.
Oct 8 19:31:29.262136 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:31:29.394166 systemd[1]: Reloading finished in 669 ms.
Oct 8 19:31:29.439554 ldconfig[1616]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 8 19:31:29.447580 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 8 19:31:29.463980 systemd[1]: Starting ensure-sysext.service...
Oct 8 19:31:29.474584 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Oct 8 19:31:29.516616 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 8 19:31:29.517429 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 8 19:31:29.519852 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 8 19:31:29.520454 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Oct 8 19:31:29.520642 systemd-tmpfiles[1723]: ACLs are not supported, ignoring.
Oct 8 19:31:29.567778 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)...
Oct 8 19:31:29.568130 systemd[1]: Reloading...
Oct 8 19:31:29.570218 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:31:29.570242 systemd-tmpfiles[1723]: Skipping /boot
Oct 8 19:31:29.623981 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot.
Oct 8 19:31:29.624014 systemd-tmpfiles[1723]: Skipping /boot
Oct 8 19:31:29.801558 zram_generator::config[1758]: No configuration found.
Oct 8 19:31:30.075640 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 8 19:31:30.199093 systemd[1]: Reloading finished in 629 ms.
Oct 8 19:31:30.227860 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 8 19:31:30.230945 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 8 19:31:30.240676 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Oct 8 19:31:30.269228 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 8 19:31:30.289456 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 8 19:31:30.298181 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 8 19:31:30.311402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 8 19:31:30.319016 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 8 19:31:30.327958 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 8 19:31:30.339896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:31:30.355308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 8 19:31:30.375675 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 8 19:31:30.381983 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 8 19:31:30.384932 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:31:30.391706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:31:30.392309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:31:30.406541 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 8 19:31:30.416245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 8 19:31:30.418694 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 8 19:31:30.419164 systemd[1]: Reached target time-set.target - System Time Set.
Oct 8 19:31:30.437685 systemd[1]: Finished ensure-sysext.service.
Oct 8 19:31:30.445535 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 8 19:31:30.471126 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 8 19:31:30.477246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 8 19:31:30.477720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 8 19:31:30.515556 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 8 19:31:30.523288 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 8 19:31:30.525801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 8 19:31:30.529159 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 8 19:31:30.546233 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 8 19:31:30.570400 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 8 19:31:30.572263 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 8 19:31:30.601284 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 8 19:31:30.603111 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 8 19:31:30.610165 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 8 19:31:30.622429 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 8 19:31:30.625800 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 8 19:31:30.649102 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 8 19:31:30.670241 systemd-udevd[1810]: Using default interface naming scheme 'v255'.
Oct 8 19:31:30.673447 augenrules[1840]: No rules
Oct 8 19:31:30.677590 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 8 19:31:30.704192 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 8 19:31:30.744000 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 8 19:31:30.763283 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 8 19:31:30.910903 systemd-networkd[1854]: lo: Link UP
Oct 8 19:31:30.911710 systemd-networkd[1854]: lo: Gained carrier
Oct 8 19:31:30.912978 systemd-networkd[1854]: Enumeration completed
Oct 8 19:31:30.913549 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 8 19:31:30.926324 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 8 19:31:30.962192 systemd-resolved[1809]: Positive Trust Anchors:
Oct 8 19:31:30.962808 systemd-resolved[1809]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 8 19:31:30.962886 systemd-resolved[1809]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Oct 8 19:31:30.975458 systemd-resolved[1809]: Defaulting to hostname 'linux'.
Oct 8 19:31:30.979421 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 8 19:31:30.981990 systemd[1]: Reached target network.target - Network.
Oct 8 19:31:30.983802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 8 19:31:31.013254 (udev-worker)[1860]: Network interface NamePolicy= disabled on kernel command line.
Oct 8 19:31:31.025670 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 8 19:31:31.065686 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1859)
Oct 8 19:31:31.099401 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:31:31.101138 systemd-networkd[1854]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 8 19:31:31.106761 systemd-networkd[1854]: eth0: Link UP
Oct 8 19:31:31.107733 systemd-networkd[1854]: eth0: Gained carrier
Oct 8 19:31:31.108355 systemd-networkd[1854]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 8 19:31:31.127779 systemd-networkd[1854]: eth0: DHCPv4 address 172.31.19.2/20, gateway 172.31.16.1 acquired from 172.31.16.1
Oct 8 19:31:31.217557 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1865)
Oct 8 19:31:31.448208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 8 19:31:31.515195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Oct 8 19:31:31.519684 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 8 19:31:31.533682 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 8 19:31:31.546237 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 8 19:31:31.585999 lvm[1971]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:31:31.619366 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 8 19:31:31.634273 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 8 19:31:31.637877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 8 19:31:31.652336 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 8 19:31:31.662372 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 8 19:31:31.667026 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 8 19:31:31.669684 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 8 19:31:31.672283 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 8 19:31:31.675328 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 8 19:31:31.677915 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 8 19:31:31.681283 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 8 19:31:31.698117 lvm[1977]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 8 19:31:31.696137 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 8 19:31:31.696225 systemd[1]: Reached target paths.target - Path Units.
Oct 8 19:31:31.699333 systemd[1]: Reached target timers.target - Timer Units.
Oct 8 19:31:31.704453 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 8 19:31:31.711057 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 8 19:31:31.725302 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 8 19:31:31.730116 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 8 19:31:31.733347 systemd[1]: Reached target sockets.target - Socket Units.
Oct 8 19:31:31.736294 systemd[1]: Reached target basic.target - Basic System.
Oct 8 19:31:31.739392 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:31:31.739517 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 8 19:31:31.758021 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 8 19:31:31.775360 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 8 19:31:31.791625 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 8 19:31:31.805210 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 8 19:31:31.822422 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 8 19:31:31.825925 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 8 19:31:31.836958 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 8 19:31:31.843908 systemd[1]: Started ntpd.service - Network Time Service.
Oct 8 19:31:31.854784 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 8 19:31:31.865907 systemd[1]: Starting setup-oem.service - Setup OEM...
Oct 8 19:31:31.876071 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 8 19:31:31.885283 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 8 19:31:31.902864 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 8 19:31:31.909157 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 8 19:31:31.910762 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 8 19:31:31.915012 systemd[1]: Starting update-engine.service - Update Engine...
Oct 8 19:31:31.928464 jq[1985]: false
Oct 8 19:31:31.944916 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 8 19:31:31.952113 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 8 19:31:31.963354 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 8 19:31:31.964142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 8 19:31:31.999160 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:46:09 UTC 2024 (1): Starting
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: ntpd 4.2.8p17@1.4004-o Tue Oct 8 17:46:09 UTC 2024 (1): Starting
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: ----------------------------------------------------
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: corporation. Support and training for ntp-4 are
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: available at https://www.nwtime.org/support
Oct 8 19:31:32.008797 ntpd[1990]: 8 Oct 19:31:31 ntpd[1990]: ----------------------------------------------------
Oct 8 19:31:31.999246 ntpd[1990]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Oct 8 19:31:31.999269 ntpd[1990]: ----------------------------------------------------
Oct 8 19:31:32.018853 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: proto: precision = 0.108 usec (-23)
Oct 8 19:31:32.018853 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: basedate set to 2024-09-26
Oct 8 19:31:32.018853 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:31:31.999290 ntpd[1990]: ntp-4 is maintained by Network Time Foundation,
Oct 8 19:31:31.999310 ntpd[1990]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Oct 8 19:31:31.999336 ntpd[1990]: corporation. Support and training for ntp-4 are
Oct 8 19:31:31.999357 ntpd[1990]: available at https://www.nwtime.org/support
Oct 8 19:31:31.999375 ntpd[1990]: ----------------------------------------------------
Oct 8 19:31:32.011277 ntpd[1990]: proto: precision = 0.108 usec (-23)
Oct 8 19:31:32.014917 ntpd[1990]: basedate set to 2024-09-26
Oct 8 19:31:32.014962 ntpd[1990]: gps base set to 2024-09-29 (week 2334)
Oct 8 19:31:32.025901 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:31:32.028747 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listen and drop on 0 v6wildcard [::]:123
Oct 8 19:31:32.030153 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listen normally on 3 eth0 172.31.19.2:123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listen normally on 4 lo [::1]:123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: bind(21) AF_INET6 fe80::468:c3ff:fea8:2083%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: unable to create socket on eth0 (5) for fe80::468:c3ff:fea8:2083%2#123
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: failed to init interface for address fe80::468:c3ff:fea8:2083%2
Oct 8 19:31:32.031805 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: Listening on routing socket on fd #21 for interface updates
Oct 8 19:31:32.030481 ntpd[1990]: Listen normally on 2 lo 127.0.0.1:123
Oct 8 19:31:32.030629 ntpd[1990]: Listen normally on 3 eth0 172.31.19.2:123
Oct 8 19:31:32.030724 ntpd[1990]: Listen normally on 4 lo [::1]:123
Oct 8 19:31:32.030859 ntpd[1990]: bind(21) AF_INET6 fe80::468:c3ff:fea8:2083%2#123 flags 0x11 failed: Cannot assign requested address
Oct 8 19:31:32.030916 ntpd[1990]: unable to create socket on eth0 (5) for fe80::468:c3ff:fea8:2083%2#123
Oct 8 19:31:32.030949 ntpd[1990]: failed to init interface for address fe80::468:c3ff:fea8:2083%2
Oct 8 19:31:32.031028 ntpd[1990]: Listening on routing socket on fd #21 for interface updates
Oct 8 19:31:32.038362 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 8 19:31:32.042372 jq[1998]: true
Oct 8 19:31:32.038877 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 8 19:31:32.056612 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:31:32.057835 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:31:32.057835 ntpd[1990]: 8 Oct 19:31:32 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:31:32.056691 ntpd[1990]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Oct 8 19:31:32.103268 update_engine[1997]: I1008 19:31:32.103117 1997 main.cc:92] Flatcar Update Engine starting
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found loop4
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found loop5
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found loop6
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found loop7
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p1
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p2
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p3
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found usr
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p4
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p6
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p7
Oct 8 19:31:32.133973 extend-filesystems[1986]: Found nvme0n1p9
Oct 8 19:31:32.133973 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9
Oct 8 19:31:32.121290 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 8 19:31:32.153695 dbus-daemon[1984]: [system] SELinux support is enabled
Oct 8 19:31:32.261846 update_engine[1997]: I1008 19:31:32.197162 1997 update_check_scheduler.cc:74] Next update check in 2m50s
Oct 8 19:31:32.262020 jq[2011]: true
Oct 8 19:31:32.166728 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 8 19:31:32.176928 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1854 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 8 19:31:32.283827 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9
Oct 8 19:31:32.192551 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 8 19:31:32.201371 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 8 19:31:32.293226 extend-filesystems[2030]: resize2fs 1.47.0 (5-Feb-2023)
Oct 8 19:31:32.192700 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 8 19:31:32.204872 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 8 19:31:32.204924 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 8 19:31:32.321943 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Oct 8 19:31:32.245044 systemd[1]: Started update-engine.service - Update Engine.
Oct 8 19:31:32.261179 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Oct 8 19:31:32.297008 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 8 19:31:32.306019 (ntainerd)[2024]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 8 19:31:32.316729 systemd[1]: motdgen.service: Deactivated successfully.
Oct 8 19:31:32.319598 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 8 19:31:32.338444 tar[2007]: linux-arm64/helm
Oct 8 19:31:32.497544 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Oct 8 19:31:32.528622 extend-filesystems[2030]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Oct 8 19:31:32.528622 extend-filesystems[2030]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 8 19:31:32.528622 extend-filesystems[2030]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Oct 8 19:31:32.540753 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9
Oct 8 19:31:32.539885 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 8 19:31:32.566069 bash[2054]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:31:32.543233 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 8 19:31:32.555616 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 8 19:31:32.561427 systemd[1]: Finished setup-oem.service - Setup OEM.
Oct 8 19:31:32.592912 coreos-metadata[1983]: Oct 08 19:31:32.574 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:31:32.589955 systemd[1]: Starting sshkeys.service...
Oct 8 19:31:32.599944 coreos-metadata[1983]: Oct 08 19:31:32.595 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Oct 8 19:31:32.608435 coreos-metadata[1983]: Oct 08 19:31:32.605 INFO Fetch successful
Oct 8 19:31:32.608435 coreos-metadata[1983]: Oct 08 19:31:32.605 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Oct 8 19:31:32.616899 coreos-metadata[1983]: Oct 08 19:31:32.613 INFO Fetch successful
Oct 8 19:31:32.616899 coreos-metadata[1983]: Oct 08 19:31:32.613 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Oct 8 19:31:32.619662 systemd-networkd[1854]: eth0: Gained IPv6LL
Oct 8 19:31:32.629896 coreos-metadata[1983]: Oct 08 19:31:32.621 INFO Fetch successful
Oct 8 19:31:32.629896 coreos-metadata[1983]: Oct 08 19:31:32.621 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Oct 8 19:31:32.629896 coreos-metadata[1983]: Oct 08 19:31:32.628 INFO Fetch successful
Oct 8 19:31:32.629896 coreos-metadata[1983]: Oct 08 19:31:32.629 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Oct 8 19:31:32.633536 coreos-metadata[1983]: Oct 08 19:31:32.631 INFO Fetch failed with 404: resource not found
Oct 8 19:31:32.639560 coreos-metadata[1983]: Oct 08 19:31:32.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Oct 8 19:31:32.642390 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1865)
Oct 8 19:31:32.647424 coreos-metadata[1983]: Oct 08 19:31:32.644 INFO Fetch successful
Oct 8 19:31:32.647424 coreos-metadata[1983]: Oct 08 19:31:32.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Oct 8 19:31:32.648620 systemd-logind[1996]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 8 19:31:32.648676 systemd-logind[1996]: Watching system buttons on /dev/input/event1 (Sleep Button)
Oct 8 19:31:32.659889 coreos-metadata[1983]: Oct 08 19:31:32.651 INFO Fetch successful
Oct 8 19:31:32.659889 coreos-metadata[1983]: Oct 08 19:31:32.651 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Oct 8 19:31:32.654991 systemd-logind[1996]: New seat seat0.
Oct 8 19:31:32.667175 coreos-metadata[1983]: Oct 08 19:31:32.666 INFO Fetch successful
Oct 8 19:31:32.667175 coreos-metadata[1983]: Oct 08 19:31:32.666 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Oct 8 19:31:32.668908 coreos-metadata[1983]: Oct 08 19:31:32.668 INFO Fetch successful
Oct 8 19:31:32.668908 coreos-metadata[1983]: Oct 08 19:31:32.668 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Oct 8 19:31:32.675858 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 8 19:31:32.679768 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 8 19:31:32.686983 systemd[1]: Reached target network-online.target - Network is Online.
Oct 8 19:31:32.695136 coreos-metadata[1983]: Oct 08 19:31:32.684 INFO Fetch successful
Oct 8 19:31:32.713046 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Oct 8 19:31:32.771959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 8 19:31:32.786739 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 8 19:31:32.880706 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 8 19:31:32.892261 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 8 19:31:33.052661 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 8 19:31:33.057233 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 8 19:31:33.118980 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 8 19:31:33.155380 amazon-ssm-agent[2076]: Initializing new seelog logger
Oct 8 19:31:33.155380 amazon-ssm-agent[2076]: New Seelog Logger Creation Complete
Oct 8 19:31:33.155380 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.155380 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 processing appconfig overrides
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 processing appconfig overrides
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.167529 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 processing appconfig overrides
Oct 8 19:31:33.172628 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO Proxy environment variables:
Oct 8 19:31:33.200196 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.203996 amazon-ssm-agent[2076]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Oct 8 19:31:33.203996 amazon-ssm-agent[2076]: 2024/10/08 19:31:33 processing appconfig overrides
Oct 8 19:31:33.210752 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 8 19:31:33.211032 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Oct 8 19:31:33.223382 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2029 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 8 19:31:33.260464 systemd[1]: Starting polkit.service - Authorization Manager...
Oct 8 19:31:33.279277 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO https_proxy:
Oct 8 19:31:33.365454 coreos-metadata[2101]: Oct 08 19:31:33.361 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Oct 8 19:31:33.368768 polkitd[2162]: Started polkitd version 121
Oct 8 19:31:33.371842 coreos-metadata[2101]: Oct 08 19:31:33.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Oct 8 19:31:33.380595 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO http_proxy:
Oct 8 19:31:33.381387 coreos-metadata[2101]: Oct 08 19:31:33.381 INFO Fetch successful
Oct 8 19:31:33.381387 coreos-metadata[2101]: Oct 08 19:31:33.381 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 8 19:31:33.386654 coreos-metadata[2101]: Oct 08 19:31:33.382 INFO Fetch successful
Oct 8 19:31:33.389213 locksmithd[2032]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 8 19:31:33.393977 unknown[2101]: wrote ssh authorized keys file for user: core
Oct 8 19:31:33.395107 polkitd[2162]: Loading rules from directory /etc/polkit-1/rules.d
Oct 8 19:31:33.395261 polkitd[2162]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 8 19:31:33.398858 polkitd[2162]: Finished loading, compiling and executing 2 rules
Oct 8 19:31:33.404906 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 8 19:31:33.405279 systemd[1]: Started polkit.service - Authorization Manager.
Oct 8 19:31:33.407464 polkitd[2162]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 8 19:31:33.482241 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO no_proxy:
Oct 8 19:31:33.585409 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO Checking if agent identity type OnPrem can be assumed
Oct 8 19:31:33.583147 systemd-hostnamed[2029]: Hostname set to (transient)
Oct 8 19:31:33.583309 systemd-resolved[1809]: System hostname changed to 'ip-172-31-19-2'.
Oct 8 19:31:33.601698 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys"
Oct 8 19:31:33.600953 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 8 19:31:33.614567 systemd[1]: Finished sshkeys.service.
Oct 8 19:31:33.684602 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO Checking if agent identity type EC2 can be assumed
Oct 8 19:31:33.783944 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO Agent will take identity from EC2
Oct 8 19:31:33.828684 containerd[2024]: time="2024-10-08T19:31:33.828272739Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Oct 8 19:31:33.886754 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:31:33.982611 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC
Oct 8 19:31:33.983012 containerd[2024]: time="2024-10-08T19:31:33.982956867Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 8 19:31:33.984748 containerd[2024]: time="2024-10-08T19:31:33.984684231Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.994286379Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.54-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.994365627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.994929639Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.994979583Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.995198799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.995330511Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:31:33.997559 containerd[2024]: time="2024-10-08T19:31:33.995358447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:33.998284 containerd[2024]: time="2024-10-08T19:31:33.998221047Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:33.999411 containerd[2024]: time="2024-10-08T19:31:33.999350559Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:34.001691 containerd[2024]: time="2024-10-08T19:31:34.001551023Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Oct 8 19:31:34.001691 containerd[2024]: time="2024-10-08T19:31:34.001609547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 8 19:31:34.003882 containerd[2024]: time="2024-10-08T19:31:34.002176391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 8 19:31:34.003882 containerd[2024]: time="2024-10-08T19:31:34.002237111Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 8 19:31:34.004141 containerd[2024]: time="2024-10-08T19:31:34.004090559Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Oct 8 19:31:34.004246 containerd[2024]: time="2024-10-08T19:31:34.004218839Z" level=info msg="metadata content store policy set" policy=shared
Oct 8 19:31:34.015301 containerd[2024]: time="2024-10-08T19:31:34.015055175Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 8 19:31:34.015301 containerd[2024]: time="2024-10-08T19:31:34.015180335Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 8 19:31:34.015301 containerd[2024]: time="2024-10-08T19:31:34.015224015Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 8 19:31:34.015661 containerd[2024]: time="2024-10-08T19:31:34.015617135Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 8 19:31:34.015921 containerd[2024]: time="2024-10-08T19:31:34.015887231Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 8 19:31:34.016047 containerd[2024]: time="2024-10-08T19:31:34.016018055Z" level=info msg="NRI interface is disabled by configuration."
Oct 8 19:31:34.016174 containerd[2024]: time="2024-10-08T19:31:34.016145207Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 8 19:31:34.016691 containerd[2024]: time="2024-10-08T19:31:34.016644743Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.017580359Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.017647427Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019669175Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019747259Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019834355Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019873211Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019919471Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.019977047Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.020018735Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.020055887Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.020564 containerd[2024]: time="2024-10-08T19:31:34.020090279Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 8 19:31:34.031468 containerd[2024]: time="2024-10-08T19:31:34.030178152Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 8 19:31:34.031468 containerd[2024]: time="2024-10-08T19:31:34.031175508Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 8 19:31:34.031468 containerd[2024]: time="2024-10-08T19:31:34.031267752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.031468 containerd[2024]: time="2024-10-08T19:31:34.031316460Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 8 19:31:34.031468 containerd[2024]: time="2024-10-08T19:31:34.031388808Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 8 19:31:34.038929 containerd[2024]: time="2024-10-08T19:31:34.038092728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.041975472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042106224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042164280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042209220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042251508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042294396Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042334104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.044532 containerd[2024]: time="2024-10-08T19:31:34.042384684Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 8 19:31:34.047034 containerd[2024]: time="2024-10-08T19:31:34.046971864Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049575516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049639968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049676088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049712880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049753248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049784436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.051517 containerd[2024]: time="2024-10-08T19:31:34.049812996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 8 19:31:34.052006 containerd[2024]: time="2024-10-08T19:31:34.050285904Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 8 19:31:34.054632 containerd[2024]: time="2024-10-08T19:31:34.050478792Z" level=info msg="Connect containerd service"
Oct 8 19:31:34.054632 containerd[2024]: time="2024-10-08T19:31:34.053606784Z" level=info msg="using legacy CRI server"
Oct 8 19:31:34.054632 containerd[2024]: time="2024-10-08T19:31:34.053634132Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 8 19:31:34.054632 containerd[2024]: time="2024-10-08T19:31:34.053841516Z" level=info msg="Get image 
filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 8 19:31:34.059900 containerd[2024]: time="2024-10-08T19:31:34.059201700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 8 19:31:34.060349 containerd[2024]: time="2024-10-08T19:31:34.060249768Z" level=info msg="Start subscribing containerd event" Oct 8 19:31:34.061166 containerd[2024]: time="2024-10-08T19:31:34.060597168Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 8 19:31:34.061385 containerd[2024]: time="2024-10-08T19:31:34.061347960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.063250488Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.063305748Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.061464072Z" level=info msg="Start recovering state" Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.063964620Z" level=info msg="Start event monitor" Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.064000908Z" level=info msg="Start snapshots syncer" Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.064029012Z" level=info msg="Start cni network conf syncer for default" Oct 8 19:31:34.065686 containerd[2024]: time="2024-10-08T19:31:34.064051764Z" level=info msg="Start streaming server" Oct 8 19:31:34.066865 containerd[2024]: time="2024-10-08T19:31:34.066632388Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 8 19:31:34.072710 containerd[2024]: time="2024-10-08T19:31:34.069905796Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 8 19:31:34.077251 systemd[1]: Started containerd.service - containerd container runtime. Oct 8 19:31:34.082165 containerd[2024]: time="2024-10-08T19:31:34.080081652Z" level=info msg="containerd successfully booted in 0.264228s" Oct 8 19:31:34.085544 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Oct 8 19:31:34.190072 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Oct 8 19:31:34.292536 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] Starting Core Agent Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [Registrar] Starting registrar module Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:34 INFO [EC2Identity] EC2 registration was successful. Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:34 INFO [CredentialRefresher] credentialRefresher has started Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:34 INFO [CredentialRefresher] Starting credentials refresher loop Oct 8 19:31:34.321783 amazon-ssm-agent[2076]: 2024-10-08 19:31:34 INFO EC2RoleProvider Successfully connected with instance profile role credentials Oct 8 19:31:34.391243 amazon-ssm-agent[2076]: 2024-10-08 19:31:34 INFO [CredentialRefresher] Next credential rotation will be in 32.38330505813333 minutes Oct 8 19:31:34.467227 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 8 19:31:34.576272 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 8 19:31:34.595875 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 8 19:31:34.608509 systemd[1]: Started sshd@0-172.31.19.2:22-139.178.68.195:41384.service - OpenSSH per-connection server daemon (139.178.68.195:41384). Oct 8 19:31:34.649474 systemd[1]: issuegen.service: Deactivated successfully. Oct 8 19:31:34.652702 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 8 19:31:34.669134 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 8 19:31:34.704574 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 8 19:31:34.718078 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 8 19:31:34.729215 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 8 19:31:34.732146 systemd[1]: Reached target getty.target - Login Prompts. 
Oct 8 19:31:34.801783 tar[2007]: linux-arm64/LICENSE Oct 8 19:31:34.802751 tar[2007]: linux-arm64/README.md Oct 8 19:31:34.839941 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 8 19:31:34.904121 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 41384 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:34.909364 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:34.944597 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 8 19:31:34.969082 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 8 19:31:34.988760 systemd-logind[1996]: New session 1 of user core. Oct 8 19:31:35.000699 ntpd[1990]: Listen normally on 6 eth0 [fe80::468:c3ff:fea8:2083%2]:123 Oct 8 19:31:35.003271 ntpd[1990]: 8 Oct 19:31:35 ntpd[1990]: Listen normally on 6 eth0 [fe80::468:c3ff:fea8:2083%2]:123 Oct 8 19:31:35.027615 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 8 19:31:35.048946 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 8 19:31:35.079882 (systemd)[2233]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:35.369976 systemd[2233]: Queued start job for default target default.target. Oct 8 19:31:35.378447 systemd[2233]: Created slice app.slice - User Application Slice. Oct 8 19:31:35.381794 amazon-ssm-agent[2076]: 2024-10-08 19:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Oct 8 19:31:35.381553 systemd[2233]: Reached target paths.target - Paths. Oct 8 19:31:35.381655 systemd[2233]: Reached target timers.target - Timers. Oct 8 19:31:35.395099 systemd[2233]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 8 19:31:35.461278 systemd[2233]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Oct 8 19:31:35.461617 systemd[2233]: Reached target sockets.target - Sockets. Oct 8 19:31:35.461653 systemd[2233]: Reached target basic.target - Basic System. Oct 8 19:31:35.461743 systemd[2233]: Reached target default.target - Main User Target. Oct 8 19:31:35.461808 systemd[2233]: Startup finished in 362ms. Oct 8 19:31:35.463523 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 8 19:31:35.475029 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 8 19:31:35.482231 amazon-ssm-agent[2076]: 2024-10-08 19:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2240) started Oct 8 19:31:35.509124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:31:35.516615 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 8 19:31:35.524698 systemd[1]: Startup finished in 1.287s (kernel) + 9.727s (initrd) + 10.038s (userspace) = 21.053s. Oct 8 19:31:35.529141 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:31:35.583092 amazon-ssm-agent[2076]: 2024-10-08 19:31:35 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Oct 8 19:31:35.661288 systemd[1]: Started sshd@1-172.31.19.2:22-139.178.68.195:47368.service - OpenSSH per-connection server daemon (139.178.68.195:47368). Oct 8 19:31:35.887560 sshd[2261]: Accepted publickey for core from 139.178.68.195 port 47368 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:35.892036 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:35.903673 systemd-logind[1996]: New session 2 of user core. Oct 8 19:31:35.915188 systemd[1]: Started session-2.scope - Session 2 of User core. 
Oct 8 19:31:36.060127 sshd[2261]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:36.068583 systemd[1]: session-2.scope: Deactivated successfully. Oct 8 19:31:36.070443 systemd[1]: sshd@1-172.31.19.2:22-139.178.68.195:47368.service: Deactivated successfully. Oct 8 19:31:36.078252 systemd-logind[1996]: Session 2 logged out. Waiting for processes to exit. Oct 8 19:31:36.104995 systemd[1]: Started sshd@2-172.31.19.2:22-139.178.68.195:47370.service - OpenSSH per-connection server daemon (139.178.68.195:47370). Oct 8 19:31:36.111056 systemd-logind[1996]: Removed session 2. Oct 8 19:31:36.302687 sshd[2275]: Accepted publickey for core from 139.178.68.195 port 47370 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:36.304200 sshd[2275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:36.316946 systemd-logind[1996]: New session 3 of user core. Oct 8 19:31:36.324037 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 8 19:31:36.453084 sshd[2275]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:36.466817 systemd[1]: sshd@2-172.31.19.2:22-139.178.68.195:47370.service: Deactivated successfully. Oct 8 19:31:36.475338 systemd[1]: session-3.scope: Deactivated successfully. Oct 8 19:31:36.480184 systemd-logind[1996]: Session 3 logged out. Waiting for processes to exit. Oct 8 19:31:36.505519 systemd[1]: Started sshd@3-172.31.19.2:22-139.178.68.195:47374.service - OpenSSH per-connection server daemon (139.178.68.195:47374). Oct 8 19:31:36.511459 systemd-logind[1996]: Removed session 3. 
Oct 8 19:31:36.648665 kubelet[2251]: E1008 19:31:36.648601 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:31:36.654389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:31:36.655199 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:31:36.657673 systemd[1]: kubelet.service: Consumed 1.439s CPU time. Oct 8 19:31:36.695157 sshd[2283]: Accepted publickey for core from 139.178.68.195 port 47374 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:36.698607 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:36.710261 systemd-logind[1996]: New session 4 of user core. Oct 8 19:31:36.717005 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 8 19:31:36.855299 sshd[2283]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:36.863251 systemd-logind[1996]: Session 4 logged out. Waiting for processes to exit. Oct 8 19:31:36.867317 systemd[1]: sshd@3-172.31.19.2:22-139.178.68.195:47374.service: Deactivated successfully. Oct 8 19:31:36.872912 systemd[1]: session-4.scope: Deactivated successfully. Oct 8 19:31:36.875612 systemd-logind[1996]: Removed session 4. Oct 8 19:31:36.902246 systemd[1]: Started sshd@4-172.31.19.2:22-139.178.68.195:47378.service - OpenSSH per-connection server daemon (139.178.68.195:47378). Oct 8 19:31:37.095874 sshd[2292]: Accepted publickey for core from 139.178.68.195 port 47378 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:37.099816 sshd[2292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:37.111087 systemd-logind[1996]: New session 5 of user core. 
Oct 8 19:31:37.122924 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 8 19:31:37.251162 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 8 19:31:37.251779 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:31:37.274821 sudo[2295]: pam_unix(sudo:session): session closed for user root Oct 8 19:31:37.300685 sshd[2292]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:37.309067 systemd[1]: sshd@4-172.31.19.2:22-139.178.68.195:47378.service: Deactivated successfully. Oct 8 19:31:37.313222 systemd[1]: session-5.scope: Deactivated successfully. Oct 8 19:31:37.315412 systemd-logind[1996]: Session 5 logged out. Waiting for processes to exit. Oct 8 19:31:37.319744 systemd-logind[1996]: Removed session 5. Oct 8 19:31:37.346087 systemd[1]: Started sshd@5-172.31.19.2:22-139.178.68.195:47390.service - OpenSSH per-connection server daemon (139.178.68.195:47390). Oct 8 19:31:37.531785 sshd[2300]: Accepted publickey for core from 139.178.68.195 port 47390 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:37.535806 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:37.546763 systemd-logind[1996]: New session 6 of user core. Oct 8 19:31:37.553915 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 8 19:31:37.666801 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 8 19:31:37.667568 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:31:37.675020 sudo[2304]: pam_unix(sudo:session): session closed for user root Oct 8 19:31:37.685651 sudo[2303]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 8 19:31:37.686218 sudo[2303]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:31:37.712290 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 8 19:31:37.722015 auditctl[2307]: No rules Oct 8 19:31:37.725159 systemd[1]: audit-rules.service: Deactivated successfully. Oct 8 19:31:37.725807 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 8 19:31:37.737301 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 8 19:31:37.813692 augenrules[2325]: No rules Oct 8 19:31:37.815372 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 8 19:31:37.819101 sudo[2303]: pam_unix(sudo:session): session closed for user root Oct 8 19:31:37.842252 sshd[2300]: pam_unix(sshd:session): session closed for user core Oct 8 19:31:37.848445 systemd[1]: sshd@5-172.31.19.2:22-139.178.68.195:47390.service: Deactivated successfully. Oct 8 19:31:37.853651 systemd[1]: session-6.scope: Deactivated successfully. Oct 8 19:31:37.855170 systemd-logind[1996]: Session 6 logged out. Waiting for processes to exit. Oct 8 19:31:37.857536 systemd-logind[1996]: Removed session 6. Oct 8 19:31:37.877897 systemd[1]: Started sshd@6-172.31.19.2:22-139.178.68.195:47404.service - OpenSSH per-connection server daemon (139.178.68.195:47404). 
Oct 8 19:31:38.067827 sshd[2333]: Accepted publickey for core from 139.178.68.195 port 47404 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:31:38.070795 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:31:38.080624 systemd-logind[1996]: New session 7 of user core. Oct 8 19:31:38.092016 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 8 19:31:38.203439 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 8 19:31:38.204156 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 8 19:31:38.443446 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 8 19:31:38.455367 (dockerd)[2346]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 8 19:31:38.870666 dockerd[2346]: time="2024-10-08T19:31:38.870173276Z" level=info msg="Starting up" Oct 8 19:31:38.594763 systemd-resolved[1809]: Clock change detected. Flushing caches. Oct 8 19:31:38.650912 systemd-journald[1575]: Time jumped backwards, rotating. Oct 8 19:31:38.664097 systemd[1]: var-lib-docker-metacopy\x2dcheck787118541-merged.mount: Deactivated successfully. Oct 8 19:31:38.687269 dockerd[2346]: time="2024-10-08T19:31:38.686251740Z" level=info msg="Loading containers: start." Oct 8 19:31:38.941045 kernel: Initializing XFRM netlink socket Oct 8 19:31:38.991888 (udev-worker)[2361]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:31:39.105133 systemd-networkd[1854]: docker0: Link UP Oct 8 19:31:39.131676 dockerd[2346]: time="2024-10-08T19:31:39.131539402Z" level=info msg="Loading containers: done." 
Oct 8 19:31:39.228040 dockerd[2346]: time="2024-10-08T19:31:39.227822399Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 8 19:31:39.228539 dockerd[2346]: time="2024-10-08T19:31:39.228462443Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Oct 8 19:31:39.228777 dockerd[2346]: time="2024-10-08T19:31:39.228736223Z" level=info msg="Daemon has completed initialization" Oct 8 19:31:39.291455 dockerd[2346]: time="2024-10-08T19:31:39.291020327Z" level=info msg="API listen on /run/docker.sock" Oct 8 19:31:39.291961 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 8 19:31:40.412235 containerd[2024]: time="2024-10-08T19:31:40.411604861Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\"" Oct 8 19:31:41.156715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884022525.mount: Deactivated successfully. 
Oct 8 19:31:42.702262 containerd[2024]: time="2024-10-08T19:31:42.702194656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:42.703304 containerd[2024]: time="2024-10-08T19:31:42.703221088Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.0: active requests=0, bytes read=25691521" Oct 8 19:31:42.705416 containerd[2024]: time="2024-10-08T19:31:42.705335476Z" level=info msg="ImageCreate event name:\"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:42.711396 containerd[2024]: time="2024-10-08T19:31:42.711306508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:42.713943 containerd[2024]: time="2024-10-08T19:31:42.713877592Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.0\" with image id \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.0\", repo digest \"registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf\", size \"25688321\" in 2.302192427s" Oct 8 19:31:42.714085 containerd[2024]: time="2024-10-08T19:31:42.713942308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.0\" returns image reference \"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388\"" Oct 8 19:31:42.715265 containerd[2024]: time="2024-10-08T19:31:42.715190704Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\"" Oct 8 19:31:44.590114 containerd[2024]: time="2024-10-08T19:31:44.589187585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:44.592296 containerd[2024]: time="2024-10-08T19:31:44.592213133Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.0: active requests=0, bytes read=22460086" Oct 8 19:31:44.593720 containerd[2024]: time="2024-10-08T19:31:44.593521193Z" level=info msg="ImageCreate event name:\"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:44.604037 containerd[2024]: time="2024-10-08T19:31:44.602204969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:44.611372 containerd[2024]: time="2024-10-08T19:31:44.611256941Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.0\" with image id \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.0\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d\", size \"23947353\" in 1.895982933s" Oct 8 19:31:44.611372 containerd[2024]: time="2024-10-08T19:31:44.611373929Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.0\" returns image reference \"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd\"" Oct 8 19:31:44.614296 containerd[2024]: time="2024-10-08T19:31:44.614184317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\"" Oct 8 19:31:46.374562 containerd[2024]: time="2024-10-08T19:31:46.374476914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:46.377158 containerd[2024]: time="2024-10-08T19:31:46.377053242Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.0: active requests=0, bytes read=17018558" Oct 8 19:31:46.378433 containerd[2024]: time="2024-10-08T19:31:46.378353058Z" level=info msg="ImageCreate event name:\"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:46.384424 containerd[2024]: time="2024-10-08T19:31:46.384333078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:46.387416 containerd[2024]: time="2024-10-08T19:31:46.386876082Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.0\" with image id \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.0\", repo digest \"registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808\", size \"18505843\" in 1.772591697s" Oct 8 19:31:46.387416 containerd[2024]: time="2024-10-08T19:31:46.386942286Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.0\" returns image reference \"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb\"" Oct 8 19:31:46.387721 containerd[2024]: time="2024-10-08T19:31:46.387662310Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\"" Oct 8 19:31:46.480495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 8 19:31:46.494685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:31:46.925472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:31:46.929214 (kubelet)[2545]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:31:47.073029 kubelet[2545]: E1008 19:31:47.071099 2545 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:31:47.080335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:31:47.080665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:31:47.902836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4284469646.mount: Deactivated successfully. Oct 8 19:31:48.564869 containerd[2024]: time="2024-10-08T19:31:48.564396609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:48.566380 containerd[2024]: time="2024-10-08T19:31:48.566116533Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.0: active requests=0, bytes read=26753315" Oct 8 19:31:48.567856 containerd[2024]: time="2024-10-08T19:31:48.567758145Z" level=info msg="ImageCreate event name:\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:48.573662 containerd[2024]: time="2024-10-08T19:31:48.573556473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:48.575957 containerd[2024]: time="2024-10-08T19:31:48.575574861Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.0\" with image id 
\"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.0\", repo digest \"registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe\", size \"26752334\" in 2.187842963s" Oct 8 19:31:48.575957 containerd[2024]: time="2024-10-08T19:31:48.575659593Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.0\" returns image reference \"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89\"" Oct 8 19:31:48.577160 containerd[2024]: time="2024-10-08T19:31:48.576592005Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Oct 8 19:31:49.153908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389235431.mount: Deactivated successfully. Oct 8 19:31:50.548494 containerd[2024]: time="2024-10-08T19:31:50.548299907Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:50.552369 containerd[2024]: time="2024-10-08T19:31:50.552192203Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Oct 8 19:31:50.553715 containerd[2024]: time="2024-10-08T19:31:50.552898307Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:50.561879 containerd[2024]: time="2024-10-08T19:31:50.561727259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:50.565548 containerd[2024]: time="2024-10-08T19:31:50.565043711Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag 
\"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.988291638s" Oct 8 19:31:50.565548 containerd[2024]: time="2024-10-08T19:31:50.565176527Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Oct 8 19:31:50.566177 containerd[2024]: time="2024-10-08T19:31:50.566132039Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 8 19:31:51.216748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290675059.mount: Deactivated successfully. Oct 8 19:31:51.226389 containerd[2024]: time="2024-10-08T19:31:51.226277134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:51.228916 containerd[2024]: time="2024-10-08T19:31:51.228422134Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Oct 8 19:31:51.230511 containerd[2024]: time="2024-10-08T19:31:51.230385046Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:51.238672 containerd[2024]: time="2024-10-08T19:31:51.238527874Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:51.241168 containerd[2024]: time="2024-10-08T19:31:51.240809266Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 673.635435ms" Oct 8 19:31:51.241168 containerd[2024]: time="2024-10-08T19:31:51.240878734Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 8 19:31:51.242579 containerd[2024]: time="2024-10-08T19:31:51.241790542Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Oct 8 19:31:51.769646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount690040365.mount: Deactivated successfully. Oct 8 19:31:54.659909 containerd[2024]: time="2024-10-08T19:31:54.659822835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:54.662064 containerd[2024]: time="2024-10-08T19:31:54.661913955Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=65868192" Oct 8 19:31:54.665020 containerd[2024]: time="2024-10-08T19:31:54.664414743Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:54.670604 containerd[2024]: time="2024-10-08T19:31:54.670535511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:31:54.673672 containerd[2024]: time="2024-10-08T19:31:54.673602327Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 3.431677853s" Oct 8 19:31:54.673672 
containerd[2024]: time="2024-10-08T19:31:54.673665759Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Oct 8 19:31:57.230691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 8 19:31:57.241735 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:31:57.647576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:31:57.660903 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 8 19:31:57.763309 kubelet[2689]: E1008 19:31:57.763181 2689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 8 19:31:57.767358 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 8 19:31:57.767931 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 8 19:32:03.213609 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 8 19:32:04.085716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:32:04.103360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:32:04.168947 systemd[1]: Reloading requested from client PID 2706 ('systemctl') (unit session-7.scope)... Oct 8 19:32:04.169256 systemd[1]: Reloading... Oct 8 19:32:04.455107 zram_generator::config[2747]: No configuration found. 
Oct 8 19:32:04.718841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:32:04.917724 systemd[1]: Reloading finished in 747 ms. Oct 8 19:32:05.027202 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 8 19:32:05.027418 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 8 19:32:05.029197 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:32:05.039850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:32:05.483423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:32:05.494903 (kubelet)[2806]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:32:05.584345 kubelet[2806]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:32:05.586150 kubelet[2806]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:32:05.586150 kubelet[2806]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 8 19:32:05.586150 kubelet[2806]: I1008 19:32:05.585434 2806 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:32:07.598051 kubelet[2806]: I1008 19:32:07.596875 2806 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:32:07.598051 kubelet[2806]: I1008 19:32:07.596944 2806 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:32:07.598051 kubelet[2806]: I1008 19:32:07.597608 2806 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:32:07.658115 kubelet[2806]: E1008 19:32:07.658047 2806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:07.661065 kubelet[2806]: I1008 19:32:07.660955 2806 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:32:07.676431 kubelet[2806]: E1008 19:32:07.676345 2806 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:32:07.676431 kubelet[2806]: I1008 19:32:07.676413 2806 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:32:07.685088 kubelet[2806]: I1008 19:32:07.684938 2806 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 8 19:32:07.685511 kubelet[2806]: I1008 19:32:07.685451 2806 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:32:07.686130 kubelet[2806]: I1008 19:32:07.685919 2806 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:32:07.686645 kubelet[2806]: I1008 19:32:07.686101 2806 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-19-2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 8 19:32:07.687133 kubelet[2806]: I1008 19:32:07.686778 2806 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:32:07.687133 kubelet[2806]: I1008 19:32:07.686841 2806 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:32:07.687322 kubelet[2806]: I1008 19:32:07.687291 2806 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:32:07.693330 kubelet[2806]: I1008 19:32:07.693153 2806 kubelet.go:408] "Attempting to sync node with API server" Oct 8 19:32:07.693330 kubelet[2806]: I1008 19:32:07.693343 2806 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:32:07.693706 kubelet[2806]: I1008 19:32:07.693443 2806 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:32:07.693706 kubelet[2806]: I1008 19:32:07.693482 2806 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:32:07.699080 kubelet[2806]: W1008 19:32:07.697848 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-2&limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:07.699080 kubelet[2806]: E1008 19:32:07.698125 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-2&limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:07.700263 kubelet[2806]: W1008 19:32:07.700083 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:07.701137
kubelet[2806]: E1008 19:32:07.700577 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:07.701137 kubelet[2806]: I1008 19:32:07.700787 2806 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:32:07.705168 kubelet[2806]: I1008 19:32:07.705106 2806 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:32:07.706934 kubelet[2806]: W1008 19:32:07.706856 2806 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 8 19:32:07.713111 kubelet[2806]: I1008 19:32:07.712556 2806 server.go:1269] "Started kubelet" Oct 8 19:32:07.713111 kubelet[2806]: I1008 19:32:07.712809 2806 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:32:07.715701 kubelet[2806]: I1008 19:32:07.715570 2806 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:32:07.729126 kubelet[2806]: I1008 19:32:07.728808 2806 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:32:07.735019 kubelet[2806]: I1008 19:32:07.733819 2806 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:32:07.736617 kubelet[2806]: I1008 19:32:07.736542 2806 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:32:07.739741 kubelet[2806]: I1008 19:32:07.739681 2806 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:32:07.750547 kubelet[2806]: I1008 
19:32:07.742228 2806 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:32:07.752624 kubelet[2806]: E1008 19:32:07.749819 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.2:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-2.17fc9120f36e03c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-2,UID:ip-172-31-19-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-2,},FirstTimestamp:2024-10-08 19:32:07.712482244 +0000 UTC m=+2.209466160,LastTimestamp:2024-10-08 19:32:07.712482244 +0000 UTC m=+2.209466160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-2,}" Oct 8 19:32:07.753148 kubelet[2806]: E1008 19:32:07.742387 2806 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-2\" not found" Oct 8 19:32:07.755256 kubelet[2806]: W1008 19:32:07.749671 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:07.755256 kubelet[2806]: I1008 19:32:07.754862 2806 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:32:07.755256 kubelet[2806]: E1008 19:32:07.754879 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 
19:32:07.755256 kubelet[2806]: I1008 19:32:07.742275 2806 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:32:07.755256 kubelet[2806]: I1008 19:32:07.755119 2806 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:32:07.756325 kubelet[2806]: I1008 19:32:07.752957 2806 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:32:07.759913 kubelet[2806]: E1008 19:32:07.759793 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-2?timeout=10s\": dial tcp 172.31.19.2:6443: connect: connection refused" interval="200ms" Oct 8 19:32:07.761033 kubelet[2806]: I1008 19:32:07.760305 2806 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:32:07.762175 kubelet[2806]: E1008 19:32:07.762130 2806 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:32:07.805293 kubelet[2806]: I1008 19:32:07.805257 2806 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:32:07.806439 kubelet[2806]: I1008 19:32:07.805969 2806 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:32:07.807055 kubelet[2806]: I1008 19:32:07.806401 2806 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:32:07.807199 kubelet[2806]: I1008 19:32:07.806206 2806 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:32:07.811275 kubelet[2806]: I1008 19:32:07.811188 2806 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Oct 8 19:32:07.813387 kubelet[2806]: I1008 19:32:07.811452 2806 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:32:07.813387 kubelet[2806]: I1008 19:32:07.811505 2806 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:32:07.813387 kubelet[2806]: E1008 19:32:07.811620 2806 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:32:07.815287 kubelet[2806]: W1008 19:32:07.815203 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.19.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:07.815556 kubelet[2806]: E1008 19:32:07.815337 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:07.816062 kubelet[2806]: I1008 19:32:07.815819 2806 policy_none.go:49] "None policy: Start" Oct 8 19:32:07.818626 kubelet[2806]: I1008 19:32:07.818564 2806 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:32:07.820064 kubelet[2806]: I1008 19:32:07.819303 2806 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:32:07.841103 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 8 19:32:07.855500 kubelet[2806]: E1008 19:32:07.855233 2806 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-2\" not found" Oct 8 19:32:07.864629 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 8 19:32:07.881069 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 8 19:32:07.886898 kubelet[2806]: I1008 19:32:07.885139 2806 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:32:07.886898 kubelet[2806]: I1008 19:32:07.885490 2806 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:32:07.886898 kubelet[2806]: I1008 19:32:07.885515 2806 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:32:07.886898 kubelet[2806]: I1008 19:32:07.886603 2806 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:32:07.891965 kubelet[2806]: E1008 19:32:07.891909 2806 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-19-2\" not found" Oct 8 19:32:07.942425 systemd[1]: Created slice kubepods-burstable-pod696ed05ada0db362b4b0f977ee4153f9.slice - libcontainer container kubepods-burstable-pod696ed05ada0db362b4b0f977ee4153f9.slice. 
Oct 8 19:32:07.956581 kubelet[2806]: I1008 19:32:07.956500 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:07.956581 kubelet[2806]: I1008 19:32:07.956580 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:07.956847 kubelet[2806]: I1008 19:32:07.956637 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24fc409ba75379103ed019c37bf47ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-2\" (UID: \"c24fc409ba75379103ed019c37bf47ff\") " pod="kube-system/kube-scheduler-ip-172-31-19-2" Oct 8 19:32:07.956847 kubelet[2806]: I1008 19:32:07.956677 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-ca-certs\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: \"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:07.956847 kubelet[2806]: I1008 19:32:07.956716 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: 
\"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:07.956847 kubelet[2806]: I1008 19:32:07.956751 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:07.956847 kubelet[2806]: I1008 19:32:07.956787 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:07.957226 kubelet[2806]: I1008 19:32:07.956822 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:07.957226 kubelet[2806]: I1008 19:32:07.956870 2806 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: \"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:07.960617 kubelet[2806]: E1008 19:32:07.960501 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-2?timeout=10s\": dial tcp 172.31.19.2:6443: 
connect: connection refused" interval="400ms" Oct 8 19:32:07.966469 systemd[1]: Created slice kubepods-burstable-podd56bbf1e6a46e298c39d93ccf65c362b.slice - libcontainer container kubepods-burstable-podd56bbf1e6a46e298c39d93ccf65c362b.slice. Oct 8 19:32:07.986588 systemd[1]: Created slice kubepods-burstable-podc24fc409ba75379103ed019c37bf47ff.slice - libcontainer container kubepods-burstable-podc24fc409ba75379103ed019c37bf47ff.slice. Oct 8 19:32:07.994394 kubelet[2806]: I1008 19:32:07.994328 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:07.995061 kubelet[2806]: E1008 19:32:07.995016 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.2:6443/api/v1/nodes\": dial tcp 172.31.19.2:6443: connect: connection refused" node="ip-172-31-19-2" Oct 8 19:32:08.200225 kubelet[2806]: I1008 19:32:08.199978 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:08.201316 kubelet[2806]: E1008 19:32:08.201184 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.2:6443/api/v1/nodes\": dial tcp 172.31.19.2:6443: connect: connection refused" node="ip-172-31-19-2" Oct 8 19:32:08.257447 containerd[2024]: time="2024-10-08T19:32:08.257370063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-2,Uid:696ed05ada0db362b4b0f977ee4153f9,Namespace:kube-system,Attempt:0,}" Oct 8 19:32:08.282961 containerd[2024]: time="2024-10-08T19:32:08.282894795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-2,Uid:d56bbf1e6a46e298c39d93ccf65c362b,Namespace:kube-system,Attempt:0,}" Oct 8 19:32:08.295101 containerd[2024]: time="2024-10-08T19:32:08.294824079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-2,Uid:c24fc409ba75379103ed019c37bf47ff,Namespace:kube-system,Attempt:0,}" Oct 8 19:32:08.362941 
kubelet[2806]: E1008 19:32:08.362567 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-2?timeout=10s\": dial tcp 172.31.19.2:6443: connect: connection refused" interval="800ms" Oct 8 19:32:08.604023 kubelet[2806]: I1008 19:32:08.603931 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:08.605361 kubelet[2806]: E1008 19:32:08.605242 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.2:6443/api/v1/nodes\": dial tcp 172.31.19.2:6443: connect: connection refused" node="ip-172-31-19-2" Oct 8 19:32:08.778380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533756683.mount: Deactivated successfully. Oct 8 19:32:08.786682 containerd[2024]: time="2024-10-08T19:32:08.786580469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:32:08.792311 containerd[2024]: time="2024-10-08T19:32:08.792243473Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Oct 8 19:32:08.794089 containerd[2024]: time="2024-10-08T19:32:08.793332977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:32:08.795541 containerd[2024]: time="2024-10-08T19:32:08.795371790Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:32:08.798483 containerd[2024]: time="2024-10-08T19:32:08.798402978Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: 
active requests=0, bytes read=0" Oct 8 19:32:08.799040 containerd[2024]: time="2024-10-08T19:32:08.798899766Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:32:08.799746 containerd[2024]: time="2024-10-08T19:32:08.799649562Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 8 19:32:08.806557 containerd[2024]: time="2024-10-08T19:32:08.806481714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 8 19:32:08.813072 containerd[2024]: time="2024-10-08T19:32:08.811552506Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 528.484947ms" Oct 8 19:32:08.815320 containerd[2024]: time="2024-10-08T19:32:08.815195166Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 557.659239ms" Oct 8 19:32:08.816777 containerd[2024]: time="2024-10-08T19:32:08.816697626Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.732979ms" Oct 8 19:32:08.837278 kubelet[2806]: W1008 19:32:08.837143 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.19.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-2&limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:08.837821 kubelet[2806]: E1008 19:32:08.837782 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.19.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-19-2&limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:08.877244 kubelet[2806]: E1008 19:32:08.876897 2806 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.19.2:6443/api/v1/namespaces/default/events\": dial tcp 172.31.19.2:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-19-2.17fc9120f36e03c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-19-2,UID:ip-172-31-19-2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-19-2,},FirstTimestamp:2024-10-08 19:32:07.712482244 +0000 UTC m=+2.209466160,LastTimestamp:2024-10-08 19:32:07.712482244 +0000 UTC m=+2.209466160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-19-2,}" Oct 8 19:32:08.895267 kubelet[2806]: W1008 19:32:08.895177 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://172.31.19.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:08.895710 kubelet[2806]: E1008 19:32:08.895620 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.19.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:09.061723 kubelet[2806]: W1008 19:32:09.061588 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.19.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:09.062751 kubelet[2806]: E1008 19:32:09.062609 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.19.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:09.098613 containerd[2024]: time="2024-10-08T19:32:09.097548135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:09.098613 containerd[2024]: time="2024-10-08T19:32:09.098478615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.098613 containerd[2024]: time="2024-10-08T19:32:09.098556567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:09.099273 containerd[2024]: time="2024-10-08T19:32:09.099121131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.101204 containerd[2024]: time="2024-10-08T19:32:09.100621383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:09.101204 containerd[2024]: time="2024-10-08T19:32:09.100761639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.101204 containerd[2024]: time="2024-10-08T19:32:09.100801263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:09.101204 containerd[2024]: time="2024-10-08T19:32:09.100828455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.113024 containerd[2024]: time="2024-10-08T19:32:09.112682595Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:09.113024 containerd[2024]: time="2024-10-08T19:32:09.112863795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.115799 containerd[2024]: time="2024-10-08T19:32:09.112977315Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:09.115799 containerd[2024]: time="2024-10-08T19:32:09.115183047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:09.133545 kubelet[2806]: W1008 19:32:09.133223 2806 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.19.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.19.2:6443: connect: connection refused Oct 8 19:32:09.133545 kubelet[2806]: E1008 19:32:09.133339 2806 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.19.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:09.164849 kubelet[2806]: E1008 19:32:09.163869 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.19.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-2?timeout=10s\": dial tcp 172.31.19.2:6443: connect: connection refused" interval="1.6s" Oct 8 19:32:09.185200 systemd[1]: Started cri-containerd-a00708579ce53c9441b6e7bab68f0811f8f762913406216bceef926fc547954d.scope - libcontainer container a00708579ce53c9441b6e7bab68f0811f8f762913406216bceef926fc547954d. Oct 8 19:32:09.198047 systemd[1]: Started cri-containerd-b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa.scope - libcontainer container b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa. Oct 8 19:32:09.225290 systemd[1]: Started cri-containerd-6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783.scope - libcontainer container 6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783. 
Oct 8 19:32:09.403568 containerd[2024]: time="2024-10-08T19:32:09.401710349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-19-2,Uid:696ed05ada0db362b4b0f977ee4153f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"a00708579ce53c9441b6e7bab68f0811f8f762913406216bceef926fc547954d\"" Oct 8 19:32:09.417047 kubelet[2806]: I1008 19:32:09.416069 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:09.419586 kubelet[2806]: E1008 19:32:09.419469 2806 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.19.2:6443/api/v1/nodes\": dial tcp 172.31.19.2:6443: connect: connection refused" node="ip-172-31-19-2" Oct 8 19:32:09.421109 containerd[2024]: time="2024-10-08T19:32:09.420884285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-19-2,Uid:c24fc409ba75379103ed019c37bf47ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa\"" Oct 8 19:32:09.427474 containerd[2024]: time="2024-10-08T19:32:09.427369205Z" level=info msg="CreateContainer within sandbox \"a00708579ce53c9441b6e7bab68f0811f8f762913406216bceef926fc547954d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 8 19:32:09.431126 containerd[2024]: time="2024-10-08T19:32:09.431035337Z" level=info msg="CreateContainer within sandbox \"b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 8 19:32:09.452965 containerd[2024]: time="2024-10-08T19:32:09.452508929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-19-2,Uid:d56bbf1e6a46e298c39d93ccf65c362b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783\"" Oct 8 19:32:09.462392 containerd[2024]: time="2024-10-08T19:32:09.462334481Z" 
level=info msg="CreateContainer within sandbox \"6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 8 19:32:09.466658 containerd[2024]: time="2024-10-08T19:32:09.466553357Z" level=info msg="CreateContainer within sandbox \"a00708579ce53c9441b6e7bab68f0811f8f762913406216bceef926fc547954d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"569776393ef080f1d19d4ba99786dea2143492f2d7eddf1e1e83aca3a74d8abf\"" Oct 8 19:32:09.469738 containerd[2024]: time="2024-10-08T19:32:09.469607381Z" level=info msg="StartContainer for \"569776393ef080f1d19d4ba99786dea2143492f2d7eddf1e1e83aca3a74d8abf\"" Oct 8 19:32:09.475723 containerd[2024]: time="2024-10-08T19:32:09.475643105Z" level=info msg="CreateContainer within sandbox \"b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae\"" Oct 8 19:32:09.478109 containerd[2024]: time="2024-10-08T19:32:09.477831761Z" level=info msg="StartContainer for \"37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae\"" Oct 8 19:32:09.512034 containerd[2024]: time="2024-10-08T19:32:09.511250309Z" level=info msg="CreateContainer within sandbox \"6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a\"" Oct 8 19:32:09.512743 containerd[2024]: time="2024-10-08T19:32:09.512332973Z" level=info msg="StartContainer for \"a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a\"" Oct 8 19:32:09.569020 systemd[1]: Started cri-containerd-569776393ef080f1d19d4ba99786dea2143492f2d7eddf1e1e83aca3a74d8abf.scope - libcontainer container 569776393ef080f1d19d4ba99786dea2143492f2d7eddf1e1e83aca3a74d8abf. 
Oct 8 19:32:09.585385 systemd[1]: Started cri-containerd-37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae.scope - libcontainer container 37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae. Oct 8 19:32:09.615365 systemd[1]: Started cri-containerd-a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a.scope - libcontainer container a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a. Oct 8 19:32:09.699335 containerd[2024]: time="2024-10-08T19:32:09.698827242Z" level=info msg="StartContainer for \"569776393ef080f1d19d4ba99786dea2143492f2d7eddf1e1e83aca3a74d8abf\" returns successfully" Oct 8 19:32:09.760251 containerd[2024]: time="2024-10-08T19:32:09.760177650Z" level=info msg="StartContainer for \"37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae\" returns successfully" Oct 8 19:32:09.806114 kubelet[2806]: E1008 19:32:09.804245 2806 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.19.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.19.2:6443: connect: connection refused" logger="UnhandledError" Oct 8 19:32:09.813424 containerd[2024]: time="2024-10-08T19:32:09.813334915Z" level=info msg="StartContainer for \"a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a\" returns successfully" Oct 8 19:32:11.023062 kubelet[2806]: I1008 19:32:11.022805 2806 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:14.038806 kubelet[2806]: I1008 19:32:14.038723 2806 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-2" Oct 8 19:32:14.038806 kubelet[2806]: E1008 19:32:14.038792 2806 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ip-172-31-19-2\": node \"ip-172-31-19-2\" not found" Oct 8 
19:32:14.168687 kubelet[2806]: E1008 19:32:14.168559 2806 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Oct 8 19:32:14.706948 kubelet[2806]: I1008 19:32:14.706876 2806 apiserver.go:52] "Watching apiserver" Oct 8 19:32:14.757380 kubelet[2806]: I1008 19:32:14.757273 2806 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:32:16.845511 systemd[1]: Reloading requested from client PID 3078 ('systemctl') (unit session-7.scope)... Oct 8 19:32:16.845557 systemd[1]: Reloading... Oct 8 19:32:17.077844 update_engine[1997]: I1008 19:32:17.074158 1997 update_attempter.cc:509] Updating boot flags... Oct 8 19:32:17.108092 zram_generator::config[3116]: No configuration found. Oct 8 19:32:17.219187 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3155) Oct 8 19:32:17.533488 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 8 19:32:17.612262 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3151) Oct 8 19:32:17.829346 systemd[1]: Reloading finished in 982 ms. Oct 8 19:32:18.126372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:32:18.173792 systemd[1]: kubelet.service: Deactivated successfully. Oct 8 19:32:18.176263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 8 19:32:18.176426 systemd[1]: kubelet.service: Consumed 3.175s CPU time, 119.3M memory peak, 0B memory swap peak. Oct 8 19:32:18.205766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 8 19:32:18.689403 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 8 19:32:18.707372 (kubelet)[3361]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 8 19:32:18.818435 kubelet[3361]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:32:18.818435 kubelet[3361]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 8 19:32:18.818435 kubelet[3361]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 8 19:32:18.819760 kubelet[3361]: I1008 19:32:18.818718 3361 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 8 19:32:18.859034 kubelet[3361]: I1008 19:32:18.858604 3361 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Oct 8 19:32:18.859034 kubelet[3361]: I1008 19:32:18.858701 3361 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 8 19:32:18.859848 kubelet[3361]: I1008 19:32:18.859308 3361 server.go:929] "Client rotation is on, will bootstrap in background" Oct 8 19:32:18.864020 kubelet[3361]: I1008 19:32:18.863930 3361 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 8 19:32:18.873336 kubelet[3361]: I1008 19:32:18.872893 3361 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 8 19:32:18.885926 kubelet[3361]: E1008 19:32:18.884551 3361 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 8 19:32:18.886525 kubelet[3361]: I1008 19:32:18.886320 3361 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 8 19:32:18.895241 kubelet[3361]: I1008 19:32:18.895181 3361 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 8 19:32:18.896214 kubelet[3361]: I1008 19:32:18.895827 3361 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Oct 8 19:32:18.896361 kubelet[3361]: I1008 19:32:18.896230 3361 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 8 19:32:18.896761 kubelet[3361]: I1008 19:32:18.896306 3361 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"ip-172-31-19-2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 8 19:32:18.897154 kubelet[3361]: I1008 19:32:18.896796 3361 topology_manager.go:138] "Creating topology manager with none policy" Oct 8 19:32:18.897154 kubelet[3361]: I1008 19:32:18.896827 3361 container_manager_linux.go:300] "Creating device plugin manager" Oct 8 19:32:18.897154 kubelet[3361]: I1008 19:32:18.896920 3361 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:32:18.897686 kubelet[3361]: I1008 19:32:18.897261 3361 kubelet.go:408] "Attempting 
to sync node with API server" Oct 8 19:32:18.901662 kubelet[3361]: I1008 19:32:18.899071 3361 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 8 19:32:18.901662 kubelet[3361]: I1008 19:32:18.899692 3361 kubelet.go:314] "Adding apiserver pod source" Oct 8 19:32:18.901662 kubelet[3361]: I1008 19:32:18.899921 3361 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 8 19:32:18.913801 kubelet[3361]: I1008 19:32:18.913707 3361 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Oct 8 19:32:18.917246 kubelet[3361]: I1008 19:32:18.917138 3361 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 8 19:32:18.923899 kubelet[3361]: I1008 19:32:18.923588 3361 server.go:1269] "Started kubelet" Oct 8 19:32:18.943624 kubelet[3361]: I1008 19:32:18.941719 3361 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 8 19:32:18.951238 kubelet[3361]: I1008 19:32:18.950912 3361 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 8 19:32:18.963735 kubelet[3361]: I1008 19:32:18.963655 3361 server.go:460] "Adding debug handlers to kubelet server" Oct 8 19:32:18.969112 kubelet[3361]: I1008 19:32:18.968834 3361 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 8 19:32:18.969657 kubelet[3361]: I1008 19:32:18.969570 3361 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 8 19:32:18.971096 kubelet[3361]: I1008 19:32:18.970329 3361 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 8 19:32:18.976862 kubelet[3361]: I1008 19:32:18.976740 3361 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 8 19:32:18.977979 kubelet[3361]: E1008 19:32:18.977514 3361 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-19-2\" not found" Oct 8 19:32:19.028197 kubelet[3361]: I1008 19:32:19.021764 3361 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 8 19:32:19.028197 kubelet[3361]: I1008 19:32:19.022345 3361 reconciler.go:26] "Reconciler: start to sync state" Oct 8 19:32:19.029757 kubelet[3361]: I1008 19:32:19.029077 3361 factory.go:221] Registration of the systemd container factory successfully Oct 8 19:32:19.032338 kubelet[3361]: I1008 19:32:19.032069 3361 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 8 19:32:19.048583 kubelet[3361]: I1008 19:32:19.048468 3361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 8 19:32:19.054937 kubelet[3361]: I1008 19:32:19.052965 3361 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 8 19:32:19.054937 kubelet[3361]: I1008 19:32:19.053039 3361 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 8 19:32:19.054937 kubelet[3361]: I1008 19:32:19.053071 3361 kubelet.go:2321] "Starting kubelet main sync loop" Oct 8 19:32:19.054937 kubelet[3361]: E1008 19:32:19.053162 3361 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 8 19:32:19.067778 kubelet[3361]: I1008 19:32:19.065462 3361 factory.go:221] Registration of the containerd container factory successfully Oct 8 19:32:19.094325 kubelet[3361]: E1008 19:32:19.094229 3361 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 8 19:32:19.154399 kubelet[3361]: E1008 19:32:19.154284 3361 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 8 19:32:19.212824 kubelet[3361]: I1008 19:32:19.212668 3361 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 8 19:32:19.214442 kubelet[3361]: I1008 19:32:19.213963 3361 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 8 19:32:19.214442 kubelet[3361]: I1008 19:32:19.214033 3361 state_mem.go:36] "Initialized new in-memory state store" Oct 8 19:32:19.215212 kubelet[3361]: I1008 19:32:19.215077 3361 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 8 19:32:19.215351 kubelet[3361]: I1008 19:32:19.215151 3361 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 8 19:32:19.215351 kubelet[3361]: I1008 19:32:19.215319 3361 policy_none.go:49] "None policy: Start" Oct 8 19:32:19.217547 kubelet[3361]: I1008 19:32:19.217464 3361 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 8 19:32:19.218500 kubelet[3361]: I1008 19:32:19.217865 3361 state_mem.go:35] "Initializing new in-memory state store" Oct 8 19:32:19.218500 kubelet[3361]: I1008 19:32:19.218273 3361 state_mem.go:75] "Updated machine memory state" Oct 8 19:32:19.231842 kubelet[3361]: I1008 19:32:19.231772 3361 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 8 19:32:19.232913 kubelet[3361]: I1008 19:32:19.232162 3361 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 8 19:32:19.232913 kubelet[3361]: I1008 19:32:19.232200 3361 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 8 19:32:19.232913 kubelet[3361]: I1008 19:32:19.232677 3361 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 8 19:32:19.352637 kubelet[3361]: I1008 
19:32:19.352561 3361 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-19-2" Oct 8 19:32:19.386857 kubelet[3361]: I1008 19:32:19.386743 3361 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-19-2" Oct 8 19:32:19.387197 kubelet[3361]: I1008 19:32:19.386943 3361 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-19-2" Oct 8 19:32:19.387735 kubelet[3361]: E1008 19:32:19.387648 3361 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-19-2\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.423370 kubelet[3361]: I1008 19:32:19.423285 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-ca-certs\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.423635 kubelet[3361]: I1008 19:32:19.423383 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-k8s-certs\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.423635 kubelet[3361]: I1008 19:32:19.423436 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-ca-certs\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: \"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:19.423635 kubelet[3361]: I1008 19:32:19.423476 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: \"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:19.423635 kubelet[3361]: I1008 19:32:19.423532 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.423635 kubelet[3361]: I1008 19:32:19.423577 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-kubeconfig\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.424121 kubelet[3361]: I1008 19:32:19.423621 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d56bbf1e6a46e298c39d93ccf65c362b-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-19-2\" (UID: \"d56bbf1e6a46e298c39d93ccf65c362b\") " pod="kube-system/kube-controller-manager-ip-172-31-19-2" Oct 8 19:32:19.424121 kubelet[3361]: I1008 19:32:19.423670 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c24fc409ba75379103ed019c37bf47ff-kubeconfig\") pod \"kube-scheduler-ip-172-31-19-2\" (UID: \"c24fc409ba75379103ed019c37bf47ff\") " pod="kube-system/kube-scheduler-ip-172-31-19-2" Oct 8 19:32:19.424121 kubelet[3361]: I1008 19:32:19.423741 3361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/696ed05ada0db362b4b0f977ee4153f9-k8s-certs\") pod \"kube-apiserver-ip-172-31-19-2\" (UID: \"696ed05ada0db362b4b0f977ee4153f9\") " pod="kube-system/kube-apiserver-ip-172-31-19-2" Oct 8 19:32:19.921810 kubelet[3361]: I1008 19:32:19.921711 3361 apiserver.go:52] "Watching apiserver" Oct 8 19:32:20.022427 kubelet[3361]: I1008 19:32:20.022317 3361 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 8 19:32:20.053385 kubelet[3361]: I1008 19:32:20.053122 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-19-2" podStartSLOduration=1.053056705 podStartE2EDuration="1.053056705s" podCreationTimestamp="2024-10-08 19:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:32:20.019408549 +0000 UTC m=+1.304307523" watchObservedRunningTime="2024-10-08 19:32:20.053056705 +0000 UTC m=+1.337955571" Oct 8 19:32:20.134197 kubelet[3361]: I1008 19:32:20.133981 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-19-2" podStartSLOduration=1.133954082 podStartE2EDuration="1.133954082s" podCreationTimestamp="2024-10-08 19:32:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:32:20.057637849 +0000 UTC m=+1.342536727" watchObservedRunningTime="2024-10-08 19:32:20.133954082 +0000 UTC m=+1.418852960" Oct 8 19:32:20.182497 kubelet[3361]: I1008 19:32:20.180920 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-19-2" podStartSLOduration=2.180873278 podStartE2EDuration="2.180873278s" 
podCreationTimestamp="2024-10-08 19:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:32:20.14461439 +0000 UTC m=+1.429513268" watchObservedRunningTime="2024-10-08 19:32:20.180873278 +0000 UTC m=+1.465772132" Oct 8 19:32:23.484692 kubelet[3361]: I1008 19:32:23.484591 3361 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 8 19:32:23.487859 containerd[2024]: time="2024-10-08T19:32:23.487607178Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 8 19:32:23.488806 kubelet[3361]: I1008 19:32:23.488152 3361 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 8 19:32:24.191763 systemd[1]: Created slice kubepods-besteffort-pod5237a270_f632_4c6c_8f15_749976f7b3d2.slice - libcontainer container kubepods-besteffort-pod5237a270_f632_4c6c_8f15_749976f7b3d2.slice. 
Oct 8 19:32:24.362065 kubelet[3361]: I1008 19:32:24.360190 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5237a270-f632-4c6c-8f15-749976f7b3d2-lib-modules\") pod \"kube-proxy-rn8dn\" (UID: \"5237a270-f632-4c6c-8f15-749976f7b3d2\") " pod="kube-system/kube-proxy-rn8dn" Oct 8 19:32:24.362065 kubelet[3361]: I1008 19:32:24.360377 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5237a270-f632-4c6c-8f15-749976f7b3d2-kube-proxy\") pod \"kube-proxy-rn8dn\" (UID: \"5237a270-f632-4c6c-8f15-749976f7b3d2\") " pod="kube-system/kube-proxy-rn8dn" Oct 8 19:32:24.362065 kubelet[3361]: I1008 19:32:24.360547 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5237a270-f632-4c6c-8f15-749976f7b3d2-xtables-lock\") pod \"kube-proxy-rn8dn\" (UID: \"5237a270-f632-4c6c-8f15-749976f7b3d2\") " pod="kube-system/kube-proxy-rn8dn" Oct 8 19:32:24.362065 kubelet[3361]: I1008 19:32:24.360651 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jggb4\" (UniqueName: \"kubernetes.io/projected/5237a270-f632-4c6c-8f15-749976f7b3d2-kube-api-access-jggb4\") pod \"kube-proxy-rn8dn\" (UID: \"5237a270-f632-4c6c-8f15-749976f7b3d2\") " pod="kube-system/kube-proxy-rn8dn" Oct 8 19:32:24.615290 systemd[1]: Created slice kubepods-besteffort-podf3e32cb9_4975_423b_b2d5_3d08e3f6d8ba.slice - libcontainer container kubepods-besteffort-podf3e32cb9_4975_423b_b2d5_3d08e3f6d8ba.slice. 
Oct 8 19:32:24.662769 kubelet[3361]: I1008 19:32:24.662530 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba-var-lib-calico\") pod \"tigera-operator-55748b469f-ndzwt\" (UID: \"f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba\") " pod="tigera-operator/tigera-operator-55748b469f-ndzwt" Oct 8 19:32:24.662769 kubelet[3361]: I1008 19:32:24.662613 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dvwj\" (UniqueName: \"kubernetes.io/projected/f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba-kube-api-access-8dvwj\") pod \"tigera-operator-55748b469f-ndzwt\" (UID: \"f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba\") " pod="tigera-operator/tigera-operator-55748b469f-ndzwt" Oct 8 19:32:24.817260 containerd[2024]: time="2024-10-08T19:32:24.816506385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rn8dn,Uid:5237a270-f632-4c6c-8f15-749976f7b3d2,Namespace:kube-system,Attempt:0,}" Oct 8 19:32:24.929774 containerd[2024]: time="2024-10-08T19:32:24.929140054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-ndzwt,Uid:f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba,Namespace:tigera-operator,Attempt:0,}" Oct 8 19:32:24.943796 containerd[2024]: time="2024-10-08T19:32:24.940523050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:24.943796 containerd[2024]: time="2024-10-08T19:32:24.940630798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:24.943796 containerd[2024]: time="2024-10-08T19:32:24.940694542Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:24.943796 containerd[2024]: time="2024-10-08T19:32:24.940739698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:25.056337 systemd[1]: Started cri-containerd-a0ef9a276d5217e94bb7d82f2740cb1db5ab90c50173764f7a859cc2e115d31d.scope - libcontainer container a0ef9a276d5217e94bb7d82f2740cb1db5ab90c50173764f7a859cc2e115d31d. Oct 8 19:32:25.126561 containerd[2024]: time="2024-10-08T19:32:25.124686319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:25.126561 containerd[2024]: time="2024-10-08T19:32:25.124792507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:25.126561 containerd[2024]: time="2024-10-08T19:32:25.124825339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:25.126561 containerd[2024]: time="2024-10-08T19:32:25.124850647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:25.240278 systemd[1]: Started cri-containerd-85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556.scope - libcontainer container 85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556. 
Oct 8 19:32:25.521068 containerd[2024]: time="2024-10-08T19:32:25.520509501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rn8dn,Uid:5237a270-f632-4c6c-8f15-749976f7b3d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0ef9a276d5217e94bb7d82f2740cb1db5ab90c50173764f7a859cc2e115d31d\"" Oct 8 19:32:25.538194 containerd[2024]: time="2024-10-08T19:32:25.537939561Z" level=info msg="CreateContainer within sandbox \"a0ef9a276d5217e94bb7d82f2740cb1db5ab90c50173764f7a859cc2e115d31d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 8 19:32:25.544846 containerd[2024]: time="2024-10-08T19:32:25.544792725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-55748b469f-ndzwt,Uid:f3e32cb9-4975-423b-b2d5-3d08e3f6d8ba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556\"" Oct 8 19:32:25.549720 containerd[2024]: time="2024-10-08T19:32:25.549267525Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\"" Oct 8 19:32:25.575450 containerd[2024]: time="2024-10-08T19:32:25.572951253Z" level=info msg="CreateContainer within sandbox \"a0ef9a276d5217e94bb7d82f2740cb1db5ab90c50173764f7a859cc2e115d31d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"933d93179145e7f0eb588361174fdf7d3d64bd05b47f18d1aee5cf35ad0f0450\"" Oct 8 19:32:25.576405 containerd[2024]: time="2024-10-08T19:32:25.576327753Z" level=info msg="StartContainer for \"933d93179145e7f0eb588361174fdf7d3d64bd05b47f18d1aee5cf35ad0f0450\"" Oct 8 19:32:25.708765 systemd[1]: Started cri-containerd-933d93179145e7f0eb588361174fdf7d3d64bd05b47f18d1aee5cf35ad0f0450.scope - libcontainer container 933d93179145e7f0eb588361174fdf7d3d64bd05b47f18d1aee5cf35ad0f0450. 
Oct 8 19:32:25.824569 containerd[2024]: time="2024-10-08T19:32:25.822677446Z" level=info msg="StartContainer for \"933d93179145e7f0eb588361174fdf7d3d64bd05b47f18d1aee5cf35ad0f0450\" returns successfully" Oct 8 19:32:26.699493 sudo[2337]: pam_unix(sudo:session): session closed for user root Oct 8 19:32:26.726645 sshd[2333]: pam_unix(sshd:session): session closed for user core Oct 8 19:32:26.743806 systemd[1]: sshd@6-172.31.19.2:22-139.178.68.195:47404.service: Deactivated successfully. Oct 8 19:32:26.757534 systemd[1]: session-7.scope: Deactivated successfully. Oct 8 19:32:26.759114 systemd[1]: session-7.scope: Consumed 13.239s CPU time, 100.1M memory peak, 0B memory swap peak. Oct 8 19:32:26.766277 systemd-logind[1996]: Session 7 logged out. Waiting for processes to exit. Oct 8 19:32:26.772802 systemd-logind[1996]: Removed session 7. Oct 8 19:32:27.073191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount943173602.mount: Deactivated successfully. Oct 8 19:32:28.030108 containerd[2024]: time="2024-10-08T19:32:28.030022137Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:28.031898 containerd[2024]: time="2024-10-08T19:32:28.031793697Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.3: active requests=0, bytes read=19485887" Oct 8 19:32:28.033179 containerd[2024]: time="2024-10-08T19:32:28.033074925Z" level=info msg="ImageCreate event name:\"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:28.038859 containerd[2024]: time="2024-10-08T19:32:28.038747289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:28.041737 containerd[2024]: time="2024-10-08T19:32:28.041172765Z" level=info 
msg="Pulled image \"quay.io/tigera/operator:v1.34.3\" with image id \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\", repo tag \"quay.io/tigera/operator:v1.34.3\", repo digest \"quay.io/tigera/operator@sha256:2cc4de6ad019ccc3abbd2615c159d0dcfb2ecdab90dc5805f08837d7c014d458\", size \"19480102\" in 2.491812924s" Oct 8 19:32:28.041737 containerd[2024]: time="2024-10-08T19:32:28.041244993Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.3\" returns image reference \"sha256:2fd8a2c22d96f6b41bf5709bd6ebbb915093532073f7039d03ab056b4e148f56\"" Oct 8 19:32:28.046944 containerd[2024]: time="2024-10-08T19:32:28.046836993Z" level=info msg="CreateContainer within sandbox \"85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 8 19:32:28.071474 containerd[2024]: time="2024-10-08T19:32:28.071318409Z" level=info msg="CreateContainer within sandbox \"85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e\"" Oct 8 19:32:28.074785 containerd[2024]: time="2024-10-08T19:32:28.074717481Z" level=info msg="StartContainer for \"a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e\"" Oct 8 19:32:28.133312 systemd[1]: Started cri-containerd-a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e.scope - libcontainer container a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e. 
Oct 8 19:32:28.191450 containerd[2024]: time="2024-10-08T19:32:28.191271094Z" level=info msg="StartContainer for \"a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e\" returns successfully" Oct 8 19:32:28.235506 kubelet[3361]: I1008 19:32:28.235406 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rn8dn" podStartSLOduration=5.235380598 podStartE2EDuration="5.235380598s" podCreationTimestamp="2024-10-08 19:32:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:32:26.276580016 +0000 UTC m=+7.561478906" watchObservedRunningTime="2024-10-08 19:32:28.235380598 +0000 UTC m=+9.520279440" Oct 8 19:32:29.080320 kubelet[3361]: I1008 19:32:29.080203 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-55748b469f-ndzwt" podStartSLOduration=2.584649286 podStartE2EDuration="5.08017207s" podCreationTimestamp="2024-10-08 19:32:24 +0000 UTC" firstStartedPulling="2024-10-08 19:32:25.548131233 +0000 UTC m=+6.833030087" lastFinishedPulling="2024-10-08 19:32:28.043654017 +0000 UTC m=+9.328552871" observedRunningTime="2024-10-08 19:32:28.237661906 +0000 UTC m=+9.522560772" watchObservedRunningTime="2024-10-08 19:32:29.08017207 +0000 UTC m=+10.365070924" Oct 8 19:32:32.527691 kubelet[3361]: W1008 19:32:32.527619 3361 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-19-2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-19-2' and this object Oct 8 19:32:32.528883 kubelet[3361]: E1008 19:32:32.527705 3361 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User 
\"system:node:ip-172-31-19-2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-19-2' and this object" logger="UnhandledError" Oct 8 19:32:32.544084 kubelet[3361]: I1008 19:32:32.542494 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/69a6a54d-6c55-487d-ab98-8f3aad4d97a0-typha-certs\") pod \"calico-typha-8f9b76566-zjsst\" (UID: \"69a6a54d-6c55-487d-ab98-8f3aad4d97a0\") " pod="calico-system/calico-typha-8f9b76566-zjsst" Oct 8 19:32:32.544084 kubelet[3361]: I1008 19:32:32.542673 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcmq2\" (UniqueName: \"kubernetes.io/projected/69a6a54d-6c55-487d-ab98-8f3aad4d97a0-kube-api-access-zcmq2\") pod \"calico-typha-8f9b76566-zjsst\" (UID: \"69a6a54d-6c55-487d-ab98-8f3aad4d97a0\") " pod="calico-system/calico-typha-8f9b76566-zjsst" Oct 8 19:32:32.544084 kubelet[3361]: I1008 19:32:32.542797 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69a6a54d-6c55-487d-ab98-8f3aad4d97a0-tigera-ca-bundle\") pod \"calico-typha-8f9b76566-zjsst\" (UID: \"69a6a54d-6c55-487d-ab98-8f3aad4d97a0\") " pod="calico-system/calico-typha-8f9b76566-zjsst" Oct 8 19:32:32.543900 systemd[1]: Created slice kubepods-besteffort-pod69a6a54d_6c55_487d_ab98_8f3aad4d97a0.slice - libcontainer container kubepods-besteffort-pod69a6a54d_6c55_487d_ab98_8f3aad4d97a0.slice. Oct 8 19:32:32.785039 systemd[1]: Created slice kubepods-besteffort-pod0daeb714_d582_45dd_ac22_0347fcf825d6.slice - libcontainer container kubepods-besteffort-pod0daeb714_d582_45dd_ac22_0347fcf825d6.slice. 
Oct 8 19:32:32.846322 kubelet[3361]: I1008 19:32:32.844532 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0daeb714-d582-45dd-ac22-0347fcf825d6-node-certs\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846322 kubelet[3361]: I1008 19:32:32.844638 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-cni-bin-dir\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846322 kubelet[3361]: I1008 19:32:32.844691 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2m2s\" (UniqueName: \"kubernetes.io/projected/0daeb714-d582-45dd-ac22-0347fcf825d6-kube-api-access-l2m2s\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846322 kubelet[3361]: I1008 19:32:32.844743 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-var-lib-calico\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846322 kubelet[3361]: I1008 19:32:32.844788 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-cni-log-dir\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846922 kubelet[3361]: I1008 19:32:32.844838 3361 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-var-run-calico\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846922 kubelet[3361]: I1008 19:32:32.844874 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-flexvol-driver-host\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846922 kubelet[3361]: I1008 19:32:32.844912 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-cni-net-dir\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846922 kubelet[3361]: I1008 19:32:32.844959 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-lib-modules\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.846922 kubelet[3361]: I1008 19:32:32.845035 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-xtables-lock\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.847389 kubelet[3361]: I1008 19:32:32.845077 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0daeb714-d582-45dd-ac22-0347fcf825d6-policysync\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.847389 kubelet[3361]: I1008 19:32:32.845113 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0daeb714-d582-45dd-ac22-0347fcf825d6-tigera-ca-bundle\") pod \"calico-node-zxpdp\" (UID: \"0daeb714-d582-45dd-ac22-0347fcf825d6\") " pod="calico-system/calico-node-zxpdp" Oct 8 19:32:32.970033 kubelet[3361]: E1008 19:32:32.969388 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2" Oct 8 19:32:32.993676 kubelet[3361]: E1008 19:32:32.991120 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:32.993676 kubelet[3361]: W1008 19:32:32.993382 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:32.997915 kubelet[3361]: E1008 19:32:32.997373 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:32.997915 kubelet[3361]: W1008 19:32:32.997505 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:32.997915 kubelet[3361]: E1008 19:32:32.997578 3361 plugins.go:691] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:32.999253 kubelet[3361]: E1008 19:32:32.994246 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.046341 kubelet[3361]: E1008 19:32:33.044179 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.046341 kubelet[3361]: W1008 19:32:33.044249 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.046341 kubelet[3361]: E1008 19:32:33.045858 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.051314 kubelet[3361]: E1008 19:32:33.051264 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.051747 kubelet[3361]: W1008 19:32:33.051584 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.051747 kubelet[3361]: E1008 19:32:33.051632 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.052436 kubelet[3361]: E1008 19:32:33.052400 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.052781 kubelet[3361]: W1008 19:32:33.052622 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.052781 kubelet[3361]: E1008 19:32:33.052670 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.054431 kubelet[3361]: E1008 19:32:33.054284 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.054944 kubelet[3361]: W1008 19:32:33.054699 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.054944 kubelet[3361]: E1008 19:32:33.054870 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.057619 kubelet[3361]: E1008 19:32:33.057573 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.058356 kubelet[3361]: W1008 19:32:33.057869 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.058356 kubelet[3361]: E1008 19:32:33.057939 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.060033 kubelet[3361]: E1008 19:32:33.058757 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.060576 kubelet[3361]: W1008 19:32:33.060215 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.060576 kubelet[3361]: E1008 19:32:33.060273 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.060576 kubelet[3361]: I1008 19:32:33.060322 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4da6ef7d-86ec-4a74-b95b-c04030a59fa2-varrun\") pod \"csi-node-driver-svwzg\" (UID: \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\") " pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:33.061599 kubelet[3361]: E1008 19:32:33.061506 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.062357 kubelet[3361]: W1008 19:32:33.062097 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.062357 kubelet[3361]: E1008 19:32:33.062307 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.063337 kubelet[3361]: E1008 19:32:33.063169 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.063337 kubelet[3361]: W1008 19:32:33.063203 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.063337 kubelet[3361]: E1008 19:32:33.063259 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.065780 kubelet[3361]: E1008 19:32:33.065554 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.065780 kubelet[3361]: W1008 19:32:33.065589 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.066712 kubelet[3361]: E1008 19:32:33.066385 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.067397 kubelet[3361]: E1008 19:32:33.066676 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.067853 kubelet[3361]: W1008 19:32:33.067232 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.068223 kubelet[3361]: E1008 19:32:33.067810 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.072248 kubelet[3361]: E1008 19:32:33.071518 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.072248 kubelet[3361]: W1008 19:32:33.071588 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.072248 kubelet[3361]: E1008 19:32:33.071667 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.073390 kubelet[3361]: E1008 19:32:33.073149 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.073390 kubelet[3361]: W1008 19:32:33.073183 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.073390 kubelet[3361]: E1008 19:32:33.073216 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.076825 kubelet[3361]: E1008 19:32:33.076505 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.076825 kubelet[3361]: W1008 19:32:33.076646 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.077420 kubelet[3361]: E1008 19:32:33.076688 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.080817 kubelet[3361]: E1008 19:32:33.080627 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.081532 kubelet[3361]: W1008 19:32:33.080673 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.081532 kubelet[3361]: E1008 19:32:33.081370 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.083122 kubelet[3361]: E1008 19:32:33.082585 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.083671 kubelet[3361]: W1008 19:32:33.083069 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.083868 kubelet[3361]: E1008 19:32:33.083482 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.084529 kubelet[3361]: E1008 19:32:33.084415 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.084529 kubelet[3361]: W1008 19:32:33.084466 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.084529 kubelet[3361]: E1008 19:32:33.084495 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.085870 kubelet[3361]: E1008 19:32:33.085635 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.085870 kubelet[3361]: W1008 19:32:33.085691 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.085870 kubelet[3361]: E1008 19:32:33.085723 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.087028 kubelet[3361]: E1008 19:32:33.086764 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.087028 kubelet[3361]: W1008 19:32:33.086795 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.087028 kubelet[3361]: E1008 19:32:33.086895 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.089451 kubelet[3361]: E1008 19:32:33.088370 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.089451 kubelet[3361]: W1008 19:32:33.088403 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.089451 kubelet[3361]: E1008 19:32:33.088434 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.090325 kubelet[3361]: E1008 19:32:33.090068 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.090325 kubelet[3361]: W1008 19:32:33.090125 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.090325 kubelet[3361]: E1008 19:32:33.090162 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.091369 kubelet[3361]: E1008 19:32:33.091209 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.091369 kubelet[3361]: W1008 19:32:33.091281 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.091894 kubelet[3361]: E1008 19:32:33.091336 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.092842 kubelet[3361]: E1008 19:32:33.092549 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.092842 kubelet[3361]: W1008 19:32:33.092580 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.092842 kubelet[3361]: E1008 19:32:33.092610 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.094838 kubelet[3361]: E1008 19:32:33.094390 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.094838 kubelet[3361]: W1008 19:32:33.094430 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.094838 kubelet[3361]: E1008 19:32:33.094462 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.096117 kubelet[3361]: E1008 19:32:33.095500 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.096117 kubelet[3361]: W1008 19:32:33.095545 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.096117 kubelet[3361]: E1008 19:32:33.095599 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.097603 kubelet[3361]: E1008 19:32:33.096473 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.097603 kubelet[3361]: W1008 19:32:33.096500 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.097603 kubelet[3361]: E1008 19:32:33.096528 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.098855 containerd[2024]: time="2024-10-08T19:32:33.098790482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxpdp,Uid:0daeb714-d582-45dd-ac22-0347fcf825d6,Namespace:calico-system,Attempt:0,}" Oct 8 19:32:33.179289 kubelet[3361]: E1008 19:32:33.179210 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.179289 kubelet[3361]: W1008 19:32:33.179271 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.179289 kubelet[3361]: E1008 19:32:33.179320 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.179289 kubelet[3361]: I1008 19:32:33.179397 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4da6ef7d-86ec-4a74-b95b-c04030a59fa2-registration-dir\") pod \"csi-node-driver-svwzg\" (UID: \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\") " pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:33.185336 kubelet[3361]: E1008 19:32:33.185234 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.185336 kubelet[3361]: W1008 19:32:33.185313 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.185336 kubelet[3361]: E1008 19:32:33.185392 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.187220 kubelet[3361]: E1008 19:32:33.187173 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.187220 kubelet[3361]: W1008 19:32:33.187206 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.189161 kubelet[3361]: E1008 19:32:33.187703 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.189161 kubelet[3361]: I1008 19:32:33.188182 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4da6ef7d-86ec-4a74-b95b-c04030a59fa2-kubelet-dir\") pod \"csi-node-driver-svwzg\" (UID: \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\") " pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:33.190921 kubelet[3361]: E1008 19:32:33.190629 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.190921 kubelet[3361]: W1008 19:32:33.190697 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.190921 kubelet[3361]: E1008 19:32:33.190784 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.192853 kubelet[3361]: E1008 19:32:33.192717 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.192853 kubelet[3361]: W1008 19:32:33.192791 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.193430 kubelet[3361]: E1008 19:32:33.193091 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.196903 kubelet[3361]: E1008 19:32:33.196816 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.196903 kubelet[3361]: W1008 19:32:33.196872 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.197887 kubelet[3361]: E1008 19:32:33.197105 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.200536 kubelet[3361]: E1008 19:32:33.200429 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.200536 kubelet[3361]: W1008 19:32:33.200508 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.201939 kubelet[3361]: E1008 19:32:33.200597 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.203624 kubelet[3361]: E1008 19:32:33.203548 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.203624 kubelet[3361]: W1008 19:32:33.203607 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.204225 kubelet[3361]: E1008 19:32:33.203805 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.205766 kubelet[3361]: E1008 19:32:33.205691 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.205766 kubelet[3361]: W1008 19:32:33.205746 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.206595 kubelet[3361]: E1008 19:32:33.206303 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.206595 kubelet[3361]: I1008 19:32:33.206374 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4da6ef7d-86ec-4a74-b95b-c04030a59fa2-socket-dir\") pod \"csi-node-driver-svwzg\" (UID: \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\") " pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:33.208372 kubelet[3361]: E1008 19:32:33.208245 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.208372 kubelet[3361]: W1008 19:32:33.208295 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.208372 kubelet[3361]: E1008 19:32:33.208363 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.209221 kubelet[3361]: E1008 19:32:33.208875 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.209221 kubelet[3361]: W1008 19:32:33.208903 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.209221 kubelet[3361]: E1008 19:32:33.208978 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.213595 kubelet[3361]: E1008 19:32:33.213537 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.213595 kubelet[3361]: W1008 19:32:33.213580 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.214388 kubelet[3361]: E1008 19:32:33.213750 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.214388 kubelet[3361]: I1008 19:32:33.213864 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdgcq\" (UniqueName: \"kubernetes.io/projected/4da6ef7d-86ec-4a74-b95b-c04030a59fa2-kube-api-access-jdgcq\") pod \"csi-node-driver-svwzg\" (UID: \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\") " pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:33.214388 kubelet[3361]: E1008 19:32:33.214278 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.214388 kubelet[3361]: W1008 19:32:33.214309 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.215194 kubelet[3361]: E1008 19:32:33.214703 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.215194 kubelet[3361]: W1008 19:32:33.214723 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" Oct 8 19:32:33.215338 kubelet[3361]: E1008 19:32:33.215209 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.215338 kubelet[3361]: W1008 19:32:33.215235 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.215338 kubelet[3361]: E1008 19:32:33.215269 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.216850 kubelet[3361]: E1008 19:32:33.215802 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.216850 kubelet[3361]: W1008 19:32:33.215856 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.216850 kubelet[3361]: E1008 19:32:33.215920 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.216850 kubelet[3361]: E1008 19:32:33.215957 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.216850 kubelet[3361]: E1008 19:32:33.216020 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.218801 kubelet[3361]: E1008 19:32:33.218593 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.219755 kubelet[3361]: W1008 19:32:33.219442 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.219755 kubelet[3361]: E1008 19:32:33.219685 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.225646 containerd[2024]: time="2024-10-08T19:32:33.222967323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:32:33.225646 containerd[2024]: time="2024-10-08T19:32:33.223271019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:33.225646 containerd[2024]: time="2024-10-08T19:32:33.223337223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:32:33.225646 containerd[2024]: time="2024-10-08T19:32:33.223375095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:32:33.226557 kubelet[3361]: E1008 19:32:33.225318 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.226557 kubelet[3361]: W1008 19:32:33.225395 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.226557 kubelet[3361]: E1008 19:32:33.225462 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.319933 kubelet[3361]: E1008 19:32:33.319563 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.320584 kubelet[3361]: W1008 19:32:33.320534 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.322731 kubelet[3361]: E1008 19:32:33.321614 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.323439 kubelet[3361]: E1008 19:32:33.323395 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.323724 kubelet[3361]: W1008 19:32:33.323665 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.323963 kubelet[3361]: E1008 19:32:33.323911 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.324713 systemd[1]: Started cri-containerd-490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21.scope - libcontainer container 490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21. Oct 8 19:32:33.329398 kubelet[3361]: E1008 19:32:33.328960 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.329398 kubelet[3361]: W1008 19:32:33.329063 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.329398 kubelet[3361]: E1008 19:32:33.329120 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.333373 kubelet[3361]: E1008 19:32:33.332569 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.333373 kubelet[3361]: W1008 19:32:33.332614 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.333373 kubelet[3361]: E1008 19:32:33.332651 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.337114 kubelet[3361]: E1008 19:32:33.336105 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.337114 kubelet[3361]: W1008 19:32:33.336316 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.337114 kubelet[3361]: E1008 19:32:33.336929 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.340598 kubelet[3361]: E1008 19:32:33.339929 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.340598 kubelet[3361]: W1008 19:32:33.340460 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.341414 kubelet[3361]: E1008 19:32:33.340857 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.343953 kubelet[3361]: E1008 19:32:33.343580 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.343953 kubelet[3361]: W1008 19:32:33.343626 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.344618 kubelet[3361]: E1008 19:32:33.344218 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.347289 kubelet[3361]: E1008 19:32:33.346720 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.347289 kubelet[3361]: W1008 19:32:33.346757 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.347289 kubelet[3361]: E1008 19:32:33.346885 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.350366 kubelet[3361]: E1008 19:32:33.349733 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.350366 kubelet[3361]: W1008 19:32:33.349914 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.350366 kubelet[3361]: E1008 19:32:33.350024 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.353944 kubelet[3361]: E1008 19:32:33.353140 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.353944 kubelet[3361]: W1008 19:32:33.353455 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.355069 kubelet[3361]: E1008 19:32:33.353762 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.356852 kubelet[3361]: E1008 19:32:33.356299 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.356852 kubelet[3361]: W1008 19:32:33.356357 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.356852 kubelet[3361]: E1008 19:32:33.356464 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.358289 kubelet[3361]: E1008 19:32:33.358013 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.358289 kubelet[3361]: W1008 19:32:33.358130 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.358289 kubelet[3361]: E1008 19:32:33.358256 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.360223 kubelet[3361]: E1008 19:32:33.359618 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.360223 kubelet[3361]: W1008 19:32:33.359767 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.360495 kubelet[3361]: E1008 19:32:33.360253 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.362172 kubelet[3361]: E1008 19:32:33.361616 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.362172 kubelet[3361]: W1008 19:32:33.361676 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.363510 kubelet[3361]: E1008 19:32:33.362642 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.364274 kubelet[3361]: E1008 19:32:33.364020 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.366502 kubelet[3361]: W1008 19:32:33.364058 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.368857 kubelet[3361]: E1008 19:32:33.368587 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 8 19:32:33.369885 kubelet[3361]: E1008 19:32:33.369260 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.369885 kubelet[3361]: W1008 19:32:33.369309 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.374700 kubelet[3361]: E1008 19:32:33.373421 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 8 19:32:33.375107 kubelet[3361]: E1008 19:32:33.374956 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 8 19:32:33.375107 kubelet[3361]: W1008 19:32:33.375023 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 8 19:32:33.375646 kubelet[3361]: E1008 19:32:33.375472 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Oct 8 19:32:33.377294 kubelet[3361]: E1008 19:32:33.376979 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.377294 kubelet[3361]: W1008 19:32:33.377043 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.378236 kubelet[3361]: E1008 19:32:33.377526 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.381051 kubelet[3361]: E1008 19:32:33.379563 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.381772 kubelet[3361]: W1008 19:32:33.380900 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.382425 kubelet[3361]: E1008 19:32:33.382274 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.382425 kubelet[3361]: E1008 19:32:33.382353 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.382425 kubelet[3361]: W1008 19:32:33.382377 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.382918 kubelet[3361]: E1008 19:32:33.382636 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.384854 kubelet[3361]: E1008 19:32:33.384698 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.384854 kubelet[3361]: W1008 19:32:33.384734 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.384854 kubelet[3361]: E1008 19:32:33.384810 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.423614 kubelet[3361]: E1008 19:32:33.423321 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.423614 kubelet[3361]: W1008 19:32:33.423367 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.423614 kubelet[3361]: E1008 19:32:33.423417 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.446231 kubelet[3361]: E1008 19:32:33.445975 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.446231 kubelet[3361]: W1008 19:32:33.446073 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.446231 kubelet[3361]: E1008 19:32:33.446110 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.482193 containerd[2024]: time="2024-10-08T19:32:33.481854244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxpdp,Uid:0daeb714-d582-45dd-ac22-0347fcf825d6,Namespace:calico-system,Attempt:0,} returns sandbox id \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\""
Oct 8 19:32:33.489409 containerd[2024]: time="2024-10-08T19:32:33.489179824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\""
Oct 8 19:32:33.548528 kubelet[3361]: E1008 19:32:33.548472 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.549189 kubelet[3361]: W1008 19:32:33.548754 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.549189 kubelet[3361]: E1008 19:32:33.548794 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.645315 kubelet[3361]: E1008 19:32:33.645176 3361 secret.go:188] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition
Oct 8 19:32:33.645434 kubelet[3361]: E1008 19:32:33.645316 3361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/69a6a54d-6c55-487d-ab98-8f3aad4d97a0-typha-certs podName:69a6a54d-6c55-487d-ab98-8f3aad4d97a0 nodeName:}" failed. No retries permitted until 2024-10-08 19:32:34.145271013 +0000 UTC m=+15.430169867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/69a6a54d-6c55-487d-ab98-8f3aad4d97a0-typha-certs") pod "calico-typha-8f9b76566-zjsst" (UID: "69a6a54d-6c55-487d-ab98-8f3aad4d97a0") : failed to sync secret cache: timed out waiting for the condition
Oct 8 19:32:33.649660 kubelet[3361]: E1008 19:32:33.649608 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.649660 kubelet[3361]: W1008 19:32:33.649647 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.649873 kubelet[3361]: E1008 19:32:33.649680 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.752459 kubelet[3361]: E1008 19:32:33.752299 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.752459 kubelet[3361]: W1008 19:32:33.752337 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.752459 kubelet[3361]: E1008 19:32:33.752371 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.854108 kubelet[3361]: E1008 19:32:33.854057 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.854108 kubelet[3361]: W1008 19:32:33.854103 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.854730 kubelet[3361]: E1008 19:32:33.854141 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:33.955864 kubelet[3361]: E1008 19:32:33.955578 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:33.955864 kubelet[3361]: W1008 19:32:33.955744 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:33.956376 kubelet[3361]: E1008 19:32:33.956337 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.058929 kubelet[3361]: E1008 19:32:34.058673 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.058929 kubelet[3361]: W1008 19:32:34.058727 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.058929 kubelet[3361]: E1008 19:32:34.058780 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.160185 kubelet[3361]: E1008 19:32:34.160119 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.160185 kubelet[3361]: W1008 19:32:34.160171 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.160420 kubelet[3361]: E1008 19:32:34.160214 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.162438 kubelet[3361]: E1008 19:32:34.162362 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.162438 kubelet[3361]: W1008 19:32:34.162425 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.162653 kubelet[3361]: E1008 19:32:34.162483 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.163167 kubelet[3361]: E1008 19:32:34.163093 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.163167 kubelet[3361]: W1008 19:32:34.163146 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.163514 kubelet[3361]: E1008 19:32:34.163211 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.165041 kubelet[3361]: E1008 19:32:34.164831 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.165183 kubelet[3361]: W1008 19:32:34.165031 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.165183 kubelet[3361]: E1008 19:32:34.165120 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.167611 kubelet[3361]: E1008 19:32:34.167529 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.167611 kubelet[3361]: W1008 19:32:34.167592 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.167786 kubelet[3361]: E1008 19:32:34.167630 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.197194 kubelet[3361]: E1008 19:32:34.193457 3361 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Oct 8 19:32:34.197194 kubelet[3361]: W1008 19:32:34.193599 3361 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Oct 8 19:32:34.197194 kubelet[3361]: E1008 19:32:34.193761 3361 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 8 19:32:34.362393 containerd[2024]: time="2024-10-08T19:32:34.361335232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8f9b76566-zjsst,Uid:69a6a54d-6c55-487d-ab98-8f3aad4d97a0,Namespace:calico-system,Attempt:0,}"
Oct 8 19:32:34.455300 containerd[2024]: time="2024-10-08T19:32:34.454410305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 8 19:32:34.455300 containerd[2024]: time="2024-10-08T19:32:34.454576205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:32:34.455300 containerd[2024]: time="2024-10-08T19:32:34.454628945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 8 19:32:34.455300 containerd[2024]: time="2024-10-08T19:32:34.454752341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 8 19:32:34.542972 systemd[1]: Started cri-containerd-0e876119b3c98b5bba05f69e1b0e887ba6918240f8cecbce597fca19e716655a.scope - libcontainer container 0e876119b3c98b5bba05f69e1b0e887ba6918240f8cecbce597fca19e716655a.
Oct 8 19:32:34.887056 containerd[2024]: time="2024-10-08T19:32:34.884977147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:34.893645 containerd[2024]: time="2024-10-08T19:32:34.893326207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1: active requests=0, bytes read=4916957"
Oct 8 19:32:34.897170 containerd[2024]: time="2024-10-08T19:32:34.895860475Z" level=info msg="ImageCreate event name:\"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:34.912482 containerd[2024]: time="2024-10-08T19:32:34.910718095Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:34.921777 containerd[2024]: time="2024-10-08T19:32:34.921635107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" with image id \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:7938ad0cb2b49a32937962cc40dd826ad5858999c603bdf5fbf2092a4d50cf01\", size \"6284436\" in 1.432237927s"
Oct 8 19:32:34.921777 containerd[2024]: time="2024-10-08T19:32:34.921757339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.1\" returns image reference \"sha256:20b54f73684933653d4a4b8b63c59211e3c828f94251ecf4d1bff2a334ff4ba0\""
Oct 8 19:32:34.931507 containerd[2024]: time="2024-10-08T19:32:34.931229251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8f9b76566-zjsst,Uid:69a6a54d-6c55-487d-ab98-8f3aad4d97a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e876119b3c98b5bba05f69e1b0e887ba6918240f8cecbce597fca19e716655a\""
Oct 8 19:32:34.936931 containerd[2024]: time="2024-10-08T19:32:34.936579331Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Oct 8 19:32:34.943965 containerd[2024]: time="2024-10-08T19:32:34.943456879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\""
Oct 8 19:32:34.988432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1197941233.mount: Deactivated successfully.
Oct 8 19:32:34.993935 containerd[2024]: time="2024-10-08T19:32:34.993226664Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd\""
Oct 8 19:32:34.996484 containerd[2024]: time="2024-10-08T19:32:34.996219236Z" level=info msg="StartContainer for \"e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd\""
Oct 8 19:32:35.057060 kubelet[3361]: E1008 19:32:35.054578 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:35.134127 systemd[1]: run-containerd-runc-k8s.io-e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd-runc.ATsRrj.mount: Deactivated successfully.
Oct 8 19:32:35.156431 systemd[1]: Started cri-containerd-e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd.scope - libcontainer container e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd.
Oct 8 19:32:35.315020 containerd[2024]: time="2024-10-08T19:32:35.314806301Z" level=info msg="StartContainer for \"e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd\" returns successfully"
Oct 8 19:32:35.428835 systemd[1]: cri-containerd-e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd.scope: Deactivated successfully.
Oct 8 19:32:35.702229 containerd[2024]: time="2024-10-08T19:32:35.698744647Z" level=info msg="shim disconnected" id=e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd namespace=k8s.io
Oct 8 19:32:35.702229 containerd[2024]: time="2024-10-08T19:32:35.699341083Z" level=warning msg="cleaning up after shim disconnected" id=e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd namespace=k8s.io
Oct 8 19:32:35.702229 containerd[2024]: time="2024-10-08T19:32:35.699419575Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:32:35.972033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9f1db406045fc017ac88805761c092a943a3b70ee4f4a4cf71c5d19be2e4bfd-rootfs.mount: Deactivated successfully.
Oct 8 19:32:37.056204 kubelet[3361]: E1008 19:32:37.053945 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:37.889012 containerd[2024]: time="2024-10-08T19:32:37.887367286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:37.890889 containerd[2024]: time="2024-10-08T19:32:37.890664322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.1: active requests=0, bytes read=27474479"
Oct 8 19:32:37.892160 containerd[2024]: time="2024-10-08T19:32:37.891887398Z" level=info msg="ImageCreate event name:\"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:37.901611 containerd[2024]: time="2024-10-08T19:32:37.901535938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:37.904421 containerd[2024]: time="2024-10-08T19:32:37.903150682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.1\" with image id \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d97114d8e1e5186f1180fc8ef5f1309e0a8bf97efce35e0a0223d057d78d95fb\", size \"28841990\" in 2.959610991s"
Oct 8 19:32:37.904421 containerd[2024]: time="2024-10-08T19:32:37.903221170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.1\" returns image reference \"sha256:c1d0081df1580fc17ebf95ca7499d2e1af1b1ab8c75835172213221419018924\""
Oct 8 19:32:37.909541 containerd[2024]: time="2024-10-08T19:32:37.909488014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\""
Oct 8 19:32:37.965135 containerd[2024]: time="2024-10-08T19:32:37.965042194Z" level=info msg="CreateContainer within sandbox \"0e876119b3c98b5bba05f69e1b0e887ba6918240f8cecbce597fca19e716655a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Oct 8 19:32:38.017881 containerd[2024]: time="2024-10-08T19:32:38.017782147Z" level=info msg="CreateContainer within sandbox \"0e876119b3c98b5bba05f69e1b0e887ba6918240f8cecbce597fca19e716655a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b3f96dd5e1cf1a24bcdea19fa2819a855fe06ae03b1648296063541dea1c568d\""
Oct 8 19:32:38.023305 containerd[2024]: time="2024-10-08T19:32:38.022414219Z" level=info msg="StartContainer for \"b3f96dd5e1cf1a24bcdea19fa2819a855fe06ae03b1648296063541dea1c568d\""
Oct 8 19:32:38.158923 systemd[1]: Started cri-containerd-b3f96dd5e1cf1a24bcdea19fa2819a855fe06ae03b1648296063541dea1c568d.scope - libcontainer container b3f96dd5e1cf1a24bcdea19fa2819a855fe06ae03b1648296063541dea1c568d.
Oct 8 19:32:38.381027 containerd[2024]: time="2024-10-08T19:32:38.377605304Z" level=info msg="StartContainer for \"b3f96dd5e1cf1a24bcdea19fa2819a855fe06ae03b1648296063541dea1c568d\" returns successfully"
Oct 8 19:32:39.058365 kubelet[3361]: E1008 19:32:39.058234 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:39.443142 kubelet[3361]: I1008 19:32:39.439496 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8f9b76566-zjsst" podStartSLOduration=4.472668607 podStartE2EDuration="7.439460494s" podCreationTimestamp="2024-10-08 19:32:32 +0000 UTC" firstStartedPulling="2024-10-08 19:32:34.939870067 +0000 UTC m=+16.224768909" lastFinishedPulling="2024-10-08 19:32:37.90666193 +0000 UTC m=+19.191560796" observedRunningTime="2024-10-08 19:32:39.436448578 +0000 UTC m=+20.721347444" watchObservedRunningTime="2024-10-08 19:32:39.439460494 +0000 UTC m=+20.724359336"
Oct 8 19:32:40.384685 kubelet[3361]: I1008 19:32:40.383133 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:32:41.054407 kubelet[3361]: E1008 19:32:41.054166 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:43.055883 kubelet[3361]: E1008 19:32:43.054158 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:43.703157 containerd[2024]: time="2024-10-08T19:32:43.703094919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:43.705866 containerd[2024]: time="2024-10-08T19:32:43.705705219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.1: active requests=0, bytes read=86859887"
Oct 8 19:32:43.707217 containerd[2024]: time="2024-10-08T19:32:43.707109231Z" level=info msg="ImageCreate event name:\"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:43.720883 containerd[2024]: time="2024-10-08T19:32:43.720798831Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 8 19:32:43.723397 containerd[2024]: time="2024-10-08T19:32:43.723076575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.1\" with image id \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:1cf32b2159ec9f938e747b82b9b7c74e26e17eb220e002a6a1bd6b5b1266e1fa\", size \"88227406\" in 5.812889813s"
Oct 8 19:32:43.723397 containerd[2024]: time="2024-10-08T19:32:43.723168699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.1\" returns image reference \"sha256:6123e515001d9cafdf3dbe8f8dc8b5ae1c56165013052b8cbc7d27f3395cfd85\""
Oct 8 19:32:43.728356 containerd[2024]: time="2024-10-08T19:32:43.728295903Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Oct 8 19:32:43.762051 containerd[2024]: time="2024-10-08T19:32:43.761918451Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924\""
Oct 8 19:32:43.764944 containerd[2024]: time="2024-10-08T19:32:43.763908423Z" level=info msg="StartContainer for \"610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924\""
Oct 8 19:32:43.846959 systemd[1]: run-containerd-runc-k8s.io-610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924-runc.OGstNd.mount: Deactivated successfully.
Oct 8 19:32:43.861565 systemd[1]: Started cri-containerd-610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924.scope - libcontainer container 610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924.
Oct 8 19:32:43.933652 containerd[2024]: time="2024-10-08T19:32:43.933442300Z" level=info msg="StartContainer for \"610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924\" returns successfully"
Oct 8 19:32:45.048375 containerd[2024]: time="2024-10-08T19:32:45.048296846Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 8 19:32:45.055144 kubelet[3361]: E1008 19:32:45.055059 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2"
Oct 8 19:32:45.058638 systemd[1]: cri-containerd-610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924.scope: Deactivated successfully.
Oct 8 19:32:45.062500 systemd[1]: cri-containerd-610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924.scope: Consumed 1.020s CPU time.
Oct 8 19:32:45.127260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924-rootfs.mount: Deactivated successfully.
Oct 8 19:32:45.141498 kubelet[3361]: I1008 19:32:45.139879 3361 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Oct 8 19:32:45.251528 systemd[1]: Created slice kubepods-besteffort-pod5dc824f0_1019_4436_8135_184fe45fe379.slice - libcontainer container kubepods-besteffort-pod5dc824f0_1019_4436_8135_184fe45fe379.slice.
Oct 8 19:32:45.274143 kubelet[3361]: I1008 19:32:45.272028 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf9af281-d393-45ad-bbb2-55616503d436-config-volume\") pod \"coredns-6f6b679f8f-5gqd7\" (UID: \"bf9af281-d393-45ad-bbb2-55616503d436\") " pod="kube-system/coredns-6f6b679f8f-5gqd7"
Oct 8 19:32:45.274143 kubelet[3361]: I1008 19:32:45.272167 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltz9l\" (UniqueName: \"kubernetes.io/projected/bf9af281-d393-45ad-bbb2-55616503d436-kube-api-access-ltz9l\") pod \"coredns-6f6b679f8f-5gqd7\" (UID: \"bf9af281-d393-45ad-bbb2-55616503d436\") " pod="kube-system/coredns-6f6b679f8f-5gqd7"
Oct 8 19:32:45.274143 kubelet[3361]: I1008 19:32:45.272249 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbwn9\" (UniqueName: \"kubernetes.io/projected/5dc824f0-1019-4436-8135-184fe45fe379-kube-api-access-dbwn9\") pod \"calico-kube-controllers-858bc65f54-9m4dk\" (UID: \"5dc824f0-1019-4436-8135-184fe45fe379\") " pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk"
Oct 8 19:32:45.281346 kubelet[3361]: I1008 19:32:45.272317 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5dc824f0-1019-4436-8135-184fe45fe379-tigera-ca-bundle\") pod \"calico-kube-controllers-858bc65f54-9m4dk\" (UID: \"5dc824f0-1019-4436-8135-184fe45fe379\") " pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk"
Oct 8 19:32:45.287355 systemd[1]: Created slice kubepods-burstable-podbf9af281_d393_45ad_bbb2_55616503d436.slice - libcontainer container kubepods-burstable-podbf9af281_d393_45ad_bbb2_55616503d436.slice.
Oct 8 19:32:45.316139 systemd[1]: Created slice kubepods-burstable-podd686f5b9_ae61_4c98_9562_0d2b1ca6daa8.slice - libcontainer container kubepods-burstable-podd686f5b9_ae61_4c98_9562_0d2b1ca6daa8.slice.
Oct 8 19:32:45.387027 kubelet[3361]: I1008 19:32:45.382087 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d686f5b9-ae61-4c98-9562-0d2b1ca6daa8-config-volume\") pod \"coredns-6f6b679f8f-msl8r\" (UID: \"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8\") " pod="kube-system/coredns-6f6b679f8f-msl8r"
Oct 8 19:32:45.387027 kubelet[3361]: I1008 19:32:45.382163 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd2wm\" (UniqueName: \"kubernetes.io/projected/d686f5b9-ae61-4c98-9562-0d2b1ca6daa8-kube-api-access-vd2wm\") pod \"coredns-6f6b679f8f-msl8r\" (UID: \"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8\") " pod="kube-system/coredns-6f6b679f8f-msl8r"
Oct 8 19:32:45.566698 containerd[2024]: time="2024-10-08T19:32:45.566586520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bc65f54-9m4dk,Uid:5dc824f0-1019-4436-8135-184fe45fe379,Namespace:calico-system,Attempt:0,}"
Oct 8 19:32:45.613844 containerd[2024]: time="2024-10-08T19:32:45.613588024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5gqd7,Uid:bf9af281-d393-45ad-bbb2-55616503d436,Namespace:kube-system,Attempt:0,}"
Oct 8 19:32:45.638272 containerd[2024]: time="2024-10-08T19:32:45.638184089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-msl8r,Uid:d686f5b9-ae61-4c98-9562-0d2b1ca6daa8,Namespace:kube-system,Attempt:0,}"
Oct 8 19:32:46.176405 containerd[2024]: time="2024-10-08T19:32:46.176159271Z" level=info msg="shim disconnected" id=610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924 namespace=k8s.io
Oct 8 19:32:46.176405 containerd[2024]: time="2024-10-08T19:32:46.176262063Z" level=warning msg="cleaning up after shim disconnected" id=610094e72d961f72e99f613310108dc429d7bf7f999551ec1d243cb55ea7e924 namespace=k8s.io
Oct 8 19:32:46.176405 containerd[2024]: time="2024-10-08T19:32:46.176283351Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:32:46.437067 containerd[2024]: time="2024-10-08T19:32:46.436387972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\""
Oct 8 19:32:46.531552 containerd[2024]: time="2024-10-08T19:32:46.531434393Z" level=error msg="Failed to destroy network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.534826 containerd[2024]: time="2024-10-08T19:32:46.534736937Z" level=error msg="encountered an error cleaning up failed sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.535104 containerd[2024]: time="2024-10-08T19:32:46.534851153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5gqd7,Uid:bf9af281-d393-45ad-bbb2-55616503d436,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.535300 kubelet[3361]: E1008 19:32:46.535219 3361 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.536189 kubelet[3361]: E1008 19:32:46.535350 3361 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5gqd7"
Oct 8 19:32:46.536189 kubelet[3361]: E1008 19:32:46.535389 3361 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-5gqd7"
Oct 8 19:32:46.536189 kubelet[3361]: E1008 19:32:46.535464 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-5gqd7_kube-system(bf9af281-d393-45ad-bbb2-55616503d436)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-5gqd7_kube-system(bf9af281-d393-45ad-bbb2-55616503d436)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5gqd7" podUID="bf9af281-d393-45ad-bbb2-55616503d436"
Oct 8 19:32:46.569082 containerd[2024]: time="2024-10-08T19:32:46.568970501Z" level=error msg="Failed to destroy network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.570629 containerd[2024]: time="2024-10-08T19:32:46.570546521Z" level=error msg="encountered an error cleaning up failed sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Oct 8 19:32:46.570962 containerd[2024]: time="2024-10-08T19:32:46.570773801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bc65f54-9m4dk,Uid:5dc824f0-1019-4436-8135-184fe45fe379,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check
that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.572305 kubelet[3361]: E1008 19:32:46.571448 3361 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.572305 kubelet[3361]: E1008 19:32:46.571544 3361 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk" Oct 8 19:32:46.572305 kubelet[3361]: E1008 19:32:46.571579 3361 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk" Oct 8 19:32:46.572757 kubelet[3361]: E1008 19:32:46.571678 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-858bc65f54-9m4dk_calico-system(5dc824f0-1019-4436-8135-184fe45fe379)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-858bc65f54-9m4dk_calico-system(5dc824f0-1019-4436-8135-184fe45fe379)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk" podUID="5dc824f0-1019-4436-8135-184fe45fe379" Oct 8 19:32:46.576170 containerd[2024]: time="2024-10-08T19:32:46.576097829Z" level=error msg="Failed to destroy network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.577528 containerd[2024]: time="2024-10-08T19:32:46.577383617Z" level=error msg="encountered an error cleaning up failed sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.577690 containerd[2024]: time="2024-10-08T19:32:46.577603565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-msl8r,Uid:d686f5b9-ae61-4c98-9562-0d2b1ca6daa8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.578290 kubelet[3361]: E1008 19:32:46.578068 3361 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:46.578290 kubelet[3361]: E1008 19:32:46.578180 3361 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-msl8r" Oct 8 19:32:46.578290 kubelet[3361]: E1008 19:32:46.578218 3361 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-msl8r" Oct 8 19:32:46.578563 kubelet[3361]: E1008 19:32:46.578292 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-msl8r_kube-system(d686f5b9-ae61-4c98-9562-0d2b1ca6daa8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-msl8r_kube-system(d686f5b9-ae61-4c98-9562-0d2b1ca6daa8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-msl8r" 
podUID="d686f5b9-ae61-4c98-9562-0d2b1ca6daa8" Oct 8 19:32:47.071376 systemd[1]: Created slice kubepods-besteffort-pod4da6ef7d_86ec_4a74_b95b_c04030a59fa2.slice - libcontainer container kubepods-besteffort-pod4da6ef7d_86ec_4a74_b95b_c04030a59fa2.slice. Oct 8 19:32:47.078070 containerd[2024]: time="2024-10-08T19:32:47.077813668Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-svwzg,Uid:4da6ef7d-86ec-4a74-b95b-c04030a59fa2,Namespace:calico-system,Attempt:0,}" Oct 8 19:32:47.191741 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c-shm.mount: Deactivated successfully. Oct 8 19:32:47.192502 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1-shm.mount: Deactivated successfully. Oct 8 19:32:47.193190 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4-shm.mount: Deactivated successfully. 
Oct 8 19:32:47.232404 containerd[2024]: time="2024-10-08T19:32:47.232311424Z" level=error msg="Failed to destroy network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.237194 containerd[2024]: time="2024-10-08T19:32:47.234280648Z" level=error msg="encountered an error cleaning up failed sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.237194 containerd[2024]: time="2024-10-08T19:32:47.234386416Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-svwzg,Uid:4da6ef7d-86ec-4a74-b95b-c04030a59fa2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.237364 kubelet[3361]: E1008 19:32:47.234817 3361 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.237364 kubelet[3361]: E1008 19:32:47.234926 3361 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:47.237364 kubelet[3361]: E1008 19:32:47.234981 3361 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-svwzg" Oct 8 19:32:47.237577 kubelet[3361]: E1008 19:32:47.235144 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-svwzg_calico-system(4da6ef7d-86ec-4a74-b95b-c04030a59fa2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-svwzg_calico-system(4da6ef7d-86ec-4a74-b95b-c04030a59fa2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2" Oct 8 19:32:47.242689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b-shm.mount: Deactivated successfully. 
Oct 8 19:32:47.436514 kubelet[3361]: I1008 19:32:47.435600 3361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:32:47.438332 containerd[2024]: time="2024-10-08T19:32:47.438211721Z" level=info msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" Oct 8 19:32:47.438912 containerd[2024]: time="2024-10-08T19:32:47.438661181Z" level=info msg="Ensure that sandbox 1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c in task-service has been cleanup successfully" Oct 8 19:32:47.441398 kubelet[3361]: I1008 19:32:47.441154 3361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:32:47.443047 containerd[2024]: time="2024-10-08T19:32:47.442951301Z" level=info msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" Oct 8 19:32:47.443777 containerd[2024]: time="2024-10-08T19:32:47.443498813Z" level=info msg="Ensure that sandbox e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4 in task-service has been cleanup successfully" Oct 8 19:32:47.447400 kubelet[3361]: I1008 19:32:47.447323 3361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:47.455301 containerd[2024]: time="2024-10-08T19:32:47.454605954Z" level=info msg="StopPodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" Oct 8 19:32:47.455788 kubelet[3361]: I1008 19:32:47.454977 3361 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:32:47.458408 containerd[2024]: time="2024-10-08T19:32:47.458157378Z" level=info msg="Ensure that sandbox 
6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b in task-service has been cleanup successfully" Oct 8 19:32:47.463760 containerd[2024]: time="2024-10-08T19:32:47.463652226Z" level=info msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" Oct 8 19:32:47.469079 containerd[2024]: time="2024-10-08T19:32:47.467627562Z" level=info msg="Ensure that sandbox 98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1 in task-service has been cleanup successfully" Oct 8 19:32:47.627966 containerd[2024]: time="2024-10-08T19:32:47.627830694Z" level=error msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" failed" error="failed to destroy network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.628788 kubelet[3361]: E1008 19:32:47.628695 3361 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:32:47.630497 kubelet[3361]: E1008 19:32:47.628790 3361 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1"} Oct 8 19:32:47.630497 kubelet[3361]: E1008 19:32:47.628891 3361 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf9af281-d393-45ad-bbb2-55616503d436\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:32:47.630497 kubelet[3361]: E1008 19:32:47.629015 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf9af281-d393-45ad-bbb2-55616503d436\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-5gqd7" podUID="bf9af281-d393-45ad-bbb2-55616503d436" Oct 8 19:32:47.633820 containerd[2024]: time="2024-10-08T19:32:47.633055326Z" level=error msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" failed" error="failed to destroy network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.634107 kubelet[3361]: E1008 19:32:47.633497 3361 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:32:47.634107 
kubelet[3361]: E1008 19:32:47.633570 3361 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4"} Oct 8 19:32:47.634107 kubelet[3361]: E1008 19:32:47.633627 3361 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5dc824f0-1019-4436-8135-184fe45fe379\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:32:47.634107 kubelet[3361]: E1008 19:32:47.633667 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5dc824f0-1019-4436-8135-184fe45fe379\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk" podUID="5dc824f0-1019-4436-8135-184fe45fe379" Oct 8 19:32:47.641381 containerd[2024]: time="2024-10-08T19:32:47.641251878Z" level=error msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" failed" error="failed to destroy network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.641747 kubelet[3361]: E1008 19:32:47.641628 3361 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:32:47.641747 kubelet[3361]: E1008 19:32:47.641730 3361 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c"} Oct 8 19:32:47.642389 kubelet[3361]: E1008 19:32:47.641814 3361 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:32:47.642389 kubelet[3361]: E1008 19:32:47.641862 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-msl8r" podUID="d686f5b9-ae61-4c98-9562-0d2b1ca6daa8" Oct 8 19:32:47.655449 containerd[2024]: time="2024-10-08T19:32:47.655052407Z" level=error msg="StopPodSandbox for 
\"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" failed" error="failed to destroy network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 8 19:32:47.655623 kubelet[3361]: E1008 19:32:47.655441 3361 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:47.655623 kubelet[3361]: E1008 19:32:47.655556 3361 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b"} Oct 8 19:32:47.655823 kubelet[3361]: E1008 19:32:47.655628 3361 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 8 19:32:47.655823 kubelet[3361]: E1008 19:32:47.655677 3361 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4da6ef7d-86ec-4a74-b95b-c04030a59fa2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-svwzg" podUID="4da6ef7d-86ec-4a74-b95b-c04030a59fa2" Oct 8 19:32:53.166486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102522432.mount: Deactivated successfully. Oct 8 19:32:53.230104 containerd[2024]: time="2024-10-08T19:32:53.229655938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:53.232330 containerd[2024]: time="2024-10-08T19:32:53.231555370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.1: active requests=0, bytes read=113057300" Oct 8 19:32:53.233345 containerd[2024]: time="2024-10-08T19:32:53.233141638Z" level=info msg="ImageCreate event name:\"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:53.237913 containerd[2024]: time="2024-10-08T19:32:53.237733486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:32:53.241050 containerd[2024]: time="2024-10-08T19:32:53.239806582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.1\" with image id \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:47908d8b3046dadd6fbea273ac5b0b9bb803cc7b58b9114c50bf7591767d2744\", size \"113057162\" in 6.80304919s" Oct 8 19:32:53.241050 containerd[2024]: time="2024-10-08T19:32:53.239926570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.1\" 
returns image reference \"sha256:373272045e41e00ebf8da7ce9fc6b26d326fb8b3e665d9f78bb121976f83b1dc\"" Oct 8 19:32:53.279067 containerd[2024]: time="2024-10-08T19:32:53.278714602Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 8 19:32:53.327898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1421455095.mount: Deactivated successfully. Oct 8 19:32:53.335344 containerd[2024]: time="2024-10-08T19:32:53.334695287Z" level=info msg="CreateContainer within sandbox \"490116c8771654c12252370751ccd9f74a085f95e91576ae05de3abe5bb85d21\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3717af1c650af5a3f777033d4e9fc8bfbd56da0cb8d5bdd592c1b5e704fdaa82\"" Oct 8 19:32:53.338130 containerd[2024]: time="2024-10-08T19:32:53.337647287Z" level=info msg="StartContainer for \"3717af1c650af5a3f777033d4e9fc8bfbd56da0cb8d5bdd592c1b5e704fdaa82\"" Oct 8 19:32:53.410378 systemd[1]: Started cri-containerd-3717af1c650af5a3f777033d4e9fc8bfbd56da0cb8d5bdd592c1b5e704fdaa82.scope - libcontainer container 3717af1c650af5a3f777033d4e9fc8bfbd56da0cb8d5bdd592c1b5e704fdaa82. Oct 8 19:32:53.519101 containerd[2024]: time="2024-10-08T19:32:53.518807664Z" level=info msg="StartContainer for \"3717af1c650af5a3f777033d4e9fc8bfbd56da0cb8d5bdd592c1b5e704fdaa82\" returns successfully" Oct 8 19:32:53.687950 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 8 19:32:53.688368 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 8 19:32:54.573060 kubelet[3361]: I1008 19:32:54.571166 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zxpdp" podStartSLOduration=2.815717443 podStartE2EDuration="22.571138981s" podCreationTimestamp="2024-10-08 19:32:32 +0000 UTC" firstStartedPulling="2024-10-08 19:32:33.486459124 +0000 UTC m=+14.771357966" lastFinishedPulling="2024-10-08 19:32:53.241880662 +0000 UTC m=+34.526779504" observedRunningTime="2024-10-08 19:32:54.569281213 +0000 UTC m=+35.854180103" watchObservedRunningTime="2024-10-08 19:32:54.571138981 +0000 UTC m=+35.856037835" Oct 8 19:32:59.059388 containerd[2024]: time="2024-10-08T19:32:59.058606947Z" level=info msg="StopPodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.213 [INFO][4574] k8s.go 608: Cleaning up netns ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.213 [INFO][4574] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" iface="eth0" netns="/var/run/netns/cni-3a393d16-908b-22d4-ca13-cfa84816b823" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.214 [INFO][4574] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" iface="eth0" netns="/var/run/netns/cni-3a393d16-908b-22d4-ca13-cfa84816b823" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.215 [INFO][4574] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" iface="eth0" netns="/var/run/netns/cni-3a393d16-908b-22d4-ca13-cfa84816b823" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.215 [INFO][4574] k8s.go 615: Releasing IP address(es) ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.215 [INFO][4574] utils.go 188: Calico CNI releasing IP address ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.267 [INFO][4581] ipam_plugin.go 417: Releasing address using handleID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.267 [INFO][4581] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.267 [INFO][4581] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.280 [WARNING][4581] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.280 [INFO][4581] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.283 [INFO][4581] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:32:59.294084 containerd[2024]: 2024-10-08 19:32:59.288 [INFO][4574] k8s.go 621: Teardown processing complete. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:32:59.297749 kubelet[3361]: I1008 19:32:59.295954 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:32:59.300254 containerd[2024]: time="2024-10-08T19:32:59.295113148Z" level=info msg="TearDown network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" successfully" Oct 8 19:32:59.300254 containerd[2024]: time="2024-10-08T19:32:59.297787672Z" level=info msg="StopPodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" returns successfully" Oct 8 19:32:59.300254 containerd[2024]: time="2024-10-08T19:32:59.301130500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-svwzg,Uid:4da6ef7d-86ec-4a74-b95b-c04030a59fa2,Namespace:calico-system,Attempt:1,}" Oct 8 19:32:59.299384 systemd[1]: run-netns-cni\x2d3a393d16\x2d908b\x2d22d4\x2dca13\x2dcfa84816b823.mount: Deactivated successfully. 
Oct 8 19:32:59.848051 systemd-networkd[1854]: cali9e1d3629809: Link UP Oct 8 19:32:59.853750 systemd-networkd[1854]: cali9e1d3629809: Gained carrier Oct 8 19:32:59.872791 (udev-worker)[4634]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.461 [INFO][4588] utils.go 100: File /var/lib/calico/mtu does not exist Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.522 [INFO][4588] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0 csi-node-driver- calico-system 4da6ef7d-86ec-4a74-b95b-c04030a59fa2 684 0 2024-10-08 19:32:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:779867c8f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-19-2 csi-node-driver-svwzg eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9e1d3629809 [] []}} ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.523 [INFO][4588] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.692 [INFO][4617] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" HandleID="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" 
Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.717 [INFO][4617] ipam_plugin.go 270: Auto assigning IP ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" HandleID="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024a5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-2", "pod":"csi-node-driver-svwzg", "timestamp":"2024-10-08 19:32:59.692840814 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.718 [INFO][4617] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.718 [INFO][4617] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.718 [INFO][4617] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.729 [INFO][4617] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.744 [INFO][4617] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.761 [INFO][4617] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.767 [INFO][4617] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.773 [INFO][4617] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.773 [INFO][4617] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.776 [INFO][4617] ipam.go 1685: Creating new handle: k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.786 [INFO][4617] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.802 [INFO][4617] ipam.go 1216: Successfully claimed IPs: [192.168.47.129/26] block=192.168.47.128/26 
handle="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.802 [INFO][4617] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.129/26] handle="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" host="ip-172-31-19-2" Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.802 [INFO][4617] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:32:59.905766 containerd[2024]: 2024-10-08 19:32:59.802 [INFO][4617] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.129/26] IPv6=[] ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" HandleID="k8s-pod-network.b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.816 [INFO][4588] k8s.go 386: Populated endpoint ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4da6ef7d-86ec-4a74-b95b-c04030a59fa2", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"csi-node-driver-svwzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9e1d3629809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.816 [INFO][4588] k8s.go 387: Calico CNI using IPs: [192.168.47.129/32] ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.816 [INFO][4588] dataplane_linux.go 68: Setting the host side veth name to cali9e1d3629809 ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.853 [INFO][4588] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.855 [INFO][4588] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" 
WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4da6ef7d-86ec-4a74-b95b-c04030a59fa2", ResourceVersion:"684", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc", Pod:"csi-node-driver-svwzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9e1d3629809", MAC:"ea:09:89:9a:26:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:32:59.907716 containerd[2024]: 2024-10-08 19:32:59.898 [INFO][4588] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc" Namespace="calico-system" Pod="csi-node-driver-svwzg" WorkloadEndpoint="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:33:00.009806 containerd[2024]: time="2024-10-08T19:33:00.009561508Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:00.010179 containerd[2024]: time="2024-10-08T19:33:00.010066012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:00.011070 containerd[2024]: time="2024-10-08T19:33:00.010912072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:00.012226 containerd[2024]: time="2024-10-08T19:33:00.011438356Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:00.063761 containerd[2024]: time="2024-10-08T19:33:00.061526320Z" level=info msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" Oct 8 19:33:00.089564 systemd[1]: Started cri-containerd-b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc.scope - libcontainer container b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc. Oct 8 19:33:00.312160 systemd[1]: run-containerd-runc-k8s.io-b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc-runc.JOElFb.mount: Deactivated successfully. 
Oct 8 19:33:00.431381 containerd[2024]: time="2024-10-08T19:33:00.431191518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-svwzg,Uid:4da6ef7d-86ec-4a74-b95b-c04030a59fa2,Namespace:calico-system,Attempt:1,} returns sandbox id \"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc\"" Oct 8 19:33:00.442303 containerd[2024]: time="2024-10-08T19:33:00.442177230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\"" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.324 [INFO][4686] k8s.go 608: Cleaning up netns ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.324 [INFO][4686] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" iface="eth0" netns="/var/run/netns/cni-6a09d72d-ea57-a42d-70e8-8f8145b254af" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.326 [INFO][4686] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" iface="eth0" netns="/var/run/netns/cni-6a09d72d-ea57-a42d-70e8-8f8145b254af" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.328 [INFO][4686] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" iface="eth0" netns="/var/run/netns/cni-6a09d72d-ea57-a42d-70e8-8f8145b254af" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.329 [INFO][4686] k8s.go 615: Releasing IP address(es) ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.330 [INFO][4686] utils.go 188: Calico CNI releasing IP address ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.482 [INFO][4705] ipam_plugin.go 417: Releasing address using handleID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.485 [INFO][4705] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.485 [INFO][4705] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.509 [WARNING][4705] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.509 [INFO][4705] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.514 [INFO][4705] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:00.525228 containerd[2024]: 2024-10-08 19:33:00.519 [INFO][4686] k8s.go 621: Teardown processing complete. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:00.534853 systemd[1]: run-netns-cni\x2d6a09d72d\x2dea57\x2da42d\x2d70e8\x2d8f8145b254af.mount: Deactivated successfully. 
Oct 8 19:33:00.543583 containerd[2024]: time="2024-10-08T19:33:00.542931727Z" level=info msg="TearDown network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" successfully" Oct 8 19:33:00.543583 containerd[2024]: time="2024-10-08T19:33:00.543043099Z" level=info msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" returns successfully" Oct 8 19:33:00.544498 containerd[2024]: time="2024-10-08T19:33:00.544427575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bc65f54-9m4dk,Uid:5dc824f0-1019-4436-8135-184fe45fe379,Namespace:calico-system,Attempt:1,}" Oct 8 19:33:00.819085 kernel: bpftool[4761]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 8 19:33:01.041197 systemd-networkd[1854]: cali9e1d3629809: Gained IPv6LL Oct 8 19:33:01.072892 systemd-networkd[1854]: cali4c14bdf0bda: Link UP Oct 8 19:33:01.073387 systemd-networkd[1854]: cali4c14bdf0bda: Gained carrier Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.741 [INFO][4734] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0 calico-kube-controllers-858bc65f54- calico-system 5dc824f0-1019-4436-8135-184fe45fe379 696 0 2024-10-08 19:32:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:858bc65f54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-19-2 calico-kube-controllers-858bc65f54-9m4dk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c14bdf0bda [] []}} ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" 
WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.742 [INFO][4734] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.895 [INFO][4752] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" HandleID="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.923 [INFO][4752] ipam_plugin.go 270: Auto assigning IP ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" HandleID="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000384030), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-19-2", "pod":"calico-kube-controllers-858bc65f54-9m4dk", "timestamp":"2024-10-08 19:33:00.895069652 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.924 [INFO][4752] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.926 [INFO][4752] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.926 [INFO][4752] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.938 [INFO][4752] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:00.984 [INFO][4752] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.010 [INFO][4752] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.017 [INFO][4752] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.024 [INFO][4752] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.024 [INFO][4752] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.029 [INFO][4752] ipam.go 1685: Creating new handle: k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.044 [INFO][4752] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.062 [INFO][4752] ipam.go 1216: Successfully claimed IPs: [192.168.47.130/26] block=192.168.47.128/26 
handle="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.062 [INFO][4752] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.130/26] handle="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" host="ip-172-31-19-2" Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.063 [INFO][4752] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:01.119396 containerd[2024]: 2024-10-08 19:33:01.063 [INFO][4752] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.130/26] IPv6=[] ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" HandleID="k8s-pod-network.a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.067 [INFO][4734] k8s.go 386: Populated endpoint ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0", GenerateName:"calico-kube-controllers-858bc65f54-", Namespace:"calico-system", SelfLink:"", UID:"5dc824f0-1019-4436-8135-184fe45fe379", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bc65f54", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"calico-kube-controllers-858bc65f54-9m4dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c14bdf0bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.068 [INFO][4734] k8s.go 387: Calico CNI using IPs: [192.168.47.130/32] ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.068 [INFO][4734] dataplane_linux.go 68: Setting the host side veth name to cali4c14bdf0bda ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.072 [INFO][4734] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.073 [INFO][4734] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0", GenerateName:"calico-kube-controllers-858bc65f54-", Namespace:"calico-system", SelfLink:"", UID:"5dc824f0-1019-4436-8135-184fe45fe379", ResourceVersion:"696", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bc65f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab", Pod:"calico-kube-controllers-858bc65f54-9m4dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c14bdf0bda", MAC:"a2:26:03:f8:90:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:01.122490 containerd[2024]: 2024-10-08 19:33:01.111 [INFO][4734] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab" Namespace="calico-system" Pod="calico-kube-controllers-858bc65f54-9m4dk" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:01.182135 containerd[2024]: time="2024-10-08T19:33:01.181284438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:01.182135 containerd[2024]: time="2024-10-08T19:33:01.181564326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:01.182135 containerd[2024]: time="2024-10-08T19:33:01.181637958Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:01.182135 containerd[2024]: time="2024-10-08T19:33:01.181702734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:01.250529 systemd[1]: Started cri-containerd-a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab.scope - libcontainer container a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab. Oct 8 19:33:01.405150 containerd[2024]: time="2024-10-08T19:33:01.403324747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-858bc65f54-9m4dk,Uid:5dc824f0-1019-4436-8135-184fe45fe379,Namespace:calico-system,Attempt:1,} returns sandbox id \"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab\"" Oct 8 19:33:01.484702 systemd-networkd[1854]: vxlan.calico: Link UP Oct 8 19:33:01.484807 systemd-networkd[1854]: vxlan.calico: Gained carrier Oct 8 19:33:01.485428 (udev-worker)[4633]: Network interface NamePolicy= disabled on kernel command line. 
Oct 8 19:33:02.055458 containerd[2024]: time="2024-10-08T19:33:02.055365330Z" level=info msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" Oct 8 19:33:02.090055 containerd[2024]: time="2024-10-08T19:33:02.085881198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:02.096119 containerd[2024]: time="2024-10-08T19:33:02.096055026Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.1: active requests=0, bytes read=7211060" Oct 8 19:33:02.101211 containerd[2024]: time="2024-10-08T19:33:02.101132658Z" level=info msg="ImageCreate event name:\"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:02.125642 containerd[2024]: time="2024-10-08T19:33:02.124292070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:02.139312 containerd[2024]: time="2024-10-08T19:33:02.137410986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.1\" with image id \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:01e16d03dd0c29a8e1e302455eb15c2d0326c49cbaca4bbe8dc0e2d5308c5add\", size \"8578579\" in 1.695118148s" Oct 8 19:33:02.139312 containerd[2024]: time="2024-10-08T19:33:02.137487378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.1\" returns image reference \"sha256:dd6cf4bf9b3656f9dd9713f21ac1be96858f750a9a3bf340983fb7072f4eda2a\"" Oct 8 19:33:02.142548 containerd[2024]: time="2024-10-08T19:33:02.142440726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\"" Oct 8 19:33:02.155790 containerd[2024]: 
time="2024-10-08T19:33:02.155332615Z" level=info msg="CreateContainer within sandbox \"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 8 19:33:02.243689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684931021.mount: Deactivated successfully. Oct 8 19:33:02.270407 containerd[2024]: time="2024-10-08T19:33:02.269213611Z" level=info msg="CreateContainer within sandbox \"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb\"" Oct 8 19:33:02.275111 containerd[2024]: time="2024-10-08T19:33:02.273320467Z" level=info msg="StartContainer for \"5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb\"" Oct 8 19:33:02.438220 systemd[1]: run-containerd-runc-k8s.io-5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb-runc.WyLbCc.mount: Deactivated successfully. Oct 8 19:33:02.449788 systemd-networkd[1854]: cali4c14bdf0bda: Gained IPv6LL Oct 8 19:33:02.471357 systemd[1]: Started cri-containerd-5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb.scope - libcontainer container 5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb. Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.362 [INFO][4884] k8s.go 608: Cleaning up netns ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.365 [INFO][4884] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" iface="eth0" netns="/var/run/netns/cni-7a9f7768-d8dc-5d39-eab5-fcedf02ac8f1" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.367 [INFO][4884] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" iface="eth0" netns="/var/run/netns/cni-7a9f7768-d8dc-5d39-eab5-fcedf02ac8f1" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.369 [INFO][4884] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" iface="eth0" netns="/var/run/netns/cni-7a9f7768-d8dc-5d39-eab5-fcedf02ac8f1" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.370 [INFO][4884] k8s.go 615: Releasing IP address(es) ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.370 [INFO][4884] utils.go 188: Calico CNI releasing IP address ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.531 [INFO][4917] ipam_plugin.go 417: Releasing address using handleID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.532 [INFO][4917] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.532 [INFO][4917] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.550 [WARNING][4917] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.550 [INFO][4917] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.558 [INFO][4917] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:02.587100 containerd[2024]: 2024-10-08 19:33:02.573 [INFO][4884] k8s.go 621: Teardown processing complete. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:02.591339 containerd[2024]: time="2024-10-08T19:33:02.589272417Z" level=info msg="TearDown network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" successfully" Oct 8 19:33:02.591339 containerd[2024]: time="2024-10-08T19:33:02.589587801Z" level=info msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" returns successfully" Oct 8 19:33:02.594945 containerd[2024]: time="2024-10-08T19:33:02.594594237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-msl8r,Uid:d686f5b9-ae61-4c98-9562-0d2b1ca6daa8,Namespace:kube-system,Attempt:1,}" Oct 8 19:33:02.655493 containerd[2024]: time="2024-10-08T19:33:02.655395993Z" level=info msg="StartContainer for \"5d2320b2d338cb7de64f9b6cbe5e9b69ca196a375b7d8b8a129bfde775e8cebb\" returns successfully" Oct 8 19:33:02.918835 systemd-networkd[1854]: cali28bde894187: Link UP Oct 8 19:33:02.921978 systemd-networkd[1854]: cali28bde894187: Gained carrier Oct 8 19:33:02.929330 (udev-worker)[4849]: Network 
interface NamePolicy= disabled on kernel command line. Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.739 [INFO][4947] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0 coredns-6f6b679f8f- kube-system d686f5b9-ae61-4c98-9562-0d2b1ca6daa8 708 0 2024-10-08 19:32:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-2 coredns-6f6b679f8f-msl8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28bde894187 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.739 [INFO][4947] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.811 [INFO][4963] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" HandleID="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.838 [INFO][4963] ipam_plugin.go 270: Auto assigning IP ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" HandleID="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" 
Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000319300), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-2", "pod":"coredns-6f6b679f8f-msl8r", "timestamp":"2024-10-08 19:33:02.81102559 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.839 [INFO][4963] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.839 [INFO][4963] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.839 [INFO][4963] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.846 [INFO][4963] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.857 [INFO][4963] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.867 [INFO][4963] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.872 [INFO][4963] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.877 [INFO][4963] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.877 [INFO][4963] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 
handle="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.880 [INFO][4963] ipam.go 1685: Creating new handle: k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19 Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.890 [INFO][4963] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.906 [INFO][4963] ipam.go 1216: Successfully claimed IPs: [192.168.47.131/26] block=192.168.47.128/26 handle="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.906 [INFO][4963] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.131/26] handle="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" host="ip-172-31-19-2" Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.906 [INFO][4963] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:33:02.953624 containerd[2024]: 2024-10-08 19:33:02.906 [INFO][4963] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.131/26] IPv6=[] ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" HandleID="k8s-pod-network.37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.912 [INFO][4947] k8s.go 386: Populated endpoint ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"coredns-6f6b679f8f-msl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28bde894187", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.912 [INFO][4947] k8s.go 387: Calico CNI using IPs: [192.168.47.131/32] ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.912 [INFO][4947] dataplane_linux.go 68: Setting the host side veth name to cali28bde894187 ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.918 [INFO][4947] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.919 [INFO][4947] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19", Pod:"coredns-6f6b679f8f-msl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28bde894187", MAC:"ae:e7:16:11:ab:11", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:02.958282 containerd[2024]: 2024-10-08 19:33:02.943 [INFO][4947] k8s.go 500: Wrote updated endpoint to datastore ContainerID="37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-msl8r" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:03.043186 containerd[2024]: time="2024-10-08T19:33:03.042751195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:03.043186 containerd[2024]: time="2024-10-08T19:33:03.042901159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:03.043186 containerd[2024]: time="2024-10-08T19:33:03.042953875Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:03.043186 containerd[2024]: time="2024-10-08T19:33:03.043049671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:03.069572 containerd[2024]: time="2024-10-08T19:33:03.069254311Z" level=info msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" Oct 8 19:33:03.126769 systemd[1]: Started cri-containerd-37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19.scope - libcontainer container 37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19. Oct 8 19:33:03.155695 systemd-networkd[1854]: vxlan.calico: Gained IPv6LL Oct 8 19:33:03.238263 systemd[1]: run-netns-cni\x2d7a9f7768\x2dd8dc\x2d5d39\x2deab5\x2dfcedf02ac8f1.mount: Deactivated successfully. 
Oct 8 19:33:03.353124 containerd[2024]: time="2024-10-08T19:33:03.353041292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-msl8r,Uid:d686f5b9-ae61-4c98-9562-0d2b1ca6daa8,Namespace:kube-system,Attempt:1,} returns sandbox id \"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19\"" Oct 8 19:33:03.368315 containerd[2024]: time="2024-10-08T19:33:03.368047605Z" level=info msg="CreateContainer within sandbox \"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:33:03.457740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662256828.mount: Deactivated successfully. Oct 8 19:33:03.470135 containerd[2024]: time="2024-10-08T19:33:03.468244221Z" level=info msg="CreateContainer within sandbox \"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f900f0fb7fe0dd129c6b2bc0e3ca582bc19921d74b55cae3aa1f0fda424681c2\"" Oct 8 19:33:03.473765 containerd[2024]: time="2024-10-08T19:33:03.473541501Z" level=info msg="StartContainer for \"f900f0fb7fe0dd129c6b2bc0e3ca582bc19921d74b55cae3aa1f0fda424681c2\"" Oct 8 19:33:03.612869 systemd[1]: Started cri-containerd-f900f0fb7fe0dd129c6b2bc0e3ca582bc19921d74b55cae3aa1f0fda424681c2.scope - libcontainer container f900f0fb7fe0dd129c6b2bc0e3ca582bc19921d74b55cae3aa1f0fda424681c2. Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.375 [INFO][5023] k8s.go 608: Cleaning up netns ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.375 [INFO][5023] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" iface="eth0" netns="/var/run/netns/cni-b7479764-e194-0ded-6a1b-e5df6dd82a03" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.376 [INFO][5023] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" iface="eth0" netns="/var/run/netns/cni-b7479764-e194-0ded-6a1b-e5df6dd82a03" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.377 [INFO][5023] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" iface="eth0" netns="/var/run/netns/cni-b7479764-e194-0ded-6a1b-e5df6dd82a03" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.377 [INFO][5023] k8s.go 615: Releasing IP address(es) ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.378 [INFO][5023] utils.go 188: Calico CNI releasing IP address ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.537 [INFO][5045] ipam_plugin.go 417: Releasing address using handleID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.551 [INFO][5045] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.551 [INFO][5045] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.600 [WARNING][5045] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.600 [INFO][5045] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.605 [INFO][5045] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:03.634524 containerd[2024]: 2024-10-08 19:33:03.617 [INFO][5023] k8s.go 621: Teardown processing complete. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:03.640501 containerd[2024]: time="2024-10-08T19:33:03.640274854Z" level=info msg="TearDown network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" successfully" Oct 8 19:33:03.640666 containerd[2024]: time="2024-10-08T19:33:03.640594570Z" level=info msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" returns successfully" Oct 8 19:33:03.656385 containerd[2024]: time="2024-10-08T19:33:03.655493506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5gqd7,Uid:bf9af281-d393-45ad-bbb2-55616503d436,Namespace:kube-system,Attempt:1,}" Oct 8 19:33:03.800180 containerd[2024]: time="2024-10-08T19:33:03.800024999Z" level=info msg="StartContainer for \"f900f0fb7fe0dd129c6b2bc0e3ca582bc19921d74b55cae3aa1f0fda424681c2\" returns successfully" Oct 8 19:33:04.110414 systemd-networkd[1854]: cali86676f450e3: Link UP Oct 8 19:33:04.113092 systemd-networkd[1854]: cali86676f450e3: Gained carrier Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 
19:33:03.903 [INFO][5077] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0 coredns-6f6b679f8f- kube-system bf9af281-d393-45ad-bbb2-55616503d436 718 0 2024-10-08 19:32:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-19-2 coredns-6f6b679f8f-5gqd7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86676f450e3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:03.904 [INFO][5077] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.009 [INFO][5101] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" HandleID="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.032 [INFO][5101] ipam_plugin.go 270: Auto assigning IP ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" HandleID="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000362620), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-19-2", "pod":"coredns-6f6b679f8f-5gqd7", "timestamp":"2024-10-08 19:33:04.009823076 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.032 [INFO][5101] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.033 [INFO][5101] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.033 [INFO][5101] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.036 [INFO][5101] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.043 [INFO][5101] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.051 [INFO][5101] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.063 [INFO][5101] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.067 [INFO][5101] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.068 [INFO][5101] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" host="ip-172-31-19-2" Oct 8 19:33:04.192161 
containerd[2024]: 2024-10-08 19:33:04.071 [INFO][5101] ipam.go 1685: Creating new handle: k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.078 [INFO][5101] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.094 [INFO][5101] ipam.go 1216: Successfully claimed IPs: [192.168.47.132/26] block=192.168.47.128/26 handle="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.095 [INFO][5101] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.132/26] handle="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" host="ip-172-31-19-2" Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.095 [INFO][5101] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:33:04.192161 containerd[2024]: 2024-10-08 19:33:04.095 [INFO][5101] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.132/26] IPv6=[] ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" HandleID="k8s-pod-network.c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.102 [INFO][5077] k8s.go 386: Populated endpoint ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9af281-d393-45ad-bbb2-55616503d436", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"coredns-6f6b679f8f-5gqd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86676f450e3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.102 [INFO][5077] k8s.go 387: Calico CNI using IPs: [192.168.47.132/32] ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.102 [INFO][5077] dataplane_linux.go 68: Setting the host side veth name to cali86676f450e3 ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.108 [INFO][5077] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.114 [INFO][5077] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9af281-d393-45ad-bbb2-55616503d436", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a", Pod:"coredns-6f6b679f8f-5gqd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86676f450e3", MAC:"72:35:d5:a9:84:32", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:04.193772 containerd[2024]: 2024-10-08 19:33:04.178 [INFO][5077] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-5gqd7" WorkloadEndpoint="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:04.244932 systemd[1]: run-netns-cni\x2db7479764\x2de194\x2d0ded\x2d6a1b\x2de5df6dd82a03.mount: Deactivated successfully. Oct 8 19:33:04.293148 containerd[2024]: time="2024-10-08T19:33:04.292103145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:04.293148 containerd[2024]: time="2024-10-08T19:33:04.292228605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:04.293148 containerd[2024]: time="2024-10-08T19:33:04.292297149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:04.293148 containerd[2024]: time="2024-10-08T19:33:04.292337757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:04.395395 systemd[1]: Started cri-containerd-c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a.scope - libcontainer container c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a. 
Oct 8 19:33:04.497501 systemd-networkd[1854]: cali28bde894187: Gained IPv6LL Oct 8 19:33:04.558264 containerd[2024]: time="2024-10-08T19:33:04.558087358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-5gqd7,Uid:bf9af281-d393-45ad-bbb2-55616503d436,Namespace:kube-system,Attempt:1,} returns sandbox id \"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a\"" Oct 8 19:33:04.582062 containerd[2024]: time="2024-10-08T19:33:04.581778671Z" level=info msg="CreateContainer within sandbox \"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 8 19:33:04.620404 containerd[2024]: time="2024-10-08T19:33:04.618773807Z" level=info msg="CreateContainer within sandbox \"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"214903e92222338fc4c34cdaaf385d4f6dcf1741e5981790de64d28e0abf1b0e\"" Oct 8 19:33:04.630807 containerd[2024]: time="2024-10-08T19:33:04.630232427Z" level=info msg="StartContainer for \"214903e92222338fc4c34cdaaf385d4f6dcf1741e5981790de64d28e0abf1b0e\"" Oct 8 19:33:04.722285 systemd[1]: Started cri-containerd-214903e92222338fc4c34cdaaf385d4f6dcf1741e5981790de64d28e0abf1b0e.scope - libcontainer container 214903e92222338fc4c34cdaaf385d4f6dcf1741e5981790de64d28e0abf1b0e. 
Oct 8 19:33:04.864332 kubelet[3361]: I1008 19:33:04.864128 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-msl8r" podStartSLOduration=40.863980056 podStartE2EDuration="40.863980056s" podCreationTimestamp="2024-10-08 19:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:33:04.79825836 +0000 UTC m=+46.083157226" watchObservedRunningTime="2024-10-08 19:33:04.863980056 +0000 UTC m=+46.148878910" Oct 8 19:33:04.931389 containerd[2024]: time="2024-10-08T19:33:04.931304280Z" level=info msg="StartContainer for \"214903e92222338fc4c34cdaaf385d4f6dcf1741e5981790de64d28e0abf1b0e\" returns successfully" Oct 8 19:33:05.459379 systemd-networkd[1854]: cali86676f450e3: Gained IPv6LL Oct 8 19:33:05.871414 kubelet[3361]: I1008 19:33:05.871278 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-5gqd7" podStartSLOduration=41.871253221 podStartE2EDuration="41.871253221s" podCreationTimestamp="2024-10-08 19:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-10-08 19:33:05.833402617 +0000 UTC m=+47.118301483" watchObservedRunningTime="2024-10-08 19:33:05.871253221 +0000 UTC m=+47.156152087" Oct 8 19:33:07.459250 containerd[2024]: time="2024-10-08T19:33:07.459149281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:07.465943 containerd[2024]: time="2024-10-08T19:33:07.465382813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.1: active requests=0, bytes read=31361753" Oct 8 19:33:07.470115 containerd[2024]: time="2024-10-08T19:33:07.469876561Z" level=info msg="ImageCreate event 
name:\"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:07.480308 containerd[2024]: time="2024-10-08T19:33:07.479666557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:07.486197 containerd[2024]: time="2024-10-08T19:33:07.484444057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" with image id \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:9a7338f7187d4d2352fe49eedee44b191ac92557a2e71aa3de3527ed85c1641b\", size \"32729240\" in 5.341540527s" Oct 8 19:33:07.486197 containerd[2024]: time="2024-10-08T19:33:07.484587745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.1\" returns image reference \"sha256:dde0e0aa888dfe01de8f2b6b4879c4391e01cc95a7a8a608194d8ed663fe6a39\"" Oct 8 19:33:07.494227 containerd[2024]: time="2024-10-08T19:33:07.493803481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\"" Oct 8 19:33:07.566711 containerd[2024]: time="2024-10-08T19:33:07.566355265Z" level=info msg="CreateContainer within sandbox \"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 8 19:33:07.595466 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.47.128:123 Oct 8 19:33:07.598411 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 7 vxlan.calico 192.168.47.128:123 Oct 8 19:33:07.598411 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 8 cali9e1d3629809 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 8 19:33:07.595768 ntpd[1990]: Listen normally on 8 
cali9e1d3629809 [fe80::ecee:eeff:feee:eeee%4]:123 Oct 8 19:33:07.595966 ntpd[1990]: Listen normally on 9 cali4c14bdf0bda [fe80::ecee:eeff:feee:eeee%5]:123 Oct 8 19:33:07.604972 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 9 cali4c14bdf0bda [fe80::ecee:eeff:feee:eeee%5]:123 Oct 8 19:33:07.604972 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 10 vxlan.calico [fe80::642e:cdff:fe41:d6eb%6]:123 Oct 8 19:33:07.601944 ntpd[1990]: Listen normally on 10 vxlan.calico [fe80::642e:cdff:fe41:d6eb%6]:123 Oct 8 19:33:07.607639 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 11 cali28bde894187 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:33:07.604718 ntpd[1990]: Listen normally on 11 cali28bde894187 [fe80::ecee:eeff:feee:eeee%9]:123 Oct 8 19:33:07.608154 ntpd[1990]: Listen normally on 12 cali86676f450e3 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:33:07.610803 ntpd[1990]: 8 Oct 19:33:07 ntpd[1990]: Listen normally on 12 cali86676f450e3 [fe80::ecee:eeff:feee:eeee%10]:123 Oct 8 19:33:07.628887 containerd[2024]: time="2024-10-08T19:33:07.628656206Z" level=info msg="CreateContainer within sandbox \"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a2e35ac1520c6560b2e166f39b8b9a9aea7f72920b28bb60777943e8f7306348\"" Oct 8 19:33:07.634770 containerd[2024]: time="2024-10-08T19:33:07.633508970Z" level=info msg="StartContainer for \"a2e35ac1520c6560b2e166f39b8b9a9aea7f72920b28bb60777943e8f7306348\"" Oct 8 19:33:07.783958 systemd[1]: Started cri-containerd-a2e35ac1520c6560b2e166f39b8b9a9aea7f72920b28bb60777943e8f7306348.scope - libcontainer container a2e35ac1520c6560b2e166f39b8b9a9aea7f72920b28bb60777943e8f7306348. 
Oct 8 19:33:07.961436 containerd[2024]: time="2024-10-08T19:33:07.960767895Z" level=info msg="StartContainer for \"a2e35ac1520c6560b2e166f39b8b9a9aea7f72920b28bb60777943e8f7306348\" returns successfully" Oct 8 19:33:08.885567 kubelet[3361]: I1008 19:33:08.885263 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-858bc65f54-9m4dk" podStartSLOduration=29.804960238 podStartE2EDuration="35.885216064s" podCreationTimestamp="2024-10-08 19:32:33 +0000 UTC" firstStartedPulling="2024-10-08 19:33:01.409571239 +0000 UTC m=+42.694470093" lastFinishedPulling="2024-10-08 19:33:07.489826513 +0000 UTC m=+48.774725919" observedRunningTime="2024-10-08 19:33:08.884436172 +0000 UTC m=+50.169335062" watchObservedRunningTime="2024-10-08 19:33:08.885216064 +0000 UTC m=+50.170115026" Oct 8 19:33:10.142106 containerd[2024]: time="2024-10-08T19:33:10.138172262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:10.143404 containerd[2024]: time="2024-10-08T19:33:10.143318366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1: active requests=0, bytes read=12116870" Oct 8 19:33:10.144619 containerd[2024]: time="2024-10-08T19:33:10.144425666Z" level=info msg="ImageCreate event name:\"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:10.159671 containerd[2024]: time="2024-10-08T19:33:10.159551426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:10.162212 containerd[2024]: time="2024-10-08T19:33:10.162140894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" with 
image id \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:682cc97e4580d25b7314032c008a552bb05182fac34eba82cc389113c7767076\", size \"13484341\" in 2.666301217s" Oct 8 19:33:10.163767 containerd[2024]: time="2024-10-08T19:33:10.162354542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.1\" returns image reference \"sha256:4df800f2dc90e056e3dc95be5afe5cd399ce8785c6817ddeaf07b498cb85207a\"" Oct 8 19:33:10.194259 containerd[2024]: time="2024-10-08T19:33:10.192741362Z" level=info msg="CreateContainer within sandbox \"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 8 19:33:10.269452 containerd[2024]: time="2024-10-08T19:33:10.269361699Z" level=info msg="CreateContainer within sandbox \"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b7f8cf0593b5dad5e5ac22a7bf49a00fe62546c5bc2cbe40b4bdf38cc2dd39fe\"" Oct 8 19:33:10.279085 containerd[2024]: time="2024-10-08T19:33:10.276231819Z" level=info msg="StartContainer for \"b7f8cf0593b5dad5e5ac22a7bf49a00fe62546c5bc2cbe40b4bdf38cc2dd39fe\"" Oct 8 19:33:10.388386 systemd[1]: Started cri-containerd-b7f8cf0593b5dad5e5ac22a7bf49a00fe62546c5bc2cbe40b4bdf38cc2dd39fe.scope - libcontainer container b7f8cf0593b5dad5e5ac22a7bf49a00fe62546c5bc2cbe40b4bdf38cc2dd39fe. 
Oct 8 19:33:10.477580 containerd[2024]: time="2024-10-08T19:33:10.476221636Z" level=info msg="StartContainer for \"b7f8cf0593b5dad5e5ac22a7bf49a00fe62546c5bc2cbe40b4bdf38cc2dd39fe\" returns successfully" Oct 8 19:33:11.285219 kubelet[3361]: I1008 19:33:11.284518 3361 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 8 19:33:11.285219 kubelet[3361]: I1008 19:33:11.284575 3361 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 8 19:33:11.549376 kubelet[3361]: I1008 19:33:11.548952 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-svwzg" podStartSLOduration=29.820076421 podStartE2EDuration="39.548903129s" podCreationTimestamp="2024-10-08 19:32:32 +0000 UTC" firstStartedPulling="2024-10-08 19:33:00.441012366 +0000 UTC m=+41.725911244" lastFinishedPulling="2024-10-08 19:33:10.16983911 +0000 UTC m=+51.454737952" observedRunningTime="2024-10-08 19:33:10.997855626 +0000 UTC m=+52.282754504" watchObservedRunningTime="2024-10-08 19:33:11.548903129 +0000 UTC m=+52.833801983" Oct 8 19:33:11.569428 kubelet[3361]: I1008 19:33:11.568920 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9f7g\" (UniqueName: \"kubernetes.io/projected/56a8d209-58cb-4c88-ad25-87ad5a71c08b-kube-api-access-g9f7g\") pod \"calico-apiserver-69665fc5c6-wwq7q\" (UID: \"56a8d209-58cb-4c88-ad25-87ad5a71c08b\") " pod="calico-apiserver/calico-apiserver-69665fc5c6-wwq7q" Oct 8 19:33:11.569428 kubelet[3361]: I1008 19:33:11.569172 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/56a8d209-58cb-4c88-ad25-87ad5a71c08b-calico-apiserver-certs\") pod 
\"calico-apiserver-69665fc5c6-wwq7q\" (UID: \"56a8d209-58cb-4c88-ad25-87ad5a71c08b\") " pod="calico-apiserver/calico-apiserver-69665fc5c6-wwq7q" Oct 8 19:33:11.582187 systemd[1]: Created slice kubepods-besteffort-pod56a8d209_58cb_4c88_ad25_87ad5a71c08b.slice - libcontainer container kubepods-besteffort-pod56a8d209_58cb_4c88_ad25_87ad5a71c08b.slice. Oct 8 19:33:11.650682 systemd[1]: Created slice kubepods-besteffort-poda1400822_7870_497c_8f0a_0c1f0386e8a8.slice - libcontainer container kubepods-besteffort-poda1400822_7870_497c_8f0a_0c1f0386e8a8.slice. Oct 8 19:33:11.671054 kubelet[3361]: I1008 19:33:11.669976 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cc6h\" (UniqueName: \"kubernetes.io/projected/a1400822-7870-497c-8f0a-0c1f0386e8a8-kube-api-access-7cc6h\") pod \"calico-apiserver-69665fc5c6-4945x\" (UID: \"a1400822-7870-497c-8f0a-0c1f0386e8a8\") " pod="calico-apiserver/calico-apiserver-69665fc5c6-4945x" Oct 8 19:33:11.671054 kubelet[3361]: I1008 19:33:11.670192 3361 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a1400822-7870-497c-8f0a-0c1f0386e8a8-calico-apiserver-certs\") pod \"calico-apiserver-69665fc5c6-4945x\" (UID: \"a1400822-7870-497c-8f0a-0c1f0386e8a8\") " pod="calico-apiserver/calico-apiserver-69665fc5c6-4945x" Oct 8 19:33:11.671656 kubelet[3361]: E1008 19:33:11.671593 3361 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:33:11.672578 kubelet[3361]: E1008 19:33:11.672536 3361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56a8d209-58cb-4c88-ad25-87ad5a71c08b-calico-apiserver-certs podName:56a8d209-58cb-4c88-ad25-87ad5a71c08b nodeName:}" failed. No retries permitted until 2024-10-08 19:33:12.17213221 +0000 UTC m=+53.457031076 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/56a8d209-58cb-4c88-ad25-87ad5a71c08b-calico-apiserver-certs") pod "calico-apiserver-69665fc5c6-wwq7q" (UID: "56a8d209-58cb-4c88-ad25-87ad5a71c08b") : secret "calico-apiserver-certs" not found Oct 8 19:33:11.773074 kubelet[3361]: E1008 19:33:11.770952 3361 secret.go:188] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Oct 8 19:33:11.773074 kubelet[3361]: E1008 19:33:11.771202 3361 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1400822-7870-497c-8f0a-0c1f0386e8a8-calico-apiserver-certs podName:a1400822-7870-497c-8f0a-0c1f0386e8a8 nodeName:}" failed. No retries permitted until 2024-10-08 19:33:12.271113538 +0000 UTC m=+53.556012392 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/a1400822-7870-497c-8f0a-0c1f0386e8a8-calico-apiserver-certs") pod "calico-apiserver-69665fc5c6-4945x" (UID: "a1400822-7870-497c-8f0a-0c1f0386e8a8") : secret "calico-apiserver-certs" not found Oct 8 19:33:12.192060 containerd[2024]: time="2024-10-08T19:33:12.190720420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69665fc5c6-wwq7q,Uid:56a8d209-58cb-4c88-ad25-87ad5a71c08b,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:33:12.564480 containerd[2024]: time="2024-10-08T19:33:12.562741182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69665fc5c6-4945x,Uid:a1400822-7870-497c-8f0a-0c1f0386e8a8,Namespace:calico-apiserver,Attempt:0,}" Oct 8 19:33:12.751557 systemd[1]: Started sshd@7-172.31.19.2:22-139.178.68.195:43720.service - OpenSSH per-connection server daemon (139.178.68.195:43720). 
Oct 8 19:33:12.989520 sshd[5392]: Accepted publickey for core from 139.178.68.195 port 43720 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:33:13.002360 sshd[5392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:33:13.036769 systemd-logind[1996]: New session 8 of user core. Oct 8 19:33:13.047265 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 8 19:33:13.241767 systemd-networkd[1854]: calib6e9bc405f2: Link UP Oct 8 19:33:13.242283 systemd-networkd[1854]: calib6e9bc405f2: Gained carrier Oct 8 19:33:13.264189 (udev-worker)[5416]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.511 [INFO][5357] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0 calico-apiserver-69665fc5c6- calico-apiserver 56a8d209-58cb-4c88-ad25-87ad5a71c08b 822 0 2024-10-08 19:33:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69665fc5c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-2 calico-apiserver-69665fc5c6-wwq7q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6e9bc405f2 [] []}} ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.512 [INFO][5357] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" 
WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.828 [INFO][5386] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" HandleID="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.970 [INFO][5386] ipam_plugin.go 270: Auto assigning IP ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" HandleID="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051cb30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-2", "pod":"calico-apiserver-69665fc5c6-wwq7q", "timestamp":"2024-10-08 19:33:12.82858676 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.970 [INFO][5386] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.970 [INFO][5386] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:12.970 [INFO][5386] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.006 [INFO][5386] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.073 [INFO][5386] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.099 [INFO][5386] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.113 [INFO][5386] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.128 [INFO][5386] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.128 [INFO][5386] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.155 [INFO][5386] ipam.go 1685: Creating new handle: k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119 Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.178 [INFO][5386] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.215 [INFO][5386] ipam.go 1216: Successfully claimed IPs: [192.168.47.133/26] block=192.168.47.128/26 
handle="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.215 [INFO][5386] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.133/26] handle="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" host="ip-172-31-19-2" Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.216 [INFO][5386] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:13.316369 containerd[2024]: 2024-10-08 19:33:13.216 [INFO][5386] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.133/26] IPv6=[] ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" HandleID="k8s-pod-network.bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.226 [INFO][5357] k8s.go 386: Populated endpoint ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0", GenerateName:"calico-apiserver-69665fc5c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"56a8d209-58cb-4c88-ad25-87ad5a71c08b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69665fc5c6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"calico-apiserver-69665fc5c6-wwq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e9bc405f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.228 [INFO][5357] k8s.go 387: Calico CNI using IPs: [192.168.47.133/32] ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.228 [INFO][5357] dataplane_linux.go 68: Setting the host side veth name to calib6e9bc405f2 ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.242 [INFO][5357] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.250 [INFO][5357] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0", GenerateName:"calico-apiserver-69665fc5c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"56a8d209-58cb-4c88-ad25-87ad5a71c08b", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69665fc5c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119", Pod:"calico-apiserver-69665fc5c6-wwq7q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6e9bc405f2", MAC:"86:74:f8:7e:55:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:13.329782 containerd[2024]: 2024-10-08 19:33:13.309 [INFO][5357] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119" 
Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-wwq7q" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--wwq7q-eth0" Oct 8 19:33:13.421121 containerd[2024]: time="2024-10-08T19:33:13.416355366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:13.421121 containerd[2024]: time="2024-10-08T19:33:13.416528286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:13.421121 containerd[2024]: time="2024-10-08T19:33:13.416576598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:13.421121 containerd[2024]: time="2024-10-08T19:33:13.416612214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:13.551813 systemd[1]: run-containerd-runc-k8s.io-bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119-runc.q4PNJX.mount: Deactivated successfully. Oct 8 19:33:13.603632 systemd[1]: Started cri-containerd-bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119.scope - libcontainer container bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119. Oct 8 19:33:13.639764 (udev-worker)[5418]: Network interface NamePolicy= disabled on kernel command line. Oct 8 19:33:13.642592 systemd-networkd[1854]: cali2e1af1f5fd0: Link UP Oct 8 19:33:13.651405 systemd-networkd[1854]: cali2e1af1f5fd0: Gained carrier Oct 8 19:33:13.709432 sshd[5392]: pam_unix(sshd:session): session closed for user core Oct 8 19:33:13.724183 systemd[1]: sshd@7-172.31.19.2:22-139.178.68.195:43720.service: Deactivated successfully. Oct 8 19:33:13.735642 systemd[1]: session-8.scope: Deactivated successfully. Oct 8 19:33:13.750861 systemd-logind[1996]: Session 8 logged out. Waiting for processes to exit. 
Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:12.894 [INFO][5380] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0 calico-apiserver-69665fc5c6- calico-apiserver a1400822-7870-497c-8f0a-0c1f0386e8a8 827 0 2024-10-08 19:33:11 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69665fc5c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-19-2 calico-apiserver-69665fc5c6-4945x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e1af1f5fd0 [] []}} ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:12.894 [INFO][5380] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.154 [INFO][5398] ipam_plugin.go 230: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" HandleID="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.333 [INFO][5398] ipam_plugin.go 270: Auto assigning IP ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" 
HandleID="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a7830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-19-2", "pod":"calico-apiserver-69665fc5c6-4945x", "timestamp":"2024-10-08 19:33:13.154321397 +0000 UTC"}, Hostname:"ip-172-31-19-2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.338 [INFO][5398] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.339 [INFO][5398] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.339 [INFO][5398] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-19-2' Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.356 [INFO][5398] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.451 [INFO][5398] ipam.go 372: Looking up existing affinities for host host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.479 [INFO][5398] ipam.go 489: Trying affinity for 192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.488 [INFO][5398] ipam.go 155: Attempting to load block cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.518 [INFO][5398] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.128/26 host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 
2024-10-08 19:33:13.518 [INFO][5398] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.128/26 handle="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.547 [INFO][5398] ipam.go 1685: Creating new handle: k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.570 [INFO][5398] ipam.go 1203: Writing block in order to claim IPs block=192.168.47.128/26 handle="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.613 [INFO][5398] ipam.go 1216: Successfully claimed IPs: [192.168.47.134/26] block=192.168.47.128/26 handle="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.613 [INFO][5398] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.134/26] handle="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" host="ip-172-31-19-2" Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.614 [INFO][5398] ipam_plugin.go 379: Released host-wide IPAM lock. 
Oct 8 19:33:13.753585 containerd[2024]: 2024-10-08 19:33:13.614 [INFO][5398] ipam_plugin.go 288: Calico CNI IPAM assigned addresses IPv4=[192.168.47.134/26] IPv6=[] ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" HandleID="k8s-pod-network.804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Workload="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.625 [INFO][5380] k8s.go 386: Populated endpoint ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0", GenerateName:"calico-apiserver-69665fc5c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"a1400822-7870-497c-8f0a-0c1f0386e8a8", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69665fc5c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"", Pod:"calico-apiserver-69665fc5c6-4945x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e1af1f5fd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.627 [INFO][5380] k8s.go 387: Calico CNI using IPs: [192.168.47.134/32] ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.627 [INFO][5380] dataplane_linux.go 68: Setting the host side veth name to cali2e1af1f5fd0 ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.666 [INFO][5380] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.691 [INFO][5380] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0", GenerateName:"calico-apiserver-69665fc5c6-", 
Namespace:"calico-apiserver", SelfLink:"", UID:"a1400822-7870-497c-8f0a-0c1f0386e8a8", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 33, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69665fc5c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e", Pod:"calico-apiserver-69665fc5c6-4945x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.47.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e1af1f5fd0", MAC:"5a:ea:44:27:19:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:13.755005 containerd[2024]: 2024-10-08 19:33:13.741 [INFO][5380] k8s.go 500: Wrote updated endpoint to datastore ContainerID="804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e" Namespace="calico-apiserver" Pod="calico-apiserver-69665fc5c6-4945x" WorkloadEndpoint="ip--172--31--19--2-k8s-calico--apiserver--69665fc5c6--4945x-eth0" Oct 8 19:33:13.758380 systemd-logind[1996]: Removed session 8. Oct 8 19:33:13.853981 containerd[2024]: time="2024-10-08T19:33:13.852090861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 8 19:33:13.853981 containerd[2024]: time="2024-10-08T19:33:13.852248745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:13.853981 containerd[2024]: time="2024-10-08T19:33:13.852303933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 8 19:33:13.853981 containerd[2024]: time="2024-10-08T19:33:13.852338841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 8 19:33:13.972188 systemd[1]: Started cri-containerd-804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e.scope - libcontainer container 804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e. Oct 8 19:33:14.078033 containerd[2024]: time="2024-10-08T19:33:14.076478466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69665fc5c6-wwq7q,Uid:56a8d209-58cb-4c88-ad25-87ad5a71c08b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119\"" Oct 8 19:33:14.087269 containerd[2024]: time="2024-10-08T19:33:14.086813514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:33:14.287158 containerd[2024]: time="2024-10-08T19:33:14.286028635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69665fc5c6-4945x,Uid:a1400822-7870-497c-8f0a-0c1f0386e8a8,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e\"" Oct 8 19:33:14.865387 systemd-networkd[1854]: calib6e9bc405f2: Gained IPv6LL Oct 8 19:33:15.634472 systemd-networkd[1854]: cali2e1af1f5fd0: Gained IPv6LL Oct 8 19:33:17.578798 containerd[2024]: time="2024-10-08T19:33:17.576580427Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:17.582272 containerd[2024]: time="2024-10-08T19:33:17.582200531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=37849884" Oct 8 19:33:17.583613 containerd[2024]: time="2024-10-08T19:33:17.583511027Z" level=info msg="ImageCreate event name:\"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:17.593932 containerd[2024]: time="2024-10-08T19:33:17.593783123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:17.599127 containerd[2024]: time="2024-10-08T19:33:17.596941847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 3.510040265s" Oct 8 19:33:17.599127 containerd[2024]: time="2024-10-08T19:33:17.597033515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:33:17.605451 containerd[2024]: time="2024-10-08T19:33:17.605381471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\"" Oct 8 19:33:17.608068 containerd[2024]: time="2024-10-08T19:33:17.607679327Z" level=info msg="CreateContainer within sandbox \"bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:33:17.651322 
containerd[2024]: time="2024-10-08T19:33:17.648122568Z" level=info msg="CreateContainer within sandbox \"bd6196f9a0bdabe8147792fe5315ba309aac62c3000b0020c5fc82a694c05119\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a8c24bfa058cff7062234cfcc161c9314b8782d7a86d6b8730e8e8600fedac78\"" Oct 8 19:33:17.655920 containerd[2024]: time="2024-10-08T19:33:17.654181992Z" level=info msg="StartContainer for \"a8c24bfa058cff7062234cfcc161c9314b8782d7a86d6b8730e8e8600fedac78\"" Oct 8 19:33:17.659306 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591249397.mount: Deactivated successfully. Oct 8 19:33:17.771804 systemd[1]: Started cri-containerd-a8c24bfa058cff7062234cfcc161c9314b8782d7a86d6b8730e8e8600fedac78.scope - libcontainer container a8c24bfa058cff7062234cfcc161c9314b8782d7a86d6b8730e8e8600fedac78. Oct 8 19:33:18.060747 containerd[2024]: time="2024-10-08T19:33:18.060566746Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.1: active requests=0, bytes read=77" Oct 8 19:33:18.061289 containerd[2024]: time="2024-10-08T19:33:18.060931750Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 8 19:33:18.081478 containerd[2024]: time="2024-10-08T19:33:18.081355462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" with image id \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b4ee1aa27bdeddc34dd200145eb033b716cf598570206c96693a35a317ab4f1e\", size \"39217419\" in 475.896735ms" Oct 8 19:33:18.081478 containerd[2024]: time="2024-10-08T19:33:18.081455170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.1\" returns image reference \"sha256:913d8e601c95ebd056c4c949f148ec565327fa2c94a6c34bb4fcfbd9063a58ec\"" Oct 8 19:33:18.088617 
containerd[2024]: time="2024-10-08T19:33:18.088517230Z" level=info msg="CreateContainer within sandbox \"804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 8 19:33:18.134707 containerd[2024]: time="2024-10-08T19:33:18.134277346Z" level=info msg="CreateContainer within sandbox \"804e0834756e4bd3b600495dfad707c6e808e891e1509a9bd4e566a71c5ae13e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"720e1cb08dbb1a64accb9d03ed9c01b0e297312ca49cb0a1815287a87088c623\"" Oct 8 19:33:18.142056 containerd[2024]: time="2024-10-08T19:33:18.141848782Z" level=info msg="StartContainer for \"720e1cb08dbb1a64accb9d03ed9c01b0e297312ca49cb0a1815287a87088c623\"" Oct 8 19:33:18.180034 containerd[2024]: time="2024-10-08T19:33:18.179221438Z" level=info msg="StartContainer for \"a8c24bfa058cff7062234cfcc161c9314b8782d7a86d6b8730e8e8600fedac78\" returns successfully" Oct 8 19:33:18.257601 systemd[1]: Started cri-containerd-720e1cb08dbb1a64accb9d03ed9c01b0e297312ca49cb0a1815287a87088c623.scope - libcontainer container 720e1cb08dbb1a64accb9d03ed9c01b0e297312ca49cb0a1815287a87088c623. 
Oct 8 19:33:18.495847 containerd[2024]: time="2024-10-08T19:33:18.495748212Z" level=info msg="StartContainer for \"720e1cb08dbb1a64accb9d03ed9c01b0e297312ca49cb0a1815287a87088c623\" returns successfully" Oct 8 19:33:18.594869 ntpd[1990]: Listen normally on 13 calib6e9bc405f2 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:33:18.595838 ntpd[1990]: 8 Oct 19:33:18 ntpd[1990]: Listen normally on 13 calib6e9bc405f2 [fe80::ecee:eeff:feee:eeee%11]:123 Oct 8 19:33:18.595838 ntpd[1990]: 8 Oct 19:33:18 ntpd[1990]: Listen normally on 14 cali2e1af1f5fd0 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 8 19:33:18.595114 ntpd[1990]: Listen normally on 14 cali2e1af1f5fd0 [fe80::ecee:eeff:feee:eeee%12]:123 Oct 8 19:33:18.757592 systemd[1]: Started sshd@8-172.31.19.2:22-139.178.68.195:43736.service - OpenSSH per-connection server daemon (139.178.68.195:43736). Oct 8 19:33:18.999401 sshd[5620]: Accepted publickey for core from 139.178.68.195 port 43736 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8 Oct 8 19:33:19.006006 sshd[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 8 19:33:19.026903 systemd-logind[1996]: New session 9 of user core. Oct 8 19:33:19.035288 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 8 19:33:19.097679 containerd[2024]: time="2024-10-08T19:33:19.097469663Z" level=info msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" Oct 8 19:33:19.122046 kubelet[3361]: I1008 19:33:19.120229 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69665fc5c6-4945x" podStartSLOduration=4.329715596 podStartE2EDuration="8.120152735s" podCreationTimestamp="2024-10-08 19:33:11 +0000 UTC" firstStartedPulling="2024-10-08 19:33:14.293688055 +0000 UTC m=+55.578586909" lastFinishedPulling="2024-10-08 19:33:18.084125206 +0000 UTC m=+59.369024048" observedRunningTime="2024-10-08 19:33:19.117907691 +0000 UTC m=+60.402806557" watchObservedRunningTime="2024-10-08 19:33:19.120152735 +0000 UTC m=+60.405051601" Oct 8 19:33:19.302322 kubelet[3361]: I1008 19:33:19.298317 3361 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69665fc5c6-wwq7q" podStartSLOduration=4.7800339229999995 podStartE2EDuration="8.298275624s" podCreationTimestamp="2024-10-08 19:33:11 +0000 UTC" firstStartedPulling="2024-10-08 19:33:14.08505255 +0000 UTC m=+55.369951404" lastFinishedPulling="2024-10-08 19:33:17.603294239 +0000 UTC m=+58.888193105" observedRunningTime="2024-10-08 19:33:19.292927464 +0000 UTC m=+60.577826342" watchObservedRunningTime="2024-10-08 19:33:19.298275624 +0000 UTC m=+60.583174478" Oct 8 19:33:19.568541 sshd[5620]: pam_unix(sshd:session): session closed for user core Oct 8 19:33:19.588117 systemd[1]: sshd@8-172.31.19.2:22-139.178.68.195:43736.service: Deactivated successfully. Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.451 [WARNING][5638] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9af281-d393-45ad-bbb2-55616503d436", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a", Pod:"coredns-6f6b679f8f-5gqd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86676f450e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.451 [INFO][5638] k8s.go 608: Cleaning up netns 
ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.451 [INFO][5638] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" iface="eth0" netns="" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.451 [INFO][5638] k8s.go 615: Releasing IP address(es) ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.451 [INFO][5638] utils.go 188: Calico CNI releasing IP address ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.548 [INFO][5658] ipam_plugin.go 417: Releasing address using handleID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.549 [INFO][5658] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.549 [INFO][5658] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.564 [WARNING][5658] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.565 [INFO][5658] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.573 [INFO][5658] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:19.602283 containerd[2024]: 2024-10-08 19:33:19.586 [INFO][5638] k8s.go 621: Teardown processing complete. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.602283 containerd[2024]: time="2024-10-08T19:33:19.599868553Z" level=info msg="TearDown network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" successfully" Oct 8 19:33:19.602283 containerd[2024]: time="2024-10-08T19:33:19.599950033Z" level=info msg="StopPodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" returns successfully" Oct 8 19:33:19.606498 containerd[2024]: time="2024-10-08T19:33:19.604168297Z" level=info msg="RemovePodSandbox for \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" Oct 8 19:33:19.606498 containerd[2024]: time="2024-10-08T19:33:19.604340689Z" level=info msg="Forcibly stopping sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\"" Oct 8 19:33:19.608488 systemd[1]: session-9.scope: Deactivated successfully. Oct 8 19:33:19.618060 systemd-logind[1996]: Session 9 logged out. Waiting for processes to exit. Oct 8 19:33:19.631344 systemd-logind[1996]: Removed session 9. 
Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.778 [WARNING][5678] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"bf9af281-d393-45ad-bbb2-55616503d436", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"c11232bf5624f62acef5f5ba187f5d03d65326b51d2aa4657647281248c7be9a", Pod:"coredns-6f6b679f8f-5gqd7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86676f450e3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.780 [INFO][5678] k8s.go 608: Cleaning up netns ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.781 [INFO][5678] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" iface="eth0" netns="" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.781 [INFO][5678] k8s.go 615: Releasing IP address(es) ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.781 [INFO][5678] utils.go 188: Calico CNI releasing IP address ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.851 [INFO][5685] ipam_plugin.go 417: Releasing address using handleID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.852 [INFO][5685] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.852 [INFO][5685] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.872 [WARNING][5685] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.872 [INFO][5685] ipam_plugin.go 445: Releasing address using workloadID ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" HandleID="k8s-pod-network.98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--5gqd7-eth0" Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.876 [INFO][5685] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:19.886692 containerd[2024]: 2024-10-08 19:33:19.880 [INFO][5678] k8s.go 621: Teardown processing complete. ContainerID="98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1" Oct 8 19:33:19.886692 containerd[2024]: time="2024-10-08T19:33:19.885202203Z" level=info msg="TearDown network for sandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" successfully" Oct 8 19:33:19.894457 containerd[2024]: time="2024-10-08T19:33:19.894340863Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:33:19.894708 containerd[2024]: time="2024-10-08T19:33:19.894516375Z" level=info msg="RemovePodSandbox \"98c773eca40d82870697d45c643aade183338239dc9d39615c4f7325217f04b1\" returns successfully" Oct 8 19:33:19.895544 containerd[2024]: time="2024-10-08T19:33:19.895449291Z" level=info msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" Oct 8 19:33:20.117835 kubelet[3361]: I1008 19:33:20.114478 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:33:20.117835 kubelet[3361]: I1008 19:33:20.115463 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.095 [WARNING][5706] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0", GenerateName:"calico-kube-controllers-858bc65f54-", Namespace:"calico-system", SelfLink:"", UID:"5dc824f0-1019-4436-8135-184fe45fe379", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bc65f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", 
ContainerID:"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab", Pod:"calico-kube-controllers-858bc65f54-9m4dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c14bdf0bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.096 [INFO][5706] k8s.go 608: Cleaning up netns ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.096 [INFO][5706] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" iface="eth0" netns="" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.096 [INFO][5706] k8s.go 615: Releasing IP address(es) ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.096 [INFO][5706] utils.go 188: Calico CNI releasing IP address ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.218 [INFO][5714] ipam_plugin.go 417: Releasing address using handleID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.219 [INFO][5714] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.220 [INFO][5714] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.262 [WARNING][5714] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.262 [INFO][5714] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.267 [INFO][5714] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:20.279538 containerd[2024]: 2024-10-08 19:33:20.273 [INFO][5706] k8s.go 621: Teardown processing complete. 
ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.281724 containerd[2024]: time="2024-10-08T19:33:20.279623497Z" level=info msg="TearDown network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" successfully" Oct 8 19:33:20.281724 containerd[2024]: time="2024-10-08T19:33:20.279705781Z" level=info msg="StopPodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" returns successfully" Oct 8 19:33:20.281724 containerd[2024]: time="2024-10-08T19:33:20.281667901Z" level=info msg="RemovePodSandbox for \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" Oct 8 19:33:20.281893 containerd[2024]: time="2024-10-08T19:33:20.281729005Z" level=info msg="Forcibly stopping sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\"" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.426 [WARNING][5733] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0", GenerateName:"calico-kube-controllers-858bc65f54-", Namespace:"calico-system", SelfLink:"", UID:"5dc824f0-1019-4436-8135-184fe45fe379", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"858bc65f54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"a24b2704ec1ae396ae6a0b8bc77e9de44dad68696f61e2f2eda6acbae5b68eab", Pod:"calico-kube-controllers-858bc65f54-9m4dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.47.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c14bdf0bda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.426 [INFO][5733] k8s.go 608: Cleaning up netns ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.427 [INFO][5733] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" iface="eth0" netns="" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.427 [INFO][5733] k8s.go 615: Releasing IP address(es) ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.427 [INFO][5733] utils.go 188: Calico CNI releasing IP address ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.503 [INFO][5740] ipam_plugin.go 417: Releasing address using handleID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.504 [INFO][5740] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.504 [INFO][5740] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.524 [WARNING][5740] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.525 [INFO][5740] ipam_plugin.go 445: Releasing address using workloadID ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" HandleID="k8s-pod-network.e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Workload="ip--172--31--19--2-k8s-calico--kube--controllers--858bc65f54--9m4dk-eth0" Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.533 [INFO][5740] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:20.547524 containerd[2024]: 2024-10-08 19:33:20.539 [INFO][5733] k8s.go 621: Teardown processing complete. ContainerID="e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4" Oct 8 19:33:20.547524 containerd[2024]: time="2024-10-08T19:33:20.545221934Z" level=info msg="TearDown network for sandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" successfully" Oct 8 19:33:20.553188 containerd[2024]: time="2024-10-08T19:33:20.552922970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:33:20.553463 containerd[2024]: time="2024-10-08T19:33:20.553287398Z" level=info msg="RemovePodSandbox \"e8a90703cbe0947d31f8c6e68c55fee4d7d328ab9dd23e077852262d29124aa4\" returns successfully" Oct 8 19:33:20.557025 containerd[2024]: time="2024-10-08T19:33:20.554834762Z" level=info msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.680 [WARNING][5758] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19", Pod:"coredns-6f6b679f8f-msl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28bde894187", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.680 [INFO][5758] k8s.go 608: Cleaning up netns ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.680 [INFO][5758] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" iface="eth0" netns="" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.680 [INFO][5758] k8s.go 615: Releasing IP address(es) ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.680 [INFO][5758] utils.go 188: Calico CNI releasing IP address ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.749 [INFO][5765] ipam_plugin.go 417: Releasing address using handleID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.750 [INFO][5765] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.750 [INFO][5765] ipam_plugin.go 373: Acquired host-wide IPAM lock. 
Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.768 [WARNING][5765] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.768 [INFO][5765] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.773 [INFO][5765] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:20.780852 containerd[2024]: 2024-10-08 19:33:20.777 [INFO][5758] k8s.go 621: Teardown processing complete. 
ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:20.785047 containerd[2024]: time="2024-10-08T19:33:20.780883971Z" level=info msg="TearDown network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" successfully" Oct 8 19:33:20.785047 containerd[2024]: time="2024-10-08T19:33:20.780969879Z" level=info msg="StopPodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" returns successfully" Oct 8 19:33:20.785047 containerd[2024]: time="2024-10-08T19:33:20.782554719Z" level=info msg="RemovePodSandbox for \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" Oct 8 19:33:20.785047 containerd[2024]: time="2024-10-08T19:33:20.782750319Z" level=info msg="Forcibly stopping sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\"" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:20.950 [WARNING][5783] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"d686f5b9-ae61-4c98-9562-0d2b1ca6daa8", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"37e11937398a525336470a3f7ada9f0dad342fa74309e424f146341c50edcd19", Pod:"coredns-6f6b679f8f-msl8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.47.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28bde894187", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:20.953 [INFO][5783] k8s.go 608: Cleaning up netns 
ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:20.955 [INFO][5783] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" iface="eth0" netns="" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:20.955 [INFO][5783] k8s.go 615: Releasing IP address(es) ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:20.955 [INFO][5783] utils.go 188: Calico CNI releasing IP address ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.025 [INFO][5789] ipam_plugin.go 417: Releasing address using handleID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.026 [INFO][5789] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.026 [INFO][5789] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.050 [WARNING][5789] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.050 [INFO][5789] ipam_plugin.go 445: Releasing address using workloadID ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" HandleID="k8s-pod-network.1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Workload="ip--172--31--19--2-k8s-coredns--6f6b679f8f--msl8r-eth0" Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.054 [INFO][5789] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:21.077733 containerd[2024]: 2024-10-08 19:33:21.069 [INFO][5783] k8s.go 621: Teardown processing complete. ContainerID="1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c" Oct 8 19:33:21.079966 containerd[2024]: time="2024-10-08T19:33:21.077699917Z" level=info msg="TearDown network for sandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" successfully" Oct 8 19:33:21.087591 containerd[2024]: time="2024-10-08T19:33:21.087488845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 8 19:33:21.087920 containerd[2024]: time="2024-10-08T19:33:21.087669637Z" level=info msg="RemovePodSandbox \"1637d8c349c8a64bbb9a1ad209a9c77c764bee31bb8949552e94e307b4f3a04c\" returns successfully" Oct 8 19:33:21.090024 containerd[2024]: time="2024-10-08T19:33:21.089882149Z" level=info msg="StopPodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.254 [WARNING][5807] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4da6ef7d-86ec-4a74-b95b-c04030a59fa2", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc", Pod:"csi-node-driver-svwzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali9e1d3629809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.255 [INFO][5807] k8s.go 608: Cleaning up netns ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.255 [INFO][5807] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" iface="eth0" netns="" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.255 [INFO][5807] k8s.go 615: Releasing IP address(es) ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.255 [INFO][5807] utils.go 188: Calico CNI releasing IP address ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.322 [INFO][5813] ipam_plugin.go 417: Releasing address using handleID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.322 [INFO][5813] ipam_plugin.go 358: About to acquire host-wide IPAM lock. Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.322 [INFO][5813] ipam_plugin.go 373: Acquired host-wide IPAM lock. Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.349 [WARNING][5813] ipam_plugin.go 434: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.350 [INFO][5813] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0" Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.352 [INFO][5813] ipam_plugin.go 379: Released host-wide IPAM lock. Oct 8 19:33:21.360787 containerd[2024]: 2024-10-08 19:33:21.356 [INFO][5807] k8s.go 621: Teardown processing complete. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Oct 8 19:33:21.360787 containerd[2024]: time="2024-10-08T19:33:21.359686118Z" level=info msg="TearDown network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" successfully" Oct 8 19:33:21.360787 containerd[2024]: time="2024-10-08T19:33:21.359729606Z" level=info msg="StopPodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" returns successfully" Oct 8 19:33:21.364515 containerd[2024]: time="2024-10-08T19:33:21.364372382Z" level=info msg="RemovePodSandbox for \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" Oct 8 19:33:21.364515 containerd[2024]: time="2024-10-08T19:33:21.364453514Z" level=info msg="Forcibly stopping sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\"" Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.463 [WARNING][5831] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4da6ef7d-86ec-4a74-b95b-c04030a59fa2", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.October, 8, 19, 32, 32, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"779867c8f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-19-2", ContainerID:"b668d0b008e82fe0bc602f54d4aa8368158f4b7671bf68a8fde4a54a60cd0ebc", Pod:"csi-node-driver-svwzg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9e1d3629809", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.463 [INFO][5831] k8s.go 608: Cleaning up netns ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.463 [INFO][5831] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" iface="eth0" netns=""
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.464 [INFO][5831] k8s.go 615: Releasing IP address(es) ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.464 [INFO][5831] utils.go 188: Calico CNI releasing IP address ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.540 [INFO][5837] ipam_plugin.go 417: Releasing address using handleID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.540 [INFO][5837] ipam_plugin.go 358: About to acquire host-wide IPAM lock.
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.541 [INFO][5837] ipam_plugin.go 373: Acquired host-wide IPAM lock.
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.563 [WARNING][5837] ipam_plugin.go 434: Asked to release address but it doesn't exist. Ignoring ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.563 [INFO][5837] ipam_plugin.go 445: Releasing address using workloadID ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" HandleID="k8s-pod-network.6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b" Workload="ip--172--31--19--2-k8s-csi--node--driver--svwzg-eth0"
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.566 [INFO][5837] ipam_plugin.go 379: Released host-wide IPAM lock.
Oct 8 19:33:21.571687 containerd[2024]: 2024-10-08 19:33:21.569 [INFO][5831] k8s.go 621: Teardown processing complete. ContainerID="6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b"
Oct 8 19:33:21.572482 containerd[2024]: time="2024-10-08T19:33:21.571775019Z" level=info msg="TearDown network for sandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" successfully"
Oct 8 19:33:21.577944 containerd[2024]: time="2024-10-08T19:33:21.577814607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Oct 8 19:33:21.578169 containerd[2024]: time="2024-10-08T19:33:21.577972059Z" level=info msg="RemovePodSandbox \"6128e302ac538135908fa0bdc0713670d6b4259aa7479ebe26164ba5a042dc0b\" returns successfully"
Oct 8 19:33:24.607657 systemd[1]: Started sshd@9-172.31.19.2:22-139.178.68.195:44412.service - OpenSSH per-connection server daemon (139.178.68.195:44412).
Oct 8 19:33:24.805201 sshd[5857]: Accepted publickey for core from 139.178.68.195 port 44412 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:24.811343 sshd[5857]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:24.822955 systemd-logind[1996]: New session 10 of user core.
Oct 8 19:33:24.829567 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 8 19:33:25.126753 sshd[5857]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:25.135504 systemd[1]: sshd@9-172.31.19.2:22-139.178.68.195:44412.service: Deactivated successfully.
Oct 8 19:33:25.142065 systemd[1]: session-10.scope: Deactivated successfully.
Oct 8 19:33:25.144665 systemd-logind[1996]: Session 10 logged out. Waiting for processes to exit.
Oct 8 19:33:25.148804 systemd-logind[1996]: Removed session 10.
Oct 8 19:33:30.174748 systemd[1]: Started sshd@10-172.31.19.2:22-139.178.68.195:44422.service - OpenSSH per-connection server daemon (139.178.68.195:44422).
Oct 8 19:33:30.379289 sshd[5873]: Accepted publickey for core from 139.178.68.195 port 44422 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:30.382442 sshd[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:30.391341 systemd-logind[1996]: New session 11 of user core.
Oct 8 19:33:30.403381 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 8 19:33:30.686473 sshd[5873]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:30.692823 systemd-logind[1996]: Session 11 logged out. Waiting for processes to exit.
Oct 8 19:33:30.694423 systemd[1]: sshd@10-172.31.19.2:22-139.178.68.195:44422.service: Deactivated successfully.
Oct 8 19:33:30.701647 systemd[1]: session-11.scope: Deactivated successfully.
Oct 8 19:33:30.703618 systemd-logind[1996]: Removed session 11.
Oct 8 19:33:30.724642 systemd[1]: Started sshd@11-172.31.19.2:22-139.178.68.195:55404.service - OpenSSH per-connection server daemon (139.178.68.195:55404).
Oct 8 19:33:30.908861 sshd[5886]: Accepted publickey for core from 139.178.68.195 port 55404 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:30.911846 sshd[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:30.923668 systemd-logind[1996]: New session 12 of user core.
Oct 8 19:33:30.935461 systemd[1]: Started session-12.scope - Session 12 of User core.
Oct 8 19:33:31.275760 sshd[5886]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:31.287972 systemd[1]: sshd@11-172.31.19.2:22-139.178.68.195:55404.service: Deactivated successfully.
Oct 8 19:33:31.299212 systemd[1]: session-12.scope: Deactivated successfully.
Oct 8 19:33:31.303039 systemd-logind[1996]: Session 12 logged out. Waiting for processes to exit.
Oct 8 19:33:31.338836 systemd[1]: Started sshd@12-172.31.19.2:22-139.178.68.195:55410.service - OpenSSH per-connection server daemon (139.178.68.195:55410).
Oct 8 19:33:31.344375 systemd-logind[1996]: Removed session 12.
Oct 8 19:33:31.546574 sshd[5896]: Accepted publickey for core from 139.178.68.195 port 55410 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:31.550474 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:31.564403 systemd-logind[1996]: New session 13 of user core.
Oct 8 19:33:31.575516 systemd[1]: Started session-13.scope - Session 13 of User core.
Oct 8 19:33:31.847779 sshd[5896]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:31.858930 systemd[1]: sshd@12-172.31.19.2:22-139.178.68.195:55410.service: Deactivated successfully.
Oct 8 19:33:31.865752 systemd[1]: session-13.scope: Deactivated successfully.
Oct 8 19:33:31.868692 systemd-logind[1996]: Session 13 logged out. Waiting for processes to exit.
Oct 8 19:33:31.873262 systemd-logind[1996]: Removed session 13.
Oct 8 19:33:36.897193 systemd[1]: Started sshd@13-172.31.19.2:22-139.178.68.195:55416.service - OpenSSH per-connection server daemon (139.178.68.195:55416).
Oct 8 19:33:37.097786 sshd[5940]: Accepted publickey for core from 139.178.68.195 port 55416 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:37.102223 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:37.120549 systemd-logind[1996]: New session 14 of user core.
Oct 8 19:33:37.122877 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 8 19:33:37.459244 sshd[5940]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:37.470183 systemd-logind[1996]: Session 14 logged out. Waiting for processes to exit.
Oct 8 19:33:37.470395 systemd[1]: sshd@13-172.31.19.2:22-139.178.68.195:55416.service: Deactivated successfully.
Oct 8 19:33:37.478835 systemd[1]: session-14.scope: Deactivated successfully.
Oct 8 19:33:37.486158 systemd-logind[1996]: Removed session 14.
Oct 8 19:33:42.514712 systemd[1]: Started sshd@14-172.31.19.2:22-139.178.68.195:59640.service - OpenSSH per-connection server daemon (139.178.68.195:59640).
Oct 8 19:33:42.722382 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 59640 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:42.725894 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:42.736247 systemd-logind[1996]: New session 15 of user core.
Oct 8 19:33:42.745059 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 8 19:33:43.019211 sshd[5982]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:43.026345 systemd[1]: sshd@14-172.31.19.2:22-139.178.68.195:59640.service: Deactivated successfully.
Oct 8 19:33:43.031032 systemd[1]: session-15.scope: Deactivated successfully.
Oct 8 19:33:43.038583 systemd-logind[1996]: Session 15 logged out. Waiting for processes to exit.
Oct 8 19:33:43.044539 systemd-logind[1996]: Removed session 15.
Oct 8 19:33:46.335539 kubelet[3361]: I1008 19:33:46.335120 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:33:48.071328 systemd[1]: Started sshd@15-172.31.19.2:22-139.178.68.195:59648.service - OpenSSH per-connection server daemon (139.178.68.195:59648).
Oct 8 19:33:48.274521 sshd[6022]: Accepted publickey for core from 139.178.68.195 port 59648 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:48.279794 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:48.301453 systemd-logind[1996]: New session 16 of user core.
Oct 8 19:33:48.308872 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 8 19:33:48.597523 sshd[6022]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:48.607179 systemd-logind[1996]: Session 16 logged out. Waiting for processes to exit.
Oct 8 19:33:48.608101 systemd[1]: sshd@15-172.31.19.2:22-139.178.68.195:59648.service: Deactivated successfully.
Oct 8 19:33:48.614900 systemd[1]: session-16.scope: Deactivated successfully.
Oct 8 19:33:48.621561 systemd-logind[1996]: Removed session 16.
Oct 8 19:33:51.979560 kubelet[3361]: I1008 19:33:51.979449 3361 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Oct 8 19:33:53.649549 systemd[1]: Started sshd@16-172.31.19.2:22-139.178.68.195:36374.service - OpenSSH per-connection server daemon (139.178.68.195:36374).
Oct 8 19:33:53.839047 sshd[6038]: Accepted publickey for core from 139.178.68.195 port 36374 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:53.845575 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:53.859694 systemd-logind[1996]: New session 17 of user core.
Oct 8 19:33:53.869448 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 8 19:33:54.298826 sshd[6038]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:54.311684 systemd[1]: sshd@16-172.31.19.2:22-139.178.68.195:36374.service: Deactivated successfully.
Oct 8 19:33:54.318806 systemd[1]: session-17.scope: Deactivated successfully.
Oct 8 19:33:54.323472 systemd-logind[1996]: Session 17 logged out. Waiting for processes to exit.
Oct 8 19:33:54.356939 systemd[1]: Started sshd@17-172.31.19.2:22-139.178.68.195:36386.service - OpenSSH per-connection server daemon (139.178.68.195:36386).
Oct 8 19:33:54.360694 systemd-logind[1996]: Removed session 17.
Oct 8 19:33:54.556367 sshd[6058]: Accepted publickey for core from 139.178.68.195 port 36386 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:54.559504 sshd[6058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:54.571544 systemd-logind[1996]: New session 18 of user core.
Oct 8 19:33:54.576378 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 8 19:33:55.259025 sshd[6058]: pam_unix(sshd:session): session closed for user core
Oct 8 19:33:55.265469 systemd[1]: sshd@17-172.31.19.2:22-139.178.68.195:36386.service: Deactivated successfully.
Oct 8 19:33:55.270699 systemd[1]: session-18.scope: Deactivated successfully.
Oct 8 19:33:55.275467 systemd-logind[1996]: Session 18 logged out. Waiting for processes to exit.
Oct 8 19:33:55.278019 systemd-logind[1996]: Removed session 18.
Oct 8 19:33:55.297391 systemd[1]: Started sshd@18-172.31.19.2:22-139.178.68.195:36394.service - OpenSSH per-connection server daemon (139.178.68.195:36394).
Oct 8 19:33:55.490126 sshd[6068]: Accepted publickey for core from 139.178.68.195 port 36394 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:33:55.496507 sshd[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:33:55.512864 systemd-logind[1996]: New session 19 of user core.
Oct 8 19:33:55.526517 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 8 19:34:00.180294 sshd[6068]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:00.200522 systemd[1]: sshd@18-172.31.19.2:22-139.178.68.195:36394.service: Deactivated successfully.
Oct 8 19:34:00.210712 systemd[1]: session-19.scope: Deactivated successfully.
Oct 8 19:34:00.211109 systemd[1]: session-19.scope: Consumed 1.292s CPU time.
Oct 8 19:34:00.215765 systemd-logind[1996]: Session 19 logged out. Waiting for processes to exit.
Oct 8 19:34:00.256894 systemd[1]: Started sshd@19-172.31.19.2:22-139.178.68.195:36402.service - OpenSSH per-connection server daemon (139.178.68.195:36402).
Oct 8 19:34:00.264802 systemd-logind[1996]: Removed session 19.
Oct 8 19:34:00.497557 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 36402 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:00.502730 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:00.517537 systemd-logind[1996]: New session 20 of user core.
Oct 8 19:34:00.525777 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 8 19:34:01.188143 sshd[6091]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:01.198765 systemd-logind[1996]: Session 20 logged out. Waiting for processes to exit.
Oct 8 19:34:01.199687 systemd[1]: sshd@19-172.31.19.2:22-139.178.68.195:36402.service: Deactivated successfully.
Oct 8 19:34:01.209009 systemd[1]: session-20.scope: Deactivated successfully.
Oct 8 19:34:01.224686 systemd-logind[1996]: Removed session 20.
Oct 8 19:34:01.234550 systemd[1]: Started sshd@20-172.31.19.2:22-139.178.68.195:44888.service - OpenSSH per-connection server daemon (139.178.68.195:44888).
Oct 8 19:34:01.409645 sshd[6102]: Accepted publickey for core from 139.178.68.195 port 44888 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:01.412608 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:01.423066 systemd-logind[1996]: New session 21 of user core.
Oct 8 19:34:01.428302 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 8 19:34:01.699157 sshd[6102]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:01.706219 systemd[1]: sshd@20-172.31.19.2:22-139.178.68.195:44888.service: Deactivated successfully.
Oct 8 19:34:01.713951 systemd[1]: session-21.scope: Deactivated successfully.
Oct 8 19:34:01.718824 systemd-logind[1996]: Session 21 logged out. Waiting for processes to exit.
Oct 8 19:34:01.724293 systemd-logind[1996]: Removed session 21.
Oct 8 19:34:06.756500 systemd[1]: Started sshd@21-172.31.19.2:22-139.178.68.195:44892.service - OpenSSH per-connection server daemon (139.178.68.195:44892).
Oct 8 19:34:06.951782 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 44892 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:06.957375 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:06.971514 systemd-logind[1996]: New session 22 of user core.
Oct 8 19:34:06.983827 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 8 19:34:07.302737 sshd[6142]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:07.314372 systemd[1]: sshd@21-172.31.19.2:22-139.178.68.195:44892.service: Deactivated successfully.
Oct 8 19:34:07.323369 systemd[1]: session-22.scope: Deactivated successfully.
Oct 8 19:34:07.326617 systemd-logind[1996]: Session 22 logged out. Waiting for processes to exit.
Oct 8 19:34:07.330782 systemd-logind[1996]: Removed session 22.
Oct 8 19:34:12.348364 systemd[1]: Started sshd@22-172.31.19.2:22-139.178.68.195:46946.service - OpenSSH per-connection server daemon (139.178.68.195:46946).
Oct 8 19:34:12.560755 sshd[6181]: Accepted publickey for core from 139.178.68.195 port 46946 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:12.566353 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:12.583673 systemd-logind[1996]: New session 23 of user core.
Oct 8 19:34:12.588898 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 8 19:34:12.859942 sshd[6181]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:12.868221 systemd[1]: sshd@22-172.31.19.2:22-139.178.68.195:46946.service: Deactivated successfully.
Oct 8 19:34:12.875377 systemd[1]: session-23.scope: Deactivated successfully.
Oct 8 19:34:12.877559 systemd-logind[1996]: Session 23 logged out. Waiting for processes to exit.
Oct 8 19:34:12.881653 systemd-logind[1996]: Removed session 23.
Oct 8 19:34:17.912722 systemd[1]: Started sshd@23-172.31.19.2:22-139.178.68.195:46954.service - OpenSSH per-connection server daemon (139.178.68.195:46954).
Oct 8 19:34:18.104664 sshd[6200]: Accepted publickey for core from 139.178.68.195 port 46954 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:18.110488 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:18.122811 systemd-logind[1996]: New session 24 of user core.
Oct 8 19:34:18.133663 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 8 19:34:18.420567 sshd[6200]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:18.433694 systemd[1]: sshd@23-172.31.19.2:22-139.178.68.195:46954.service: Deactivated successfully.
Oct 8 19:34:18.442816 systemd[1]: session-24.scope: Deactivated successfully.
Oct 8 19:34:18.452104 systemd-logind[1996]: Session 24 logged out. Waiting for processes to exit.
Oct 8 19:34:18.456658 systemd-logind[1996]: Removed session 24.
Oct 8 19:34:22.070804 update_engine[1997]: I1008 19:34:22.070413 1997 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Oct 8 19:34:22.070804 update_engine[1997]: I1008 19:34:22.070482 1997 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Oct 8 19:34:22.072036 update_engine[1997]: I1008 19:34:22.071568 1997 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Oct 8 19:34:22.072592 update_engine[1997]: I1008 19:34:22.072543 1997 omaha_request_params.cc:62] Current group set to stable
Oct 8 19:34:22.072877 update_engine[1997]: I1008 19:34:22.072712 1997 update_attempter.cc:499] Already updated boot flags. Skipping.
Oct 8 19:34:22.072877 update_engine[1997]: I1008 19:34:22.072734 1997 update_attempter.cc:643] Scheduling an action processor start.
Oct 8 19:34:22.072877 update_engine[1997]: I1008 19:34:22.072762 1997 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 8 19:34:22.072877 update_engine[1997]: I1008 19:34:22.072824 1997 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Oct 8 19:34:22.073172 update_engine[1997]: I1008 19:34:22.072924 1997 omaha_request_action.cc:271] Posting an Omaha request to disabled
Oct 8 19:34:22.073172 update_engine[1997]: I1008 19:34:22.072937 1997 omaha_request_action.cc:272] Request:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]:
Oct 8 19:34:22.073172 update_engine[1997]: I1008 19:34:22.072945 1997 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:34:22.074449 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Oct 8 19:34:22.077011 update_engine[1997]: I1008 19:34:22.076937 1997 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:34:22.077441 update_engine[1997]: I1008 19:34:22.077399 1997 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:34:22.121500 update_engine[1997]: E1008 19:34:22.121405 1997 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:34:22.121735 update_engine[1997]: I1008 19:34:22.121578 1997 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Oct 8 19:34:23.466853 systemd[1]: Started sshd@24-172.31.19.2:22-139.178.68.195:44440.service - OpenSSH per-connection server daemon (139.178.68.195:44440).
Oct 8 19:34:23.667692 sshd[6221]: Accepted publickey for core from 139.178.68.195 port 44440 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:23.672687 sshd[6221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:23.682835 systemd-logind[1996]: New session 25 of user core.
Oct 8 19:34:23.694494 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 8 19:34:23.953583 sshd[6221]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:23.966579 systemd[1]: sshd@24-172.31.19.2:22-139.178.68.195:44440.service: Deactivated successfully.
Oct 8 19:34:23.967738 systemd-logind[1996]: Session 25 logged out. Waiting for processes to exit.
Oct 8 19:34:23.976478 systemd[1]: session-25.scope: Deactivated successfully.
Oct 8 19:34:23.984744 systemd-logind[1996]: Removed session 25.
Oct 8 19:34:28.992615 systemd[1]: Started sshd@25-172.31.19.2:22-139.178.68.195:44448.service - OpenSSH per-connection server daemon (139.178.68.195:44448).
Oct 8 19:34:29.179846 sshd[6241]: Accepted publickey for core from 139.178.68.195 port 44448 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:29.184328 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:29.199692 systemd-logind[1996]: New session 26 of user core.
Oct 8 19:34:29.211661 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 8 19:34:29.498876 sshd[6241]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:29.509878 systemd[1]: sshd@25-172.31.19.2:22-139.178.68.195:44448.service: Deactivated successfully.
Oct 8 19:34:29.517253 systemd[1]: session-26.scope: Deactivated successfully.
Oct 8 19:34:29.521138 systemd-logind[1996]: Session 26 logged out. Waiting for processes to exit.
Oct 8 19:34:29.524853 systemd-logind[1996]: Removed session 26.
Oct 8 19:34:32.070097 update_engine[1997]: I1008 19:34:32.069640 1997 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:34:32.071080 update_engine[1997]: I1008 19:34:32.070217 1997 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:34:32.071080 update_engine[1997]: I1008 19:34:32.070592 1997 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:34:32.071219 update_engine[1997]: E1008 19:34:32.071102 1997 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:34:32.071219 update_engine[1997]: I1008 19:34:32.071174 1997 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Oct 8 19:34:34.544817 systemd[1]: Started sshd@26-172.31.19.2:22-139.178.68.195:58030.service - OpenSSH per-connection server daemon (139.178.68.195:58030).
Oct 8 19:34:34.759713 sshd[6256]: Accepted publickey for core from 139.178.68.195 port 58030 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:34.769712 sshd[6256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:34.791039 systemd-logind[1996]: New session 27 of user core.
Oct 8 19:34:34.796508 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 8 19:34:35.073079 sshd[6256]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:35.083492 systemd[1]: sshd@26-172.31.19.2:22-139.178.68.195:58030.service: Deactivated successfully.
Oct 8 19:34:35.088867 systemd[1]: session-27.scope: Deactivated successfully.
Oct 8 19:34:35.092203 systemd-logind[1996]: Session 27 logged out. Waiting for processes to exit.
Oct 8 19:34:35.096235 systemd-logind[1996]: Removed session 27.
Oct 8 19:34:40.116655 systemd[1]: Started sshd@27-172.31.19.2:22-139.178.68.195:58044.service - OpenSSH per-connection server daemon (139.178.68.195:58044).
Oct 8 19:34:40.310445 sshd[6327]: Accepted publickey for core from 139.178.68.195 port 58044 ssh2: RSA SHA256:IeMX6f66zb7RPZo/kruzSd2zvwuQNDsSkQpBR1XCjX8
Oct 8 19:34:40.314555 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Oct 8 19:34:40.327612 systemd-logind[1996]: New session 28 of user core.
Oct 8 19:34:40.332333 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 8 19:34:40.616261 sshd[6327]: pam_unix(sshd:session): session closed for user core
Oct 8 19:34:40.625285 systemd[1]: sshd@27-172.31.19.2:22-139.178.68.195:58044.service: Deactivated successfully.
Oct 8 19:34:40.632414 systemd[1]: session-28.scope: Deactivated successfully.
Oct 8 19:34:40.636494 systemd-logind[1996]: Session 28 logged out. Waiting for processes to exit.
Oct 8 19:34:40.641140 systemd-logind[1996]: Removed session 28.
Oct 8 19:34:42.068448 update_engine[1997]: I1008 19:34:42.068287 1997 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:34:42.070110 update_engine[1997]: I1008 19:34:42.069775 1997 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:34:42.070481 update_engine[1997]: I1008 19:34:42.070375 1997 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:34:42.071218 update_engine[1997]: E1008 19:34:42.071081 1997 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:34:42.071499 update_engine[1997]: I1008 19:34:42.071321 1997 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Oct 8 19:34:52.076105 update_engine[1997]: I1008 19:34:52.076037 1997 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:34:52.076799 update_engine[1997]: I1008 19:34:52.076321 1997 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:34:52.076799 update_engine[1997]: I1008 19:34:52.076704 1997 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:34:52.077320 update_engine[1997]: E1008 19:34:52.077273 1997 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:34:52.077391 update_engine[1997]: I1008 19:34:52.077363 1997 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 19:34:52.077391 update_engine[1997]: I1008 19:34:52.077376 1997 omaha_request_action.cc:617] Omaha request response:
Oct 8 19:34:52.077573 update_engine[1997]: E1008 19:34:52.077527 1997 omaha_request_action.cc:636] Omaha request network transfer failed.
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077576 1997 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077587 1997 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077595 1997 update_attempter.cc:306] Processing Done.
Oct 8 19:34:52.077648 update_engine[1997]: E1008 19:34:52.077622 1997 update_attempter.cc:619] Update failed.
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077631 1997 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077638 1997 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Oct 8 19:34:52.077648 update_engine[1997]: I1008 19:34:52.077646 1997 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Oct 8 19:34:52.078020 update_engine[1997]: I1008 19:34:52.077772 1997 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Oct 8 19:34:52.078020 update_engine[1997]: I1008 19:34:52.077807 1997 omaha_request_action.cc:271] Posting an Omaha request to disabled
Oct 8 19:34:52.078020 update_engine[1997]: I1008 19:34:52.077816 1997 omaha_request_action.cc:272] Request:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]:
Oct 8 19:34:52.078020 update_engine[1997]: I1008 19:34:52.077826 1997 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Oct 8 19:34:52.078462 update_engine[1997]: I1008 19:34:52.078085 1997 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Oct 8 19:34:52.078462 update_engine[1997]: I1008 19:34:52.078357 1997 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Oct 8 19:34:52.078889 update_engine[1997]: E1008 19:34:52.078819 1997 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Oct 8 19:34:52.079193 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Oct 8 19:34:52.080165 locksmithd[2032]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079285 1997 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079303 1997 omaha_request_action.cc:617] Omaha request response:
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079314 1997 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079322 1997 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079329 1997 update_attempter.cc:306] Processing Done.
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079339 1997 update_attempter.cc:310] Error event sent.
Oct 8 19:34:52.080227 update_engine[1997]: I1008 19:34:52.079352 1997 update_check_scheduler.cc:74] Next update check in 47m41s
Oct 8 19:35:26.521276 systemd[1]: cri-containerd-a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e.scope: Deactivated successfully.
Oct 8 19:35:26.521792 systemd[1]: cri-containerd-a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e.scope: Consumed 12.431s CPU time.
Oct 8 19:35:26.595583 containerd[2024]: time="2024-10-08T19:35:26.594340480Z" level=info msg="shim disconnected" id=a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e namespace=k8s.io
Oct 8 19:35:26.595583 containerd[2024]: time="2024-10-08T19:35:26.594503368Z" level=warning msg="cleaning up after shim disconnected" id=a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e namespace=k8s.io
Oct 8 19:35:26.595583 containerd[2024]: time="2024-10-08T19:35:26.594533812Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:35:26.615548 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e-rootfs.mount: Deactivated successfully.
Oct 8 19:35:27.406394 systemd[1]: cri-containerd-a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a.scope: Deactivated successfully.
Oct 8 19:35:27.408361 systemd[1]: cri-containerd-a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a.scope: Consumed 8.300s CPU time, 20.2M memory peak, 0B memory swap peak.
Oct 8 19:35:27.477092 containerd[2024]: time="2024-10-08T19:35:27.476599612Z" level=info msg="shim disconnected" id=a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a namespace=k8s.io
Oct 8 19:35:27.477092 containerd[2024]: time="2024-10-08T19:35:27.476825512Z" level=warning msg="cleaning up after shim disconnected" id=a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a namespace=k8s.io
Oct 8 19:35:27.477092 containerd[2024]: time="2024-10-08T19:35:27.476867308Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:35:27.485296 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a-rootfs.mount: Deactivated successfully.
Oct 8 19:35:27.675438 kubelet[3361]: I1008 19:35:27.675302 3361 scope.go:117] "RemoveContainer" containerID="a214a4fb403544234d5ebdfbe0ed18265be6c4a4e840e128e08c14ce13a0187a"
Oct 8 19:35:27.678291 kubelet[3361]: I1008 19:35:27.677884 3361 scope.go:117] "RemoveContainer" containerID="a5c69f76936616831e4a873f734e53cb9cd3214f220eb2b74038c837a652a88e"
Oct 8 19:35:27.680614 containerd[2024]: time="2024-10-08T19:35:27.680547437Z" level=info msg="CreateContainer within sandbox \"6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Oct 8 19:35:27.682202 containerd[2024]: time="2024-10-08T19:35:27.681920033Z" level=info msg="CreateContainer within sandbox \"85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Oct 8 19:35:27.707031 containerd[2024]: time="2024-10-08T19:35:27.704947290Z" level=info msg="CreateContainer within sandbox \"6bd5bd36620cf2f446e6197213aad07cac08e76a41427120c4921cb5d26ab783\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9f81ea80b6689c89f5f7c6a19e30634aa6819bdcf3eea287b616734000eb9062\""
Oct 8 19:35:27.707031 containerd[2024]: time="2024-10-08T19:35:27.705931110Z" level=info msg="StartContainer for \"9f81ea80b6689c89f5f7c6a19e30634aa6819bdcf3eea287b616734000eb9062\""
Oct 8 19:35:27.714649 containerd[2024]: time="2024-10-08T19:35:27.714265266Z" level=info msg="CreateContainer within sandbox \"85385d8593afe428bdecb106bbd171b8daf4d18ce718d3b46320f49513bc5556\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c9b4a409c50029621b5bc2425a5843f879fe15cbbde4a6a1ee4cf44d158d0a4f\""
Oct 8 19:35:27.715669 containerd[2024]: time="2024-10-08T19:35:27.715088550Z" level=info msg="StartContainer for \"c9b4a409c50029621b5bc2425a5843f879fe15cbbde4a6a1ee4cf44d158d0a4f\""
Oct 8 19:35:27.821466 systemd[1]: Started cri-containerd-9f81ea80b6689c89f5f7c6a19e30634aa6819bdcf3eea287b616734000eb9062.scope - libcontainer container 9f81ea80b6689c89f5f7c6a19e30634aa6819bdcf3eea287b616734000eb9062.
Oct 8 19:35:27.830955 systemd[1]: Started cri-containerd-c9b4a409c50029621b5bc2425a5843f879fe15cbbde4a6a1ee4cf44d158d0a4f.scope - libcontainer container c9b4a409c50029621b5bc2425a5843f879fe15cbbde4a6a1ee4cf44d158d0a4f.
Oct 8 19:35:27.947295 containerd[2024]: time="2024-10-08T19:35:27.946926691Z" level=info msg="StartContainer for \"9f81ea80b6689c89f5f7c6a19e30634aa6819bdcf3eea287b616734000eb9062\" returns successfully"
Oct 8 19:35:27.960404 containerd[2024]: time="2024-10-08T19:35:27.960176683Z" level=info msg="StartContainer for \"c9b4a409c50029621b5bc2425a5843f879fe15cbbde4a6a1ee4cf44d158d0a4f\" returns successfully"
Oct 8 19:35:32.332077 kubelet[3361]: E1008 19:35:32.331942 3361 controller.go:195] "Failed to update lease" err="Put \"https://172.31.19.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-19-2?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Oct 8 19:35:32.497477 systemd[1]: cri-containerd-37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae.scope: Deactivated successfully.
Oct 8 19:35:32.498094 systemd[1]: cri-containerd-37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae.scope: Consumed 4.332s CPU time, 16.3M memory peak, 0B memory swap peak.
Oct 8 19:35:32.555842 containerd[2024]: time="2024-10-08T19:35:32.552485758Z" level=info msg="shim disconnected" id=37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae namespace=k8s.io
Oct 8 19:35:32.555842 containerd[2024]: time="2024-10-08T19:35:32.552565354Z" level=warning msg="cleaning up after shim disconnected" id=37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae namespace=k8s.io
Oct 8 19:35:32.555842 containerd[2024]: time="2024-10-08T19:35:32.552586714Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 8 19:35:32.555511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae-rootfs.mount: Deactivated successfully.
Oct 8 19:35:32.708616 kubelet[3361]: I1008 19:35:32.708051 3361 scope.go:117] "RemoveContainer" containerID="37eb74992f4a1143ae116dfe4941b723052048f9f60e2a5ffdd406124871dcae"
Oct 8 19:35:32.711800 containerd[2024]: time="2024-10-08T19:35:32.711723550Z" level=info msg="CreateContainer within sandbox \"b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Oct 8 19:35:32.735062 containerd[2024]: time="2024-10-08T19:35:32.733401478Z" level=info msg="CreateContainer within sandbox \"b5ef534596df88ecde3c6eda9358b275cc4c9b3063696ecfed4ddb24018246fa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8a26e5bce616d28f1856efdad72a49bd9716e969f43b2986af3f3711738212ac\""
Oct 8 19:35:32.738779 containerd[2024]: time="2024-10-08T19:35:32.738698627Z" level=info msg="StartContainer for \"8a26e5bce616d28f1856efdad72a49bd9716e969f43b2986af3f3711738212ac\""
Oct 8 19:35:32.831447 systemd[1]: Started cri-containerd-8a26e5bce616d28f1856efdad72a49bd9716e969f43b2986af3f3711738212ac.scope - libcontainer container 8a26e5bce616d28f1856efdad72a49bd9716e969f43b2986af3f3711738212ac.
Oct 8 19:35:32.911829 containerd[2024]: time="2024-10-08T19:35:32.911746223Z" level=info msg="StartContainer for \"8a26e5bce616d28f1856efdad72a49bd9716e969f43b2986af3f3711738212ac\" returns successfully"