Aug 5 21:35:51.229108 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Aug 5 21:35:51.229195 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024 Aug 5 21:35:51.229221 kernel: KASLR disabled due to lack of seed Aug 5 21:35:51.229238 kernel: efi: EFI v2.7 by EDK II Aug 5 21:35:51.229254 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7852ee18 Aug 5 21:35:51.229270 kernel: ACPI: Early table checksum verification disabled Aug 5 21:35:51.229287 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Aug 5 21:35:51.229303 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Aug 5 21:35:51.229319 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 5 21:35:51.229335 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Aug 5 21:35:51.229356 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 5 21:35:51.229372 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Aug 5 21:35:51.229387 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Aug 5 21:35:51.229403 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Aug 5 21:35:51.229421 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 5 21:35:51.229442 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Aug 5 21:35:51.229459 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Aug 5 21:35:51.229476 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Aug 5 21:35:51.229492 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Aug 5 21:35:51.229508 kernel: printk: bootconsole [uart0] enabled Aug 5 21:35:51.229524 kernel: NUMA: Failed to initialise from firmware Aug 5 21:35:51.229541 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Aug 5 21:35:51.229558 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Aug 5 21:35:51.229574 kernel: Zone ranges: Aug 5 21:35:51.229590 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Aug 5 21:35:51.229606 kernel: DMA32 empty Aug 5 21:35:51.229627 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Aug 5 21:35:51.229643 kernel: Movable zone start for each node Aug 5 21:35:51.229659 kernel: Early memory node ranges Aug 5 21:35:51.229676 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Aug 5 21:35:51.229692 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Aug 5 21:35:51.229708 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Aug 5 21:35:51.229725 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Aug 5 21:35:51.229741 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Aug 5 21:35:51.229757 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Aug 5 21:35:51.229773 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Aug 5 21:35:51.229789 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Aug 5 21:35:51.229806 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Aug 5 21:35:51.229826 kernel: On node 0, zone Normal: 8192 pages in 
unavailable ranges Aug 5 21:35:51.229843 kernel: psci: probing for conduit method from ACPI. Aug 5 21:35:51.229867 kernel: psci: PSCIv1.0 detected in firmware. Aug 5 21:35:51.229884 kernel: psci: Using standard PSCI v0.2 function IDs Aug 5 21:35:51.229901 kernel: psci: Trusted OS migration not required Aug 5 21:35:51.229923 kernel: psci: SMC Calling Convention v1.1 Aug 5 21:35:51.229941 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Aug 5 21:35:51.229958 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Aug 5 21:35:51.229975 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 5 21:35:51.229992 kernel: Detected PIPT I-cache on CPU0 Aug 5 21:35:51.230010 kernel: CPU features: detected: GIC system register CPU interface Aug 5 21:35:51.230027 kernel: CPU features: detected: Spectre-v2 Aug 5 21:35:51.230044 kernel: CPU features: detected: Spectre-v3a Aug 5 21:35:51.230061 kernel: CPU features: detected: Spectre-BHB Aug 5 21:35:51.230078 kernel: CPU features: detected: ARM erratum 1742098 Aug 5 21:35:51.230095 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Aug 5 21:35:51.230117 kernel: alternatives: applying boot alternatives Aug 5 21:35:51.232205 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:35:51.232240 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 21:35:51.232277 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 21:35:51.232296 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 21:35:51.232314 kernel: Fallback order for Node 0: 0 Aug 5 21:35:51.232332 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Aug 5 21:35:51.232352 kernel: Policy zone: Normal Aug 5 21:35:51.232369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 21:35:51.232387 kernel: software IO TLB: area num 2. Aug 5 21:35:51.232421 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Aug 5 21:35:51.232453 kernel: Memory: 3820536K/4030464K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 209928K reserved, 0K cma-reserved) Aug 5 21:35:51.232472 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 21:35:51.232490 kernel: trace event string verifier disabled Aug 5 21:35:51.232508 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 21:35:51.232526 kernel: rcu: RCU event tracing is enabled. Aug 5 21:35:51.232544 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 21:35:51.232562 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 21:35:51.232579 kernel: Tracing variant of Tasks RCU enabled. Aug 5 21:35:51.232597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 21:35:51.232616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 21:35:51.232633 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 5 21:35:51.232655 kernel: GICv3: 96 SPIs implemented Aug 5 21:35:51.232673 kernel: GICv3: 0 Extended SPIs implemented Aug 5 21:35:51.232690 kernel: Root IRQ handler: gic_handle_irq Aug 5 21:35:51.232707 kernel: GICv3: GICv3 features: 16 PPIs Aug 5 21:35:51.232725 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Aug 5 21:35:51.232742 kernel: ITS [mem 0x10080000-0x1009ffff] Aug 5 21:35:51.232760 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000c0000 (indirect, esz 8, psz 64K, shr 1) Aug 5 21:35:51.232778 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000d0000 (flat, esz 8, psz 64K, shr 1) Aug 5 21:35:51.232795 kernel: GICv3: using LPI property table @0x00000004000e0000 Aug 5 21:35:51.232813 kernel: ITS: Using hypervisor restricted LPI range [128] Aug 5 21:35:51.232830 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000f0000 Aug 5 21:35:51.232848 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 21:35:51.232870 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Aug 5 21:35:51.232888 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Aug 5 21:35:51.232906 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Aug 5 21:35:51.232923 kernel: Console: colour dummy device 80x25 Aug 5 21:35:51.232941 kernel: printk: console [tty1] enabled Aug 5 21:35:51.232959 kernel: ACPI: Core revision 20230628 Aug 5 21:35:51.232978 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Aug 5 21:35:51.232996 kernel: pid_max: default: 32768 minimum: 301 Aug 5 21:35:51.233013 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 21:35:51.233031 kernel: SELinux: Initializing. Aug 5 21:35:51.233053 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:35:51.233071 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:35:51.233089 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:35:51.233107 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:35:51.233154 kernel: rcu: Hierarchical SRCU implementation. Aug 5 21:35:51.233180 kernel: rcu: Max phase no-delay instances is 400. Aug 5 21:35:51.233199 kernel: Platform MSI: ITS@0x10080000 domain created Aug 5 21:35:51.233216 kernel: PCI/MSI: ITS@0x10080000 domain created Aug 5 21:35:51.233234 kernel: Remapping and enabling EFI services. Aug 5 21:35:51.233259 kernel: smp: Bringing up secondary CPUs ... Aug 5 21:35:51.233277 kernel: Detected PIPT I-cache on CPU1 Aug 5 21:35:51.233295 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Aug 5 21:35:51.233313 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400100000 Aug 5 21:35:51.233330 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Aug 5 21:35:51.233348 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 21:35:51.233365 kernel: SMP: Total of 2 processors activated. 
Aug 5 21:35:51.233383 kernel: CPU features: detected: 32-bit EL0 Support Aug 5 21:35:51.233400 kernel: CPU features: detected: 32-bit EL1 Support Aug 5 21:35:51.233423 kernel: CPU features: detected: CRC32 instructions Aug 5 21:35:51.233441 kernel: CPU: All CPU(s) started at EL1 Aug 5 21:35:51.233470 kernel: alternatives: applying system-wide alternatives Aug 5 21:35:51.233493 kernel: devtmpfs: initialized Aug 5 21:35:51.233512 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 21:35:51.233530 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 21:35:51.233549 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 21:35:51.233567 kernel: SMBIOS 3.0.0 present. Aug 5 21:35:51.233586 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Aug 5 21:35:51.233609 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 21:35:51.233627 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 5 21:35:51.233646 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 5 21:35:51.233665 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 5 21:35:51.233683 kernel: audit: initializing netlink subsys (disabled) Aug 5 21:35:51.233702 kernel: audit: type=2000 audit(0.295:1): state=initialized audit_enabled=0 res=1 Aug 5 21:35:51.233720 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 21:35:51.233745 kernel: cpuidle: using governor menu Aug 5 21:35:51.233763 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Aug 5 21:35:51.233782 kernel: ASID allocator initialised with 65536 entries Aug 5 21:35:51.233800 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 21:35:51.233820 kernel: Serial: AMBA PL011 UART driver Aug 5 21:35:51.233838 kernel: Modules: 17600 pages in range for non-PLT usage Aug 5 21:35:51.233857 kernel: Modules: 509120 pages in range for PLT usage Aug 5 21:35:51.233875 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 21:35:51.233894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 21:35:51.233916 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 5 21:35:51.233935 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 5 21:35:51.233954 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 21:35:51.233973 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 21:35:51.233991 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 5 21:35:51.234010 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 5 21:35:51.234028 kernel: ACPI: Added _OSI(Module Device) Aug 5 21:35:51.234046 kernel: ACPI: Added _OSI(Processor Device) Aug 5 21:35:51.234065 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 21:35:51.234087 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 21:35:51.234106 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 21:35:51.236167 kernel: ACPI: Interpreter enabled Aug 5 21:35:51.236218 kernel: ACPI: Using GIC for interrupt routing Aug 5 21:35:51.236238 kernel: ACPI: MCFG table detected, 1 entries Aug 5 21:35:51.236257 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Aug 5 21:35:51.236562 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 5 21:35:51.236777 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] 
Aug 5 21:35:51.236989 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Aug 5 21:35:51.237222 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Aug 5 21:35:51.237425 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Aug 5 21:35:51.237451 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Aug 5 21:35:51.237470 kernel: acpiphp: Slot [1] registered Aug 5 21:35:51.237489 kernel: acpiphp: Slot [2] registered Aug 5 21:35:51.237507 kernel: acpiphp: Slot [3] registered Aug 5 21:35:51.237525 kernel: acpiphp: Slot [4] registered Aug 5 21:35:51.237543 kernel: acpiphp: Slot [5] registered Aug 5 21:35:51.237568 kernel: acpiphp: Slot [6] registered Aug 5 21:35:51.237587 kernel: acpiphp: Slot [7] registered Aug 5 21:35:51.237605 kernel: acpiphp: Slot [8] registered Aug 5 21:35:51.237623 kernel: acpiphp: Slot [9] registered Aug 5 21:35:51.237642 kernel: acpiphp: Slot [10] registered Aug 5 21:35:51.237660 kernel: acpiphp: Slot [11] registered Aug 5 21:35:51.237678 kernel: acpiphp: Slot [12] registered Aug 5 21:35:51.237697 kernel: acpiphp: Slot [13] registered Aug 5 21:35:51.237715 kernel: acpiphp: Slot [14] registered Aug 5 21:35:51.237739 kernel: acpiphp: Slot [15] registered Aug 5 21:35:51.237757 kernel: acpiphp: Slot [16] registered Aug 5 21:35:51.237776 kernel: acpiphp: Slot [17] registered Aug 5 21:35:51.237794 kernel: acpiphp: Slot [18] registered Aug 5 21:35:51.237812 kernel: acpiphp: Slot [19] registered Aug 5 21:35:51.237830 kernel: acpiphp: Slot [20] registered Aug 5 21:35:51.237848 kernel: acpiphp: Slot [21] registered Aug 5 21:35:51.237866 kernel: acpiphp: Slot [22] registered Aug 5 21:35:51.237884 kernel: acpiphp: Slot [23] registered Aug 5 21:35:51.237903 kernel: acpiphp: Slot [24] registered Aug 5 21:35:51.237925 kernel: acpiphp: Slot [25] registered Aug 5 21:35:51.237944 kernel: acpiphp: Slot [26] registered Aug 5 21:35:51.237962 kernel: acpiphp: Slot [27] registered Aug 5 21:35:51.237980 kernel: acpiphp: Slot [28] registered Aug 5 21:35:51.237998 kernel: acpiphp: Slot [29] registered Aug 5 21:35:51.238016 kernel: acpiphp: Slot [30] registered Aug 5 21:35:51.238034 kernel: acpiphp: Slot [31] registered Aug 5 21:35:51.238053 kernel: PCI host bridge to bus 0000:00 Aug 5 21:35:51.238278 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Aug 5 21:35:51.238481 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Aug 5 21:35:51.238685 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Aug 5 21:35:51.238884 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Aug 5 21:35:51.239118 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Aug 5 21:35:51.241534 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Aug 5 21:35:51.241742 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Aug 5 21:35:51.241981 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Aug 5 21:35:51.242215 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Aug 5 21:35:51.242421 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 5 21:35:51.242661 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Aug 5 21:35:51.242903 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Aug 5 21:35:51.243109 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Aug 5 21:35:51.245487 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] 
Aug 5 21:35:51.245722 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 5 21:35:51.245932 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Aug 5 21:35:51.247201 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Aug 5 21:35:51.247446 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Aug 5 21:35:51.247649 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Aug 5 21:35:51.247854 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Aug 5 21:35:51.255597 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Aug 5 21:35:51.255838 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Aug 5 21:35:51.256017 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Aug 5 21:35:51.256044 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Aug 5 21:35:51.256064 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Aug 5 21:35:51.256083 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Aug 5 21:35:51.256102 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Aug 5 21:35:51.256120 kernel: iommu: Default domain type: Translated Aug 5 21:35:51.256172 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 5 21:35:51.256251 kernel: efivars: Registered efivars operations Aug 5 21:35:51.256272 kernel: vgaarb: loaded Aug 5 21:35:51.256292 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 5 21:35:51.256311 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 21:35:51.256330 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 21:35:51.256348 kernel: pnp: PnP ACPI init Aug 5 21:35:51.256574 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Aug 5 21:35:51.256603 kernel: pnp: PnP ACPI: found 1 devices Aug 5 21:35:51.256628 kernel: NET: Registered PF_INET protocol family Aug 5 21:35:51.256648 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 21:35:51.256667 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 5 21:35:51.256687 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 21:35:51.256705 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 21:35:51.256724 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 5 21:35:51.256742 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 5 21:35:51.256761 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:35:51.256780 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:35:51.256803 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 21:35:51.256822 kernel: PCI: CLS 0 bytes, default 64 Aug 5 21:35:51.256840 kernel: kvm [1]: HYP mode not available Aug 5 21:35:51.256858 kernel: Initialise system trusted keyrings Aug 5 21:35:51.256877 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 5 21:35:51.256895 kernel: Key type asymmetric registered Aug 5 21:35:51.256913 kernel: Asymmetric key parser 'x509' registered Aug 5 21:35:51.256932 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 5 21:35:51.256951 kernel: io scheduler mq-deadline registered Aug 5 21:35:51.256974 kernel: io scheduler kyber registered Aug 5 21:35:51.256992 kernel: io scheduler bfq registered Aug 5 21:35:51.259539 kernel: pl061_gpio 
ARMH0061:00: PL061 GPIO chip registered Aug 5 21:35:51.259588 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Aug 5 21:35:51.259608 kernel: ACPI: button: Power Button [PWRB] Aug 5 21:35:51.259629 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Aug 5 21:35:51.259648 kernel: ACPI: button: Sleep Button [SLPB] Aug 5 21:35:51.259667 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 21:35:51.259696 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Aug 5 21:35:51.259918 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Aug 5 21:35:51.259946 kernel: printk: console [ttyS0] disabled Aug 5 21:35:51.259966 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Aug 5 21:35:51.259985 kernel: printk: console [ttyS0] enabled Aug 5 21:35:51.260004 kernel: printk: bootconsole [uart0] disabled Aug 5 21:35:51.260022 kernel: thunder_xcv, ver 1.0 Aug 5 21:35:51.260041 kernel: thunder_bgx, ver 1.0 Aug 5 21:35:51.260059 kernel: nicpf, ver 1.0 Aug 5 21:35:51.260077 kernel: nicvf, ver 1.0 Aug 5 21:35:51.260384 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 5 21:35:51.260584 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:35:50 UTC (1722893750) Aug 5 21:35:51.260610 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 21:35:51.260629 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Aug 5 21:35:51.260648 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 5 21:35:51.260668 kernel: watchdog: Hard watchdog permanently disabled Aug 5 21:35:51.260686 kernel: NET: Registered PF_INET6 protocol family Aug 5 21:35:51.260705 kernel: Segment Routing with IPv6 Aug 5 21:35:51.260731 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 21:35:51.260749 kernel: NET: Registered PF_PACKET protocol family Aug 5 21:35:51.260769 kernel: Key type dns_resolver registered Aug 5 21:35:51.260790 kernel: registered taskstats version 1 Aug 5 21:35:51.260808 kernel: Loading compiled-in X.509 certificates Aug 5 21:35:51.260827 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09' Aug 5 21:35:51.260847 kernel: Key type .fscrypt registered Aug 5 21:35:51.260865 kernel: Key type fscrypt-provisioning registered Aug 5 21:35:51.260884 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 21:35:51.260908 kernel: ima: Allocated hash algorithm: sha1 Aug 5 21:35:51.260927 kernel: ima: No architecture policies found Aug 5 21:35:51.260946 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 5 21:35:51.260965 kernel: clk: Disabling unused clocks Aug 5 21:35:51.260983 kernel: Freeing unused kernel memory: 39040K Aug 5 21:35:51.261002 kernel: Run /init as init process Aug 5 21:35:51.261022 kernel: with arguments: Aug 5 21:35:51.261042 kernel: /init Aug 5 21:35:51.261060 kernel: with environment: Aug 5 21:35:51.261084 kernel: HOME=/ Aug 5 21:35:51.261104 kernel: TERM=linux Aug 5 21:35:51.261122 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 21:35:51.261181 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:35:51.261206 systemd[1]: Detected virtualization amazon. 
Aug 5 21:35:51.261244 systemd[1]: Detected architecture arm64. Aug 5 21:35:51.261266 systemd[1]: Running in initrd. Aug 5 21:35:51.261287 systemd[1]: No hostname configured, using default hostname. Aug 5 21:35:51.261316 systemd[1]: Hostname set to . Aug 5 21:35:51.261338 systemd[1]: Initializing machine ID from VM UUID. Aug 5 21:35:51.261360 systemd[1]: Queued start job for default target initrd.target. Aug 5 21:35:51.261382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:35:51.261403 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:35:51.261427 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 21:35:51.261448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:35:51.261476 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 21:35:51.261499 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 21:35:51.261524 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 21:35:51.261546 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 21:35:51.261568 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:35:51.261589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:35:51.261610 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:35:51.261637 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:35:51.261659 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:35:51.261680 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:35:51.261701 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:35:51.261721 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:35:51.261748 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 21:35:51.261768 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 21:35:51.261790 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:35:51.261811 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:35:51.261838 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:35:51.261858 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:35:51.261879 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 21:35:51.261899 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:35:51.261921 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 21:35:51.261941 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 21:35:51.261962 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:35:51.261982 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:35:51.262008 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:35:51.262030 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 21:35:51.262050 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 5 21:35:51.262379 systemd-journald[251]: Collecting audit messages is disabled. Aug 5 21:35:51.262446 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 21:35:51.262469 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 21:35:51.262490 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 21:35:51.262510 kernel: Bridge firewalling registered Aug 5 21:35:51.262530 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:35:51.262555 systemd-journald[251]: Journal started Aug 5 21:35:51.262593 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2a1feb7e7520f302c91a5a1753147f) is 8.0M, max 75.3M, 67.3M free. Aug 5 21:35:51.206456 systemd-modules-load[252]: Inserted module 'overlay' Aug 5 21:35:51.270279 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:35:51.250248 systemd-modules-load[252]: Inserted module 'br_netfilter' Aug 5 21:35:51.276769 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:51.280410 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:35:51.300404 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:35:51.313378 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:35:51.327435 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 21:35:51.345499 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:35:51.387284 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:35:51.398320 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:35:51.404023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:35:51.433410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:35:51.442511 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:35:51.460476 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 21:35:51.499724 dracut-cmdline[288]: dracut-dracut-053 Aug 5 21:35:51.507788 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:35:51.525274 systemd-resolved[286]: Positive Trust Anchors: Aug 5 21:35:51.525307 systemd-resolved[286]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:35:51.525382 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:35:51.677167 kernel: SCSI subsystem initialized Aug 5 21:35:51.687159 kernel: Loading iSCSI transport class v2.0-870. Aug 5 21:35:51.698171 kernel: iscsi: registered transport (tcp) Aug 5 21:35:51.721169 kernel: iscsi: registered transport (qla4xxx) Aug 5 21:35:51.721241 kernel: QLogic iSCSI HBA Driver Aug 5 21:35:51.753167 kernel: random: crng init done Aug 5 21:35:51.753336 systemd-resolved[286]: Defaulting to hostname 'linux'. Aug 5 21:35:51.758756 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:35:51.767906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:35:51.803239 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 21:35:51.820428 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 21:35:51.850594 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 21:35:51.850705 kernel: device-mapper: uevent: version 1.0.3 Aug 5 21:35:51.850735 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 21:35:51.921179 kernel: raid6: neonx8 gen() 6757 MB/s Aug 5 21:35:51.938162 kernel: raid6: neonx4 gen() 6558 MB/s Aug 5 21:35:51.955160 kernel: raid6: neonx2 gen() 5462 MB/s Aug 5 21:35:51.972162 kernel: raid6: neonx1 gen() 3924 MB/s Aug 5 21:35:51.989160 kernel: raid6: int64x8 gen() 3796 MB/s Aug 5 21:35:52.006162 kernel: raid6: int64x4 gen() 3711 MB/s Aug 5 21:35:52.023161 kernel: raid6: int64x2 gen() 3606 MB/s Aug 5 21:35:52.040858 kernel: raid6: int64x1 gen() 2770 MB/s Aug 5 21:35:52.040895 kernel: raid6: using algorithm neonx8 gen() 6757 MB/s Aug 5 21:35:52.058844 kernel: raid6: .... xor() 4844 MB/s, rmw enabled Aug 5 21:35:52.058886 kernel: raid6: using neon recovery algorithm Aug 5 21:35:52.067165 kernel: xor: measuring software checksum speed Aug 5 21:35:52.067220 kernel: 8regs : 10963 MB/sec Aug 5 21:35:52.070160 kernel: 32regs : 11936 MB/sec Aug 5 21:35:52.072282 kernel: arm64_neon : 9583 MB/sec Aug 5 21:35:52.072318 kernel: xor: using function: 32regs (11936 MB/sec) Aug 5 21:35:52.158186 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 21:35:52.180394 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:35:52.193458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:35:52.227205 systemd-udevd[470]: Using default interface naming scheme 'v255'. Aug 5 21:35:52.235411 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:35:52.255104 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 21:35:52.286332 dracut-pre-trigger[480]: rd.md=0: removing MD RAID activation Aug 5 21:35:52.342110 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Aug 5 21:35:52.355573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:35:52.489293 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:35:52.511085 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 21:35:52.559905 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 21:35:52.570353 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:35:52.580711 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:35:52.590320 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:35:52.608935 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 21:35:52.659829 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:35:52.711217 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Aug 5 21:35:52.711330 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Aug 5 21:35:52.745100 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 5 21:35:52.749897 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 5 21:35:52.750174 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:65:6a:e5:66:75 Aug 5 21:35:52.750418 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Aug 5 21:35:52.750447 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 5 21:35:52.729281 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:35:52.755256 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 5 21:35:52.729533 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:35:52.735099 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:35:52.740176 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:35:52.740450 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:52.743828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:35:52.765323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:35:52.784223 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 21:35:52.784299 kernel: GPT:9289727 != 16777215 Aug 5 21:35:52.784326 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 21:35:52.784350 kernel: GPT:9289727 != 16777215 Aug 5 21:35:52.784374 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 21:35:52.785661 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:35:52.792665 (udev-worker)[540]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:35:52.803544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:52.814493 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:35:52.867757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:35:52.940709 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. 
Aug 5 21:35:52.961206 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by (udev-worker) (516) Aug 5 21:35:53.005279 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/nvme0n1p3 scanned by (udev-worker) (528) Aug 5 21:35:53.034343 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 5 21:35:53.091847 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 21:35:53.107694 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 5 21:35:53.118624 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 5 21:35:53.130520 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 21:35:53.150216 disk-uuid[660]: Primary Header is updated. Aug 5 21:35:53.150216 disk-uuid[660]: Secondary Entries is updated. Aug 5 21:35:53.150216 disk-uuid[660]: Secondary Header is updated. Aug 5 21:35:53.165236 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:35:53.174173 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:35:53.182166 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:35:54.181186 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 21:35:54.182400 disk-uuid[661]: The operation has completed successfully. Aug 5 21:35:54.375781 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 21:35:54.376477 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 21:35:54.425453 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 21:35:54.447542 sh[1005]: Success Aug 5 21:35:54.477186 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 21:35:54.592566 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 21:35:54.605343 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 21:35:54.611737 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 21:35:54.646545 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99 Aug 5 21:35:54.646626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:35:54.646668 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 21:35:54.649420 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 21:35:54.649458 kernel: BTRFS info (device dm-0): using free space tree Aug 5 21:35:54.840175 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 5 21:35:54.863462 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 21:35:54.868854 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 21:35:54.882422 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 21:35:54.899396 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 5 21:35:54.911938 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:54.912016 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:35:54.912050 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:35:54.918180 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:35:54.934608 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 21:35:54.941209 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:54.956074 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 21:35:54.967525 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 21:35:55.068543 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:35:55.095587 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:35:55.149274 systemd-networkd[1197]: lo: Link UP Aug 5 21:35:55.149298 systemd-networkd[1197]: lo: Gained carrier Aug 5 21:35:55.151855 systemd-networkd[1197]: Enumeration completed Aug 5 21:35:55.152374 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:35:55.152574 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:35:55.152582 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:35:55.158105 systemd-networkd[1197]: eth0: Link UP Aug 5 21:35:55.158116 systemd-networkd[1197]: eth0: Gained carrier Aug 5 21:35:55.158156 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:35:55.162185 systemd[1]: Reached target network.target - Network. Aug 5 21:35:55.179262 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.22.168/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 21:35:55.399746 ignition[1110]: Ignition 2.19.0 Aug 5 21:35:55.400335 ignition[1110]: Stage: fetch-offline Aug 5 21:35:55.400882 ignition[1110]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:55.405227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:35:55.400906 ignition[1110]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:55.401386 ignition[1110]: Ignition finished successfully Aug 5 21:35:55.427274 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 5 21:35:55.452859 ignition[1208]: Ignition 2.19.0 Aug 5 21:35:55.452883 ignition[1208]: Stage: fetch Aug 5 21:35:55.453577 ignition[1208]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:55.453603 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:55.454594 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:55.469765 ignition[1208]: PUT result: OK Aug 5 21:35:55.472934 ignition[1208]: parsed url from cmdline: "" Aug 5 21:35:55.473080 ignition[1208]: no config URL provided Aug 5 21:35:55.473100 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:35:55.473145 ignition[1208]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:35:55.473202 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:55.482079 ignition[1208]: PUT result: OK Aug 5 21:35:55.482192 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 5 21:35:55.486439 ignition[1208]: GET result: OK Aug 5 21:35:55.488097 ignition[1208]: parsing config with SHA512: 9f73ea0356da62f4d52b75eedb810d996797256efe025a4f499ae3c005af328c2dfdf852fdb85320dd6ce95f74e9c75454596fdcbf7a1557825551b88b826031 Aug 5 21:35:55.499484 unknown[1208]: fetched base config from "system" Aug 5 21:35:55.499512 unknown[1208]: fetched base config from "system" Aug 5 21:35:55.499527 unknown[1208]: fetched user config from "aws" Aug 5 21:35:55.503370 ignition[1208]: fetch: fetch complete Aug 5 21:35:55.503382 ignition[1208]: fetch: fetch passed Aug 5 21:35:55.503474 ignition[1208]: Ignition finished successfully Aug 5 21:35:55.514004 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 21:35:55.528702 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 21:35:55.558343 ignition[1215]: Ignition 2.19.0 Aug 5 21:35:55.558372 ignition[1215]: Stage: kargs Aug 5 21:35:55.559025 ignition[1215]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:55.559051 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:55.559223 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:55.562484 ignition[1215]: PUT result: OK Aug 5 21:35:55.581443 ignition[1215]: kargs: kargs passed Aug 5 21:35:55.581596 ignition[1215]: Ignition finished successfully Aug 5 21:35:55.587847 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 21:35:55.609537 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 21:35:55.634432 ignition[1222]: Ignition 2.19.0 Aug 5 21:35:55.634452 ignition[1222]: Stage: disks Aug 5 21:35:55.635053 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:55.635077 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:55.635243 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:55.637726 ignition[1222]: PUT result: OK Aug 5 21:35:55.650756 ignition[1222]: disks: disks passed Aug 5 21:35:55.650913 ignition[1222]: Ignition finished successfully Aug 5 21:35:55.652961 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 21:35:55.657636 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 21:35:55.661282 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 21:35:55.661925 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:35:55.677818 systemd[1]: Reached target sysinit.target - System Initialization. 
Aug 5 21:35:55.680178 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:35:55.692457 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 21:35:55.754397 systemd-fsck[1231]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 21:35:55.762282 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 21:35:55.778544 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 21:35:55.869176 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none. Aug 5 21:35:55.869954 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 21:35:55.875013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 21:35:55.906332 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:35:55.916608 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 21:35:55.923376 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 21:35:55.937387 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1250) Aug 5 21:35:55.923482 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 21:35:55.948647 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:55.948684 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:35:55.948710 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:35:55.923632 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:35:55.956321 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 21:35:55.960403 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 21:35:55.973670 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:35:55.975861 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 21:35:56.394265 systemd-networkd[1197]: eth0: Gained IPv6LL Aug 5 21:35:56.464058 initrd-setup-root[1274]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 21:35:56.475116 initrd-setup-root[1281]: cut: /sysroot/etc/group: No such file or directory Aug 5 21:35:56.485568 initrd-setup-root[1288]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 21:35:56.506333 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 21:35:56.873041 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 21:35:56.889370 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 21:35:56.899638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 21:35:56.918060 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 21:35:56.922421 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:56.968628 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 5 21:35:56.981035 ignition[1363]: INFO : Ignition 2.19.0 Aug 5 21:35:56.981035 ignition[1363]: INFO : Stage: mount Aug 5 21:35:56.986789 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:56.986789 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:56.986789 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:56.996308 ignition[1363]: INFO : PUT result: OK Aug 5 21:35:57.005474 ignition[1363]: INFO : mount: mount passed Aug 5 21:35:57.008199 ignition[1363]: INFO : Ignition finished successfully Aug 5 21:35:57.011208 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 21:35:57.024336 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 21:35:57.054973 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:35:57.075177 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1375) Aug 5 21:35:57.075240 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:35:57.078547 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:35:57.078586 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 21:35:57.085167 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 21:35:57.088582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 21:35:57.121473 ignition[1392]: INFO : Ignition 2.19.0 Aug 5 21:35:57.121473 ignition[1392]: INFO : Stage: files Aug 5 21:35:57.125958 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:57.125958 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:57.125958 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:57.138841 ignition[1392]: INFO : PUT result: OK Aug 5 21:35:57.143686 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping Aug 5 21:35:57.156480 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 21:35:57.156480 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 21:35:57.176731 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 21:35:57.180205 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 21:35:57.183890 unknown[1392]: wrote ssh authorized keys file for user: core Aug 5 21:35:57.187557 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 21:35:57.190922 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 21:35:57.195207 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 5 21:35:57.195207 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:35:57.195207 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 21:35:57.267288 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Aug 5 21:35:57.384234 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:35:57.384234 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:57.396555 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Aug 5 21:35:57.885606 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 5 21:35:58.278188 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Aug 5 21:35:58.278188 ignition[1392]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:35:58.291770 ignition[1392]: INFO : files: files passed Aug 5 21:35:58.291770 ignition[1392]: INFO : Ignition finished successfully Aug 5 21:35:58.287760 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 21:35:58.356551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 21:35:58.364916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 21:35:58.384673 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 21:35:58.385383 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 21:35:58.403739 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:58.403739 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:58.412900 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:35:58.422192 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:35:58.427290 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 21:35:58.451522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 21:35:58.517286 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 21:35:58.517700 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 21:35:58.527435 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 21:35:58.527600 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 21:35:58.541355 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 21:35:58.557450 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 21:35:58.583667 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:35:58.599452 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:35:58.625755 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:35:58.632898 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:35:58.636658 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:35:58.642111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:35:58.642366 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 21:35:58.648996 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:35:58.651486 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:35:58.658588 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:35:58.661446 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:35:58.664434 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:35:58.674257 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:35:58.676735 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:35:58.679825 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 21:35:58.689389 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:35:58.692366 systemd[1]: Stopped target swap.target - Swaps. Aug 5 21:35:58.697596 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:35:58.697828 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:35:58.700887 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:35:58.710315 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:35:58.713346 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:35:58.717649 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:35:58.721155 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:35:58.721406 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:35:58.730887 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 21:35:58.731615 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:35:58.739428 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:35:58.740111 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:35:58.753468 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 21:35:58.755461 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 21:35:58.755751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:35:58.767696 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:35:58.776360 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:35:58.778271 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:35:58.787656 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:35:58.787900 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:35:58.809938 ignition[1445]: INFO : Ignition 2.19.0 Aug 5 21:35:58.813751 ignition[1445]: INFO : Stage: umount Aug 5 21:35:58.815915 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:35:58.818715 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 21:35:58.818715 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 21:35:58.824698 ignition[1445]: INFO : PUT result: OK Aug 5 21:35:58.827011 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:35:58.827335 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Aug 5 21:35:58.838929 ignition[1445]: INFO : umount: umount passed Aug 5 21:35:58.840932 ignition[1445]: INFO : Ignition finished successfully Aug 5 21:35:58.855433 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:35:58.858021 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:35:58.868433 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 21:35:58.870726 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:35:58.870815 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:35:58.886907 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:35:58.888197 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:35:58.891467 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 21:35:58.891560 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 21:35:58.893552 systemd[1]: Stopped target network.target - Network. Aug 5 21:35:58.896300 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:35:58.896419 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:35:58.898843 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:35:58.900733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 21:35:58.905227 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:35:58.910108 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 21:35:58.912383 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 21:35:58.914646 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 21:35:58.914727 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:35:58.916956 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 21:35:58.917025 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:35:58.919330 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 21:35:58.919421 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 21:35:58.921699 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 21:35:58.921780 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 21:35:58.925115 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 21:35:58.929804 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 21:35:58.935237 systemd-networkd[1197]: eth0: DHCPv6 lease lost Aug 5 21:35:58.942235 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 21:35:58.942433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 21:35:58.948763 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:35:58.949352 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:35:58.979943 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 21:35:58.980075 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:35:58.983813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 21:35:58.983929 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 21:35:59.011679 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 21:35:59.017641 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Aug 5 21:35:59.017765 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:35:59.021769 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:35:59.032358 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 21:35:59.035219 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 21:35:59.052925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:35:59.053593 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:35:59.064008 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 21:35:59.066330 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 21:35:59.071821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 21:35:59.071920 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:35:59.075280 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 21:35:59.075549 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:35:59.080896 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 21:35:59.081105 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 21:35:59.088259 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 21:35:59.088396 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 21:35:59.103346 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 21:35:59.103425 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:35:59.106013 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 21:35:59.106103 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:35:59.108965 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 21:35:59.109050 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 21:35:59.124839 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:35:59.124933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:35:59.137435 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 21:35:59.139888 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 21:35:59.140069 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:35:59.149107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:35:59.149237 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:35:59.170483 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 21:35:59.170701 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 21:35:59.174936 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 21:35:59.192437 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 21:35:59.219424 systemd[1]: Switching root. Aug 5 21:35:59.250451 systemd-journald[251]: Journal stopped Aug 5 21:36:02.251911 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Aug 5 21:36:02.252063 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 21:36:02.252109 kernel: SELinux: policy capability open_perms=1 Aug 5 21:36:02.252171 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 21:36:02.252206 kernel: SELinux: policy capability always_check_network=0 Aug 5 21:36:02.252238 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 21:36:02.252270 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 21:36:02.252305 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 21:36:02.252336 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 21:36:02.252364 kernel: audit: type=1403 audit(1722893760.537:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 21:36:02.252404 systemd[1]: Successfully loaded SELinux policy in 69.798ms. Aug 5 21:36:02.252455 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.756ms. Aug 5 21:36:02.252497 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:36:02.252531 systemd[1]: Detected virtualization amazon. Aug 5 21:36:02.252562 systemd[1]: Detected architecture arm64. Aug 5 21:36:02.252592 systemd[1]: Detected first boot. Aug 5 21:36:02.252629 systemd[1]: Initializing machine ID from VM UUID. Aug 5 21:36:02.252665 zram_generator::config[1504]: No configuration found. Aug 5 21:36:02.252703 systemd[1]: Populated /etc with preset unit settings. Aug 5 21:36:02.252735 systemd[1]: Queued start job for default target multi-user.target. Aug 5 21:36:02.252766 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Aug 5 21:36:02.252801 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 21:36:02.252842 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 21:36:02.252875 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 21:36:02.252913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 21:36:02.252947 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 21:36:02.252982 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 21:36:02.253015 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 21:36:02.253047 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 21:36:02.253079 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:36:02.253111 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:36:02.257371 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 21:36:02.257423 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 21:36:02.257469 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 21:36:02.257652 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:36:02.257684 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Aug 5 21:36:02.257715 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:36:02.257804 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 21:36:02.257851 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:36:02.257884 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:36:02.257917 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:36:02.257958 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:36:02.257989 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 21:36:02.258020 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 21:36:02.258056 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 21:36:02.258087 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 21:36:02.258159 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:36:02.258199 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:36:02.258244 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:36:02.258281 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 21:36:02.258323 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 21:36:02.258363 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 21:36:02.258399 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 21:36:02.258434 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 21:36:02.258470 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 21:36:02.258503 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 21:36:02.258538 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 21:36:02.258574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:36:02.258633 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:36:02.258679 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 21:36:02.258711 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:36:02.258746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:36:02.258779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:36:02.258815 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 21:36:02.258847 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:36:02.258889 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 21:36:02.258922 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 5 21:36:02.258963 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 5 21:36:02.259006 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:36:02.259042 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Aug 5 21:36:02.259078 kernel: fuse: init (API version 7.39) Aug 5 21:36:02.259109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 21:36:02.271212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 21:36:02.271271 kernel: loop: module loaded Aug 5 21:36:02.271306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:36:02.271398 systemd-journald[1607]: Collecting audit messages is disabled. Aug 5 21:36:02.271474 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 21:36:02.271509 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 21:36:02.271543 kernel: ACPI: bus type drm_connector registered Aug 5 21:36:02.271574 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 21:36:02.271605 systemd-journald[1607]: Journal started Aug 5 21:36:02.271657 systemd-journald[1607]: Runtime Journal (/run/log/journal/ec2a1feb7e7520f302c91a5a1753147f) is 8.0M, max 75.3M, 67.3M free. Aug 5 21:36:02.291304 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:36:02.283573 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 21:36:02.288698 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 21:36:02.296629 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 21:36:02.304222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:36:02.311523 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 21:36:02.311931 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 21:36:02.324116 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:36:02.324767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:36:02.330567 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:36:02.330949 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:36:02.336517 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:36:02.336904 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:36:02.343318 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 21:36:02.343696 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 21:36:02.349334 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:36:02.349769 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:36:02.359578 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 21:36:02.367602 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:36:02.373881 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 21:36:02.380722 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 21:36:02.409557 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 21:36:02.423444 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 21:36:02.439416 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 21:36:02.446512 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Aug 5 21:36:02.466467 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 21:36:02.482503 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 21:36:02.487523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:36:02.501414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 21:36:02.508715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:36:02.516591 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:36:02.534990 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 21:36:02.538099 systemd-journald[1607]: Time spent on flushing to /var/log/journal/ec2a1feb7e7520f302c91a5a1753147f is 69.332ms for 892 entries. Aug 5 21:36:02.538099 systemd-journald[1607]: System Journal (/var/log/journal/ec2a1feb7e7520f302c91a5a1753147f) is 8.0M, max 195.6M, 187.6M free. Aug 5 21:36:02.645773 systemd-journald[1607]: Received client request to flush runtime journal. Aug 5 21:36:02.552918 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 21:36:02.562506 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 21:36:02.603576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:36:02.611697 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 21:36:02.626836 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 21:36:02.637658 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 21:36:02.652840 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 21:36:02.700714 udevadm[1666]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 21:36:02.706106 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:36:02.730966 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Aug 5 21:36:02.731007 systemd-tmpfiles[1656]: ACLs are not supported, ignoring. Aug 5 21:36:02.743225 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:36:02.767597 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 21:36:02.860000 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 21:36:02.874443 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 21:36:02.909499 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Aug 5 21:36:02.909535 systemd-tmpfiles[1679]: ACLs are not supported, ignoring. Aug 5 21:36:02.921471 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:36:03.594003 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 21:36:03.606446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:36:03.668618 systemd-udevd[1685]: Using default interface naming scheme 'v255'. Aug 5 21:36:03.729162 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Aug 5 21:36:03.760503 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:36:03.786424 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 21:36:03.845931 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 5 21:36:03.873167 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1696) Aug 5 21:36:03.878891 (udev-worker)[1704]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:36:03.999324 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 21:36:04.165501 systemd-networkd[1694]: lo: Link UP Aug 5 21:36:04.165524 systemd-networkd[1694]: lo: Gained carrier Aug 5 21:36:04.175649 systemd-networkd[1694]: Enumeration completed Aug 5 21:36:04.175850 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:36:04.182979 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:36:04.183005 systemd-networkd[1694]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:36:04.184157 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (1697) Aug 5 21:36:04.189226 systemd-networkd[1694]: eth0: Link UP Aug 5 21:36:04.189556 systemd-networkd[1694]: eth0: Gained carrier Aug 5 21:36:04.189604 systemd-networkd[1694]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:36:04.207747 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 21:36:04.225310 systemd-networkd[1694]: eth0: DHCPv4 address 172.31.22.168/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 21:36:04.269934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:36:04.441024 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 21:36:04.446026 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:36:04.465611 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 21:36:04.485580 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 21:36:04.523152 lvm[1814]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:36:04.562831 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 21:36:04.569598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:36:04.577667 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 21:36:04.593242 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:36:04.630940 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 21:36:04.637493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 21:36:04.640729 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 21:36:04.640913 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:36:04.646829 systemd[1]: Reached target machines.target - Containers. 
Aug 5 21:36:04.650818 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 21:36:04.660635 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 21:36:04.669356 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 21:36:04.672388 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:36:04.680706 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 21:36:04.694455 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 21:36:04.703415 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 21:36:04.709911 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 21:36:04.748393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 21:36:04.758193 kernel: loop0: detected capacity change from 0 to 59688 Aug 5 21:36:04.761266 kernel: block loop0: the capability attribute has been deprecated. Aug 5 21:36:04.774416 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 21:36:04.781414 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 21:36:04.838904 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 21:36:04.876161 kernel: loop1: detected capacity change from 0 to 113712 Aug 5 21:36:04.976186 kernel: loop2: detected capacity change from 0 to 51896 Aug 5 21:36:05.078325 kernel: loop3: detected capacity change from 0 to 193208 Aug 5 21:36:05.126239 kernel: loop4: detected capacity change from 0 to 59688 Aug 5 21:36:05.139177 kernel: loop5: detected capacity change from 0 to 113712 Aug 5 21:36:05.152176 kernel: loop6: detected capacity change from 0 to 51896 Aug 5 21:36:05.163167 kernel: loop7: detected capacity change from 0 to 193208 Aug 5 21:36:05.177118 (sd-merge)[1839]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 5 21:36:05.178165 (sd-merge)[1839]: Merged extensions into '/usr'. Aug 5 21:36:05.186410 systemd[1]: Reloading requested from client PID 1825 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 21:36:05.186624 systemd[1]: Reloading... Aug 5 21:36:05.305194 zram_generator::config[1865]: No configuration found. Aug 5 21:36:05.482842 systemd-networkd[1694]: eth0: Gained IPv6LL Aug 5 21:36:05.605388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:05.750314 systemd[1]: Reloading finished in 562 ms. Aug 5 21:36:05.775374 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 21:36:05.779703 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 21:36:05.803730 systemd[1]: Starting ensure-sysext.service... Aug 5 21:36:05.809559 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:36:05.828411 systemd[1]: Reloading requested from client PID 1924 ('systemctl') (unit ensure-sysext.service)... Aug 5 21:36:05.828443 systemd[1]: Reloading... 
Aug 5 21:36:05.895472 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 21:36:05.896117 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 21:36:05.899676 systemd-tmpfiles[1925]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 21:36:05.900325 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Aug 5 21:36:05.900478 systemd-tmpfiles[1925]: ACLs are not supported, ignoring. Aug 5 21:36:05.908635 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:36:05.908662 systemd-tmpfiles[1925]: Skipping /boot Aug 5 21:36:05.928855 systemd-tmpfiles[1925]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:36:05.928888 systemd-tmpfiles[1925]: Skipping /boot Aug 5 21:36:05.998307 ldconfig[1821]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 21:36:06.031175 zram_generator::config[1958]: No configuration found. Aug 5 21:36:06.273429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:06.417869 systemd[1]: Reloading finished in 588 ms. Aug 5 21:36:06.444554 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 21:36:06.455245 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:36:06.474460 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:36:06.484466 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 21:36:06.496476 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 21:36:06.510874 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:36:06.524552 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 21:36:06.551168 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:36:06.564105 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:36:06.582038 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:36:06.602703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:36:06.608832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:36:06.620876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:36:06.621287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:36:06.642776 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 21:36:06.655865 augenrules[2039]: No rules Aug 5 21:36:06.663809 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:36:06.670058 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:36:06.673738 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:36:06.680514 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 5 21:36:06.682648 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:36:06.709408 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:36:06.717655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:36:06.729744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:36:06.743078 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:36:06.750436 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:36:06.755755 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 21:36:06.768494 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 21:36:06.776962 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:36:06.786385 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:36:06.793430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:36:06.793794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:36:06.812707 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:36:06.815507 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:36:06.826950 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 21:36:06.844736 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:36:06.857617 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:36:06.867631 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:36:06.882632 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:36:06.891020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:36:06.896921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:36:06.897350 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 21:36:06.902817 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 21:36:06.907436 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 21:36:06.916841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:36:06.917285 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:36:06.928778 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:36:06.930945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:36:06.935940 systemd-resolved[2018]: Positive Trust Anchors:
Aug 5 21:36:06.936798 systemd-resolved[2018]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:36:06.936957 systemd-resolved[2018]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:36:06.944845 systemd[1]: Finished ensure-sysext.service. Aug 5 21:36:06.951690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:36:06.952068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:36:06.960207 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:36:06.960590 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:36:06.964643 systemd-resolved[2018]: Defaulting to hostname 'linux'. Aug 5 21:36:06.976866 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:36:06.982310 systemd[1]: Reached target network.target - Network. Aug 5 21:36:06.985208 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 21:36:06.988468 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:36:06.991803 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:36:06.991855 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:36:06.994297 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 21:36:06.997068 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 21:36:07.000363 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 21:36:07.003048 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 21:36:07.005780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 21:36:07.008642 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 21:36:07.008696 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:36:07.010692 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:36:07.013540 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 21:36:07.019565 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 21:36:07.024841 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 21:36:07.028490 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:36:07.030552 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 21:36:07.033266 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:36:07.035590 systemd[1]: Reached target basic.target - Basic System.
Aug 5 21:36:07.040536 systemd[1]: System is tainted: cgroupsv1 Aug 5 21:36:07.040619 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:36:07.040669 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:36:07.049293 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 21:36:07.060414 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 21:36:07.071350 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 21:36:07.088340 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 21:36:07.096450 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 21:36:07.102526 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 21:36:07.123283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:07.125309 jq[2093]: false Aug 5 21:36:07.149052 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 21:36:07.170380 systemd[1]: Started ntpd.service - Network Time Service. Aug 5 21:36:07.176909 dbus-daemon[2092]: [system] SELinux support is enabled Aug 5 21:36:07.193349 dbus-daemon[2092]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1694 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 5 21:36:07.197368 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 21:36:07.213497 coreos-metadata[2090]: Aug 05 21:36:07.213 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 21:36:07.218720 coreos-metadata[2090]: Aug 05 21:36:07.218 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 5 21:36:07.217357 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Aug 5 21:36:07.228302 coreos-metadata[2090]: Aug 05 21:36:07.223 INFO Fetch successful Aug 5 21:36:07.228302 coreos-metadata[2090]: Aug 05 21:36:07.224 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 5 21:36:07.230610 coreos-metadata[2090]: Aug 05 21:36:07.230 INFO Fetch successful Aug 5 21:36:07.230610 coreos-metadata[2090]: Aug 05 21:36:07.230 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 5 21:36:07.231567 extend-filesystems[2094]: Found loop4 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found loop5 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found loop6 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found loop7 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p2 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p3 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found usr Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p1 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p4 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p6 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p7 Aug 5 21:36:07.243922 extend-filesystems[2094]: Found nvme0n1p9 Aug 5 21:36:07.243922 extend-filesystems[2094]: Checking size of /dev/nvme0n1p9 Aug 5 21:36:07.234354 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.233 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.233 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.243 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.243 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.248 INFO Fetch failed with 404: resource not found Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.248 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.255 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.259 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.259 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.270 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.271 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.280 INFO Fetch successful Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.281 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 5 21:36:07.317003 coreos-metadata[2090]: Aug 05 21:36:07.292 INFO Fetch successful Aug 5 21:36:07.265060 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 21:36:07.281581 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 21:36:07.327437 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 5 21:36:07.333023 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 21:36:07.349796 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 21:36:07.357383 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 21:36:07.367182 extend-filesystems[2094]: Resized partition /dev/nvme0n1p9 Aug 5 21:36:07.363695 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 21:36:07.359284 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:53:08 UTC 2024 (1): Starting Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:53:08 UTC 2024 (1): Starting Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: ---------------------------------------------------- Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: ntp-4 is maintained by Network Time Foundation, Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: corporation. Support and training for ntp-4 are Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: available at https://www.nwtime.org/support Aug 5 21:36:07.380101 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: ---------------------------------------------------- Aug 5 21:36:07.380721 extend-filesystems[2129]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 21:36:07.403059 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 5 21:36:07.359336 ntpd[2101]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: proto: precision = 0.096 usec (-23) Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: basedate set to 2024-07-24 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: gps base set to 2024-07-28 (week 2325) Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen normally on 3 eth0 172.31.22.168:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen normally on 4 lo [::1]:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listen normally on 5 eth0 [fe80::465:6aff:fee5:6675%2]:123 Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: Listening on routing socket on fd #22 for interface updates Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:36:07.431495 ntpd[2101]: 5 Aug 21:36:07 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:36:07.359356 ntpd[2101]: ---------------------------------------------------- Aug 5 21:36:07.359375 ntpd[2101]: ntp-4 is maintained by Network Time Foundation, Aug 5 21:36:07.359395 ntpd[2101]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 21:36:07.359414 ntpd[2101]: corporation. Support and training for ntp-4 are Aug 5 21:36:07.359433 ntpd[2101]: available at https://www.nwtime.org/support Aug 5 21:36:07.359451 ntpd[2101]: ---------------------------------------------------- Aug 5 21:36:07.382440 ntpd[2101]: proto: precision = 0.096 usec (-23) Aug 5 21:36:07.391652 ntpd[2101]: basedate set to 2024-07-24 Aug 5 21:36:07.391687 ntpd[2101]: gps base set to 2024-07-28 (week 2325) Aug 5 21:36:07.404736 ntpd[2101]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 21:36:07.404822 ntpd[2101]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 21:36:07.405090 ntpd[2101]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 21:36:07.408206 ntpd[2101]: Listen normally on 3 eth0 172.31.22.168:123 Aug 5 21:36:07.408318 ntpd[2101]: Listen normally on 4 lo [::1]:123 Aug 5 21:36:07.408412 ntpd[2101]: Listen normally on 5 eth0 [fe80::465:6aff:fee5:6675%2]:123 Aug 5 21:36:07.408493 ntpd[2101]: Listening on routing socket on fd #22 for interface updates Aug 5 21:36:07.418093 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:36:07.423937 ntpd[2101]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 21:36:07.443995 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 21:36:07.445702 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 21:36:07.452002 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 21:36:07.452597 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 21:36:07.490225 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 5 21:36:07.506407 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 21:36:07.516264 jq[2127]: true Aug 5 21:36:07.532897 update_engine[2126]: I0805 21:36:07.521739 2126 main.cc:92] Flatcar Update Engine starting Aug 5 21:36:07.517112 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 21:36:07.552053 update_engine[2126]: I0805 21:36:07.548292 2126 update_check_scheduler.cc:74] Next update check in 2m12s Aug 5 21:36:07.552105 extend-filesystems[2129]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 5 21:36:07.552105 extend-filesystems[2129]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 21:36:07.552105 extend-filesystems[2129]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 5 21:36:07.535678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 21:36:07.567857 extend-filesystems[2094]: Resized filesystem in /dev/nvme0n1p9 Aug 5 21:36:07.567108 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 21:36:07.574822 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 21:36:07.645828 (ntainerd)[2153]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 21:36:07.666988 jq[2152]: true Aug 5 21:36:07.669904 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 5 21:36:07.753535 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 5 21:36:07.754583 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 5 21:36:07.769918 tar[2140]: linux-arm64/helm Aug 5 21:36:07.767670 systemd[1]: Started update-engine.service - Update Engine. Aug 5 21:36:07.789078 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 21:36:07.794179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 21:36:07.794349 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 21:36:07.794391 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 21:36:07.849433 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 5 21:36:07.857691 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 21:36:07.857745 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 21:36:07.889091 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (2183) Aug 5 21:36:07.892088 systemd-logind[2118]: Watching system buttons on /dev/input/event0 (Power Button) Aug 5 21:36:07.892163 systemd-logind[2118]: Watching system buttons on /dev/input/event1 (Sleep Button) Aug 5 21:36:07.949077 bash[2210]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:36:07.892611 systemd-logind[2118]: New seat seat0. Aug 5 21:36:07.951673 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 21:36:07.956411 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 21:36:07.961220 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 21:36:08.043714 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 21:36:08.069393 systemd[1]: Starting sshkeys.service... Aug 5 21:36:08.165656 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 5 21:36:08.197382 amazon-ssm-agent[2194]: Initializing new seelog logger Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: New Seelog Logger Creation Complete Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 processing appconfig overrides Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 processing appconfig overrides Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 processing appconfig overrides Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO Proxy environment variables: Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 21:36:08.270500 amazon-ssm-agent[2194]: 2024/08/05 21:36:08 processing appconfig overrides Aug 5 21:36:08.223079 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 5 21:36:08.318630 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO http_proxy: Aug 5 21:36:08.420266 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO no_proxy: Aug 5 21:36:08.526324 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO https_proxy: Aug 5 21:36:08.598290 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 5 21:36:08.598921 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 5 21:36:08.604876 dbus-daemon[2092]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=2197 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 5 21:36:08.629710 systemd[1]: Starting polkit.service - Authorization Manager... Aug 5 21:36:08.637952 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO Checking if agent identity type OnPrem can be assumed Aug 5 21:36:08.653204 coreos-metadata[2249]: Aug 05 21:36:08.652 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 21:36:08.661542 coreos-metadata[2249]: Aug 05 21:36:08.660 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 5 21:36:08.665767 coreos-metadata[2249]: Aug 05 21:36:08.665 INFO Fetch successful Aug 5 21:36:08.665767 coreos-metadata[2249]: Aug 05 21:36:08.665 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 5 21:36:08.670797 coreos-metadata[2249]: Aug 05 21:36:08.670 INFO Fetch successful Aug 5 21:36:08.676324 unknown[2249]: wrote ssh authorized keys file for user: core Aug 5 21:36:08.690193 locksmithd[2226]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 21:36:08.713741 polkitd[2301]: Started polkitd version 121 Aug 5 21:36:08.743440 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO Checking if agent identity type EC2 can be assumed Aug 5 21:36:08.765576 polkitd[2301]: Loading rules from directory /etc/polkit-1/rules.d Aug 5 21:36:08.767412 polkitd[2301]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 5 21:36:08.771921 update-ssh-keys[2306]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:36:08.775608 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 5 21:36:08.792694 polkitd[2301]: Finished loading, compiling and executing 2 rules Aug 5 21:36:08.793830 systemd[1]: Finished sshkeys.service. Aug 5 21:36:08.813579 dbus-daemon[2092]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 5 21:36:08.816229 polkitd[2301]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 5 21:36:08.817294 systemd[1]: Started polkit.service - Authorization Manager. Aug 5 21:36:08.842290 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO Agent will take identity from EC2 Aug 5 21:36:08.856719 containerd[2153]: time="2024-08-05T21:36:08.853095111Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 21:36:08.924643 systemd-resolved[2018]: System hostname changed to 'ip-172-31-22-168'. 
Aug 5 21:36:08.924657 systemd-hostnamed[2197]: Hostname set to (transient) Aug 5 21:36:08.947684 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:36:09.042153 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:36:09.090897 containerd[2153]: time="2024-08-05T21:36:09.089977392Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 21:36:09.090897 containerd[2153]: time="2024-08-05T21:36:09.090050460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.095567 containerd[2153]: time="2024-08-05T21:36:09.095488140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:36:09.095567 containerd[2153]: time="2024-08-05T21:36:09.095556348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.095966232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.096012072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097231740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097358640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097386360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097533480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097906644Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097940280Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.097963812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.098232816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:36:09.098814 containerd[2153]: time="2024-08-05T21:36:09.098265720Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 21:36:09.099367 containerd[2153]: time="2024-08-05T21:36:09.098378004Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 21:36:09.099367 containerd[2153]: time="2024-08-05T21:36:09.098401728Z" level=info msg="metadata content store policy set" policy=shared Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.110911752Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.110985432Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111020508Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111098184Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111214056Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111242688Z" level=info msg="NRI interface is disabled by configuration." Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111278016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111524076Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111557652Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111586980Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111623580Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111659040Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111697044Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112328 containerd[2153]: time="2024-08-05T21:36:09.111727740Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.111758064Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.111791232Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.111822300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.111854352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.111881604Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 21:36:09.112947 containerd[2153]: time="2024-08-05T21:36:09.112059720Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114553848Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114635136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114669528Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114727584Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114845220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114876096Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114906768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114935148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114966120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.114995400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.115028220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.115056672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.115089828Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 21:36:09.118214 containerd[2153]: time="2024-08-05T21:36:09.115422576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115460688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115490796Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115519752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115548708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115581252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115609560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.118902 containerd[2153]: time="2024-08-05T21:36:09.115635900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 21:36:09.119241 containerd[2153]: time="2024-08-05T21:36:09.116114964Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 21:36:09.119241 containerd[2153]: time="2024-08-05T21:36:09.116318436Z" level=info msg="Connect containerd service" Aug 5 21:36:09.119241 containerd[2153]: time="2024-08-05T21:36:09.116378172Z" level=info msg="using legacy CRI server" Aug 5 21:36:09.119241 containerd[2153]: time="2024-08-05T21:36:09.116395416Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 21:36:09.119241 containerd[2153]: time="2024-08-05T21:36:09.116546412Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 21:36:09.119641 containerd[2153]: time="2024-08-05T21:36:09.119511744Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:36:09.119641 containerd[2153]: time="2024-08-05T21:36:09.119574192Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 21:36:09.119641 containerd[2153]: time="2024-08-05T21:36:09.119610300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 21:36:09.119641 containerd[2153]: time="2024-08-05T21:36:09.119636136Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 21:36:09.119808 containerd[2153]: time="2024-08-05T21:36:09.119664588Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122398908Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122516544Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122687700Z" level=info msg="Start subscribing containerd event" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122767800Z" level=info msg="Start recovering state" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122875968Z" level=info msg="Start event monitor" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122899188Z" level=info msg="Start snapshots syncer" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122920236Z" level=info msg="Start cni network conf syncer for default" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.122937408Z" level=info msg="Start streaming server" Aug 5 21:36:09.130263 containerd[2153]: time="2024-08-05T21:36:09.123427476Z" level=info msg="containerd successfully booted in 0.279822s" Aug 5 21:36:09.123245 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 5 21:36:09.141262 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 21:36:09.240467 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 21:36:09.341196 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Aug 5 21:36:09.441530 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 21:36:09.541749 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 21:36:09.642201 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [Registrar] Starting registrar module Aug 5 21:36:09.742794 amazon-ssm-agent[2194]: 2024-08-05 21:36:08 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 21:36:09.854612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:09.873926 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:36:09.929731 tar[2140]: linux-arm64/LICENSE Aug 5 21:36:09.930376 tar[2140]: linux-arm64/README.md Aug 5 21:36:09.977450 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 21:36:10.053806 amazon-ssm-agent[2194]: 2024-08-05 21:36:10 INFO [EC2Identity] EC2 registration was successful. Aug 5 21:36:10.096044 amazon-ssm-agent[2194]: 2024-08-05 21:36:10 INFO [CredentialRefresher] credentialRefresher has started Aug 5 21:36:10.096180 amazon-ssm-agent[2194]: 2024-08-05 21:36:10 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 21:36:10.096180 amazon-ssm-agent[2194]: 2024-08-05 21:36:10 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 21:36:10.155100 amazon-ssm-agent[2194]: 2024-08-05 21:36:10 INFO [CredentialRefresher] Next credential rotation will be in 31.4999778419 minutes Aug 5 21:36:10.709486 kubelet[2358]: E0805 21:36:10.709345 2358 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:36:10.715352 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:36:10.715824 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:36:10.967480 sshd_keygen[2141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 21:36:11.007819 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 21:36:11.023802 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 21:36:11.042779 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 21:36:11.043411 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 21:36:11.052818 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 21:36:11.085328 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 21:36:11.097785 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 21:36:11.112868 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 21:36:11.119643 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 21:36:11.122855 systemd[1]: Reached target multi-user.target - Multi-User System. 
Aug 5 21:36:11.126685 systemd[1]: Startup finished in 10.890s (kernel) + 10.657s (userspace) = 21.547s. Aug 5 21:36:11.149909 amazon-ssm-agent[2194]: 2024-08-05 21:36:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 21:36:11.251483 amazon-ssm-agent[2194]: 2024-08-05 21:36:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2395) started Aug 5 21:36:11.353309 amazon-ssm-agent[2194]: 2024-08-05 21:36:11 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 21:36:14.109670 systemd-resolved[2018]: Clock change detected. Flushing caches. Aug 5 21:36:15.163014 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 21:36:15.172221 systemd[1]: Started sshd@0-172.31.22.168:22-139.178.68.195:33208.service - OpenSSH per-connection server daemon (139.178.68.195:33208). Aug 5 21:36:15.358828 sshd[2409]: Accepted publickey for core from 139.178.68.195 port 33208 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:15.362418 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:15.381895 systemd-logind[2118]: New session 1 of user core. Aug 5 21:36:15.383003 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 21:36:15.390233 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 21:36:15.428201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 21:36:15.439041 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 21:36:15.459220 (systemd)[2415]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:15.676663 systemd[2415]: Queued start job for default target default.target. Aug 5 21:36:15.677925 systemd[2415]: Created slice app.slice - User Application Slice. Aug 5 21:36:15.678320 systemd[2415]: Reached target paths.target - Paths. Aug 5 21:36:15.678411 systemd[2415]: Reached target timers.target - Timers. Aug 5 21:36:15.688019 systemd[2415]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 21:36:15.703160 systemd[2415]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 21:36:15.703428 systemd[2415]: Reached target sockets.target - Sockets. Aug 5 21:36:15.703617 systemd[2415]: Reached target basic.target - Basic System. Aug 5 21:36:15.703852 systemd[2415]: Reached target default.target - Main User Target. Aug 5 21:36:15.704018 systemd[2415]: Startup finished in 231ms. Aug 5 21:36:15.704386 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 21:36:15.711286 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 21:36:15.861113 systemd[1]: Started sshd@1-172.31.22.168:22-139.178.68.195:33218.service - OpenSSH per-connection server daemon (139.178.68.195:33218). Aug 5 21:36:16.036324 sshd[2427]: Accepted publickey for core from 139.178.68.195 port 33218 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:16.038934 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:16.047904 systemd-logind[2118]: New session 2 of user core. Aug 5 21:36:16.055384 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 5 21:36:16.187075 sshd[2427]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:16.192625 systemd[1]: sshd@1-172.31.22.168:22-139.178.68.195:33218.service: Deactivated successfully. Aug 5 21:36:16.200577 systemd-logind[2118]: Session 2 logged out. Waiting for processes to exit. Aug 5 21:36:16.201869 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 21:36:16.203815 systemd-logind[2118]: Removed session 2. Aug 5 21:36:16.216231 systemd[1]: Started sshd@2-172.31.22.168:22-139.178.68.195:33230.service - OpenSSH per-connection server daemon (139.178.68.195:33230). Aug 5 21:36:16.397933 sshd[2435]: Accepted publickey for core from 139.178.68.195 port 33230 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:16.400847 sshd[2435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:16.408814 systemd-logind[2118]: New session 3 of user core. Aug 5 21:36:16.419314 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 21:36:16.539057 sshd[2435]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:16.546965 systemd-logind[2118]: Session 3 logged out. Waiting for processes to exit. Aug 5 21:36:16.547954 systemd[1]: sshd@2-172.31.22.168:22-139.178.68.195:33230.service: Deactivated successfully. Aug 5 21:36:16.553310 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 21:36:16.554917 systemd-logind[2118]: Removed session 3. Aug 5 21:36:16.574171 systemd[1]: Started sshd@3-172.31.22.168:22-139.178.68.195:33236.service - OpenSSH per-connection server daemon (139.178.68.195:33236). Aug 5 21:36:16.741399 sshd[2443]: Accepted publickey for core from 139.178.68.195 port 33236 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:16.743865 sshd[2443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:16.753280 systemd-logind[2118]: New session 4 of user core. Aug 5 21:36:16.762289 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 21:36:16.894048 sshd[2443]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:16.899110 systemd[1]: sshd@3-172.31.22.168:22-139.178.68.195:33236.service: Deactivated successfully. Aug 5 21:36:16.905223 systemd-logind[2118]: Session 4 logged out. Waiting for processes to exit. Aug 5 21:36:16.907048 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 21:36:16.908733 systemd-logind[2118]: Removed session 4. Aug 5 21:36:16.926206 systemd[1]: Started sshd@4-172.31.22.168:22-139.178.68.195:33248.service - OpenSSH per-connection server daemon (139.178.68.195:33248). Aug 5 21:36:17.091814 sshd[2451]: Accepted publickey for core from 139.178.68.195 port 33248 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:17.094674 sshd[2451]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:17.104081 systemd-logind[2118]: New session 5 of user core. Aug 5 21:36:17.112218 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 21:36:17.228190 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 21:36:17.228777 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:36:17.250627 sudo[2455]: pam_unix(sudo:session): session closed for user root Aug 5 21:36:17.274335 sshd[2451]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:17.283183 systemd[1]: sshd@4-172.31.22.168:22-139.178.68.195:33248.service: Deactivated successfully. 
Aug 5 21:36:17.288469 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 21:36:17.290212 systemd-logind[2118]: Session 5 logged out. Waiting for processes to exit. Aug 5 21:36:17.292706 systemd-logind[2118]: Removed session 5. Aug 5 21:36:17.309212 systemd[1]: Started sshd@5-172.31.22.168:22-139.178.68.195:33264.service - OpenSSH per-connection server daemon (139.178.68.195:33264). Aug 5 21:36:17.473336 sshd[2460]: Accepted publickey for core from 139.178.68.195 port 33264 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:17.475829 sshd[2460]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:17.484068 systemd-logind[2118]: New session 6 of user core. Aug 5 21:36:17.493217 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 21:36:17.600314 sudo[2465]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 21:36:17.600856 sudo[2465]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:36:17.607641 sudo[2465]: pam_unix(sudo:session): session closed for user root Aug 5 21:36:17.619056 sudo[2464]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 21:36:17.619633 sudo[2464]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:36:17.646248 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 21:36:17.652169 auditctl[2468]: No rules Aug 5 21:36:17.653190 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 21:36:17.653681 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 21:36:17.664460 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:36:17.718629 augenrules[2487]: No rules Aug 5 21:36:17.722026 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:36:17.726622 sudo[2464]: pam_unix(sudo:session): session closed for user root Aug 5 21:36:17.749957 sshd[2460]: pam_unix(sshd:session): session closed for user core Aug 5 21:36:17.756605 systemd-logind[2118]: Session 6 logged out. Waiting for processes to exit. Aug 5 21:36:17.757306 systemd[1]: sshd@5-172.31.22.168:22-139.178.68.195:33264.service: Deactivated successfully. Aug 5 21:36:17.763346 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 21:36:17.765157 systemd-logind[2118]: Removed session 6. Aug 5 21:36:17.782193 systemd[1]: Started sshd@6-172.31.22.168:22-139.178.68.195:33276.service - OpenSSH per-connection server daemon (139.178.68.195:33276). Aug 5 21:36:17.946580 sshd[2496]: Accepted publickey for core from 139.178.68.195 port 33276 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:36:17.949290 sshd[2496]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:36:17.958667 systemd-logind[2118]: New session 7 of user core. Aug 5 21:36:17.965212 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 21:36:18.072078 sudo[2500]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 21:36:18.072665 sudo[2500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:36:18.257175 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 5 21:36:18.270438 (dockerd)[2510]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 21:36:18.703951 dockerd[2510]: time="2024-08-05T21:36:18.703851058Z" level=info msg="Starting up" Aug 5 21:36:19.585415 dockerd[2510]: time="2024-08-05T21:36:19.585342587Z" level=info msg="Loading containers: start." Aug 5 21:36:19.763914 kernel: Initializing XFRM netlink socket Aug 5 21:36:19.817928 (udev-worker)[2523]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:36:19.914158 systemd-networkd[1694]: docker0: Link UP Aug 5 21:36:19.942072 dockerd[2510]: time="2024-08-05T21:36:19.941838852Z" level=info msg="Loading containers: done." Aug 5 21:36:20.048499 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2444066915-merged.mount: Deactivated successfully. Aug 5 21:36:20.052776 dockerd[2510]: time="2024-08-05T21:36:20.052007937Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 21:36:20.052776 dockerd[2510]: time="2024-08-05T21:36:20.052327065Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 21:36:20.052776 dockerd[2510]: time="2024-08-05T21:36:20.052522809Z" level=info msg="Daemon has completed initialization" Aug 5 21:36:20.107770 dockerd[2510]: time="2024-08-05T21:36:20.107100921Z" level=info msg="API listen on /run/docker.sock" Aug 5 21:36:20.109463 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 21:36:20.556404 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 21:36:20.564523 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:21.285147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:21.305343 containerd[2153]: time="2024-08-05T21:36:21.304430915Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 21:36:21.310550 (kubelet)[2654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:36:21.430308 kubelet[2654]: E0805 21:36:21.430197 2654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:36:21.438287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:36:21.438654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:36:22.153347 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789775703.mount: Deactivated successfully. 
Aug 5 21:36:23.908404 containerd[2153]: time="2024-08-05T21:36:23.908311912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:23.910545 containerd[2153]: time="2024-08-05T21:36:23.910473952Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=31601516" Aug 5 21:36:23.911879 containerd[2153]: time="2024-08-05T21:36:23.911819440Z" level=info msg="ImageCreate event name:\"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:23.917885 containerd[2153]: time="2024-08-05T21:36:23.917769688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:23.920565 containerd[2153]: time="2024-08-05T21:36:23.920495932Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"31598316\" in 2.615991289s" Aug 5 21:36:23.921296 containerd[2153]: time="2024-08-05T21:36:23.920747560Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:57305d93b5cb5db7c2dd71c2936b30c6c300a568c571d915f30e2677e4472260\"" Aug 5 21:36:23.966272 containerd[2153]: time="2024-08-05T21:36:23.966085660Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 21:36:25.959418 containerd[2153]: time="2024-08-05T21:36:25.959338482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:25.961379 containerd[2153]: time="2024-08-05T21:36:25.961309122Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=29018270" Aug 5 21:36:25.962842 containerd[2153]: time="2024-08-05T21:36:25.962779710Z" level=info msg="ImageCreate event name:\"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:25.970649 containerd[2153]: time="2024-08-05T21:36:25.970550898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:25.973591 containerd[2153]: time="2024-08-05T21:36:25.972869142Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"30505537\" in 2.006725282s" Aug 5 21:36:25.973591 containerd[2153]: time="2024-08-05T21:36:25.972933930Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:fc5c912cb9569e3e61d6507db0c88360a3e23d7e0cfc589aefe633e02aed582a\"" Aug 5 21:36:26.016928 
containerd[2153]: time="2024-08-05T21:36:26.016882107Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 21:36:27.241338 containerd[2153]: time="2024-08-05T21:36:27.241212149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:27.243631 containerd[2153]: time="2024-08-05T21:36:27.243538265Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=15534520" Aug 5 21:36:27.245190 containerd[2153]: time="2024-08-05T21:36:27.245104421Z" level=info msg="ImageCreate event name:\"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:27.251068 containerd[2153]: time="2024-08-05T21:36:27.250987805Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:27.254751 containerd[2153]: time="2024-08-05T21:36:27.254224325Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"17021805\" in 1.237111818s" Aug 5 21:36:27.254751 containerd[2153]: time="2024-08-05T21:36:27.254286293Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:662db3bc8add7dd68943303fde6906c5c4b372a71ed52107b4272181f3041869\"" Aug 5 21:36:27.296231 containerd[2153]: time="2024-08-05T21:36:27.296112641Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 21:36:28.604377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2645857017.mount: Deactivated successfully. 
Aug 5 21:36:29.138311 containerd[2153]: time="2024-08-05T21:36:29.137665734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.139815 containerd[2153]: time="2024-08-05T21:36:29.139228326Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=24977919" Aug 5 21:36:29.140840 containerd[2153]: time="2024-08-05T21:36:29.140777826Z" level=info msg="ImageCreate event name:\"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.145621 containerd[2153]: time="2024-08-05T21:36:29.145537590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.147275 containerd[2153]: time="2024-08-05T21:36:29.147185790Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"24976938\" in 1.851006489s" Aug 5 21:36:29.147476 containerd[2153]: time="2024-08-05T21:36:29.147274734Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:d3c27a9ad523d0e17d8e5f3f587a49f9c4b611f30f1851fe0bc1240e53a2084b\"" Aug 5 21:36:29.185799 containerd[2153]: time="2024-08-05T21:36:29.185744202Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 21:36:29.687014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362640555.mount: Deactivated successfully. 
Aug 5 21:36:29.695356 containerd[2153]: time="2024-08-05T21:36:29.695279397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.697383 containerd[2153]: time="2024-08-05T21:36:29.697315593Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Aug 5 21:36:29.699235 containerd[2153]: time="2024-08-05T21:36:29.699163485Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.703653 containerd[2153]: time="2024-08-05T21:36:29.703588053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:29.705580 containerd[2153]: time="2024-08-05T21:36:29.705413457Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 519.428175ms" Aug 5 21:36:29.705580 containerd[2153]: time="2024-08-05T21:36:29.705472221Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Aug 5 21:36:29.746916 containerd[2153]: time="2024-08-05T21:36:29.746844801Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 21:36:30.469148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013090019.mount: Deactivated successfully. Aug 5 21:36:31.556409 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 21:36:31.563395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:33.159500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:33.184459 (kubelet)[2793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:36:33.282162 kubelet[2793]: E0805 21:36:33.281747 2793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:36:33.287766 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:36:33.288154 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 5 21:36:34.445695 containerd[2153]: time="2024-08-05T21:36:34.445588189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:34.448484 containerd[2153]: time="2024-08-05T21:36:34.448386517Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Aug 5 21:36:34.450206 containerd[2153]: time="2024-08-05T21:36:34.450129169Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:34.457511 containerd[2153]: time="2024-08-05T21:36:34.457458889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:34.459458 containerd[2153]: time="2024-08-05T21:36:34.459214153Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 4.712301876s" Aug 5 21:36:34.459458 containerd[2153]: time="2024-08-05T21:36:34.459281641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Aug 5 21:36:34.501176 containerd[2153]: time="2024-08-05T21:36:34.501073489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 21:36:35.119211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4251826860.mount: Deactivated successfully. 
Aug 5 21:36:35.560996 containerd[2153]: time="2024-08-05T21:36:35.560926262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:35.562701 containerd[2153]: time="2024-08-05T21:36:35.562634138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Aug 5 21:36:35.564295 containerd[2153]: time="2024-08-05T21:36:35.564201722Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:35.575957 containerd[2153]: time="2024-08-05T21:36:35.575583446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:36:35.579063 containerd[2153]: time="2024-08-05T21:36:35.579008222Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.077843113s" Aug 5 21:36:35.579262 containerd[2153]: time="2024-08-05T21:36:35.579228854Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Aug 5 21:36:38.710459 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 5 21:36:43.306520 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 21:36:43.317687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:43.591522 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 21:36:43.591849 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 21:36:43.593144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:43.609941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:43.663787 systemd[1]: Reloading requested from client PID 2918 ('systemctl') (unit session-7.scope)... Aug 5 21:36:43.664013 systemd[1]: Reloading... Aug 5 21:36:43.878842 zram_generator::config[2959]: No configuration found. Aug 5 21:36:44.146869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:44.308422 systemd[1]: Reloading finished in 643 ms. Aug 5 21:36:44.389160 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 21:36:44.389371 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 21:36:44.390258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:44.400347 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:44.756118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 21:36:44.779434 (kubelet)[3030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:36:44.865462 kubelet[3030]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:44.866275 kubelet[3030]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:36:44.866275 kubelet[3030]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:44.870776 kubelet[3030]: I0805 21:36:44.870155 3030 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:36:46.665378 kubelet[3030]: I0805 21:36:46.665306 3030 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:36:46.665378 kubelet[3030]: I0805 21:36:46.665370 3030 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:36:46.666345 kubelet[3030]: I0805 21:36:46.665798 3030 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:36:46.704590 kubelet[3030]: I0805 21:36:46.704294 3030 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:46.707692 kubelet[3030]: E0805 21:36:46.707621 3030 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.22.168:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.718750 kubelet[3030]: W0805 21:36:46.718434 3030 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:36:46.719842 kubelet[3030]: I0805 21:36:46.719808 3030 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:36:46.720663 kubelet[3030]: I0805 21:36:46.720628 3030 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:36:46.721054 kubelet[3030]: I0805 21:36:46.721023 3030 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:36:46.721253 kubelet[3030]: I0805 21:36:46.721079 3030 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:36:46.721253 kubelet[3030]: I0805 21:36:46.721101 3030 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:36:46.721347 kubelet[3030]: I0805 21:36:46.721300 3030 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:46.725070 kubelet[3030]: I0805 21:36:46.724344 3030 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:36:46.725070 kubelet[3030]: I0805 21:36:46.724415 3030 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:36:46.725070 kubelet[3030]: I0805 21:36:46.724520 3030 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:36:46.725070 kubelet[3030]: I0805 21:36:46.724548 3030 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:36:46.727145 kubelet[3030]: W0805 21:36:46.727051 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.22.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-168&limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.727366 kubelet[3030]: E0805 21:36:46.727347 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-168&limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.727598 kubelet[3030]: W0805 21:36:46.727553 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.22.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: 
connection refused Aug 5 21:36:46.728823 kubelet[3030]: E0805 21:36:46.728076 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.728823 kubelet[3030]: I0805 21:36:46.728263 3030 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:36:46.731332 kubelet[3030]: W0805 21:36:46.731295 3030 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 21:36:46.733041 kubelet[3030]: I0805 21:36:46.732981 3030 server.go:1232] "Started kubelet" Aug 5 21:36:46.735363 kubelet[3030]: I0805 21:36:46.735177 3030 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:36:46.737057 kubelet[3030]: I0805 21:36:46.736607 3030 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:36:46.738739 kubelet[3030]: I0805 21:36:46.738656 3030 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:36:46.739415 kubelet[3030]: I0805 21:36:46.739387 3030 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:36:46.740644 kubelet[3030]: E0805 21:36:46.739761 3030 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-22-168.17e8f2c9f8098fae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-22-168", UID:"ip-172-31-22-168", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-22-168"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 36, 46, 732939182, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 36, 46, 732939182, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-22-168"}': 'Post "https://172.31.22.168:6443/api/v1/namespaces/default/events": dial tcp 172.31.22.168:6443: connect: connection refused'(may retry after sleeping) Aug 5 21:36:46.743775 kubelet[3030]: I0805 21:36:46.742469 3030 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:36:46.743775 kubelet[3030]: E0805 21:36:46.743189 3030 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:36:46.743775 kubelet[3030]: E0805 21:36:46.743240 3030 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:36:46.750476 kubelet[3030]: E0805 21:36:46.750440 3030 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ip-172-31-22-168\" not found" Aug 5 21:36:46.751803 kubelet[3030]: I0805 21:36:46.751769 3030 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:36:46.752182 kubelet[3030]: I0805 21:36:46.752155 3030 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:36:46.752419 kubelet[3030]: I0805 21:36:46.752400 3030 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:36:46.753257 kubelet[3030]: W0805 21:36:46.753180 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.22.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.754050 kubelet[3030]: E0805 21:36:46.753536 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.759251 kubelet[3030]: E0805 21:36:46.759040 3030 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": dial tcp 172.31.22.168:6443: connect: connection refused" interval="200ms" Aug 5 21:36:46.793857 kubelet[3030]: I0805 21:36:46.793149 3030 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:36:46.798688 kubelet[3030]: I0805 21:36:46.798530 3030 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 21:36:46.798688 kubelet[3030]: I0805 21:36:46.798685 3030 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:36:46.799173 kubelet[3030]: I0805 21:36:46.799135 3030 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:36:46.800524 kubelet[3030]: E0805 21:36:46.800488 3030 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:36:46.803079 kubelet[3030]: W0805 21:36:46.801380 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.22.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.803355 kubelet[3030]: E0805 21:36:46.803332 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:46.858049 kubelet[3030]: I0805 21:36:46.858012 3030 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:46.858916 kubelet[3030]: E0805 21:36:46.858889 3030 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.168:6443/api/v1/nodes\": dial tcp 172.31.22.168:6443: connect: connection refused" node="ip-172-31-22-168" Aug 5 21:36:46.868577 kubelet[3030]: I0805 21:36:46.868522 3030 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:36:46.868577 kubelet[3030]: I0805 21:36:46.868564 3030 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:36:46.868792 kubelet[3030]: I0805 21:36:46.868598 3030 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:46.872209 kubelet[3030]: I0805 21:36:46.872155 3030 policy_none.go:49] "None policy: Start" Aug 5 21:36:46.873224 kubelet[3030]: I0805 21:36:46.873197 3030 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:36:46.874033 kubelet[3030]: I0805 21:36:46.873556 3030 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:36:46.882746 kubelet[3030]: I0805 21:36:46.881610 3030 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:36:46.882746 kubelet[3030]: I0805 21:36:46.882023 3030 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:36:46.889417 kubelet[3030]: E0805 21:36:46.889382 3030 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-22-168\" not found" Aug 5 21:36:46.904184 kubelet[3030]: I0805 21:36:46.904159 3030 topology_manager.go:215] "Topology Admit Handler" podUID="349185eff3c69cc080b8107f6ffcfc10" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-168" Aug 5 21:36:46.906231 kubelet[3030]: I0805 21:36:46.906177 3030 topology_manager.go:215] "Topology Admit Handler" podUID="4611830be925cbf7a41a9a02f51fb778" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-168" Aug 5 21:36:46.909060 kubelet[3030]: I0805 21:36:46.909028 3030 topology_manager.go:215] "Topology Admit Handler" podUID="f6a607a86b55d5c805333c8754f7c774" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:46.959791 kubelet[3030]: E0805 21:36:46.959629 
3030 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": dial tcp 172.31.22.168:6443: connect: connection refused" interval="400ms" Aug 5 21:36:47.054101 kubelet[3030]: I0805 21:36:47.054013 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-ca-certs\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:47.054101 kubelet[3030]: I0805 21:36:47.054085 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:47.054492 kubelet[3030]: I0805 21:36:47.054136 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:47.054492 kubelet[3030]: I0805 21:36:47.054185 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:47.054492 kubelet[3030]: I0805 21:36:47.054230 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/349185eff3c69cc080b8107f6ffcfc10-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-168\" (UID: \"349185eff3c69cc080b8107f6ffcfc10\") " pod="kube-system/kube-scheduler-ip-172-31-22-168" Aug 5 21:36:47.054492 kubelet[3030]: I0805 21:36:47.054272 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:47.054492 kubelet[3030]: I0805 21:36:47.054314 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:47.054802 kubelet[3030]: I0805 21:36:47.054359 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " 
pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:47.054802 kubelet[3030]: I0805 21:36:47.054403 3030 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:47.062406 kubelet[3030]: I0805 21:36:47.061932 3030 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:47.062655 kubelet[3030]: E0805 21:36:47.062632 3030 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.168:6443/api/v1/nodes\": dial tcp 172.31.22.168:6443: connect: connection refused" node="ip-172-31-22-168" Aug 5 21:36:47.218333 containerd[2153]: time="2024-08-05T21:36:47.218175480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-168,Uid:4611830be925cbf7a41a9a02f51fb778,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:47.220470 containerd[2153]: time="2024-08-05T21:36:47.219965244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-168,Uid:349185eff3c69cc080b8107f6ffcfc10,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:47.226457 containerd[2153]: time="2024-08-05T21:36:47.226298388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-168,Uid:f6a607a86b55d5c805333c8754f7c774,Namespace:kube-system,Attempt:0,}" Aug 5 21:36:47.360422 kubelet[3030]: E0805 21:36:47.360373 3030 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": dial tcp 172.31.22.168:6443: connect: connection refused" interval="800ms" Aug 5 21:36:47.465788 kubelet[3030]: I0805 21:36:47.465553 3030 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:47.466540 kubelet[3030]: E0805 21:36:47.466517 3030 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.168:6443/api/v1/nodes\": dial tcp 172.31.22.168:6443: connect: connection refused" node="ip-172-31-22-168" Aug 5 21:36:47.635675 kubelet[3030]: W0805 21:36:47.635592 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.22.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-168&limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:47.635675 kubelet[3030]: E0805 21:36:47.635684 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.22.168:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-22-168&limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:47.715674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123278086.mount: Deactivated successfully. 
Aug 5 21:36:47.729071 containerd[2153]: time="2024-08-05T21:36:47.728990450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:47.730815 containerd[2153]: time="2024-08-05T21:36:47.730740974Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:47.732674 containerd[2153]: time="2024-08-05T21:36:47.732605187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:36:47.734093 containerd[2153]: time="2024-08-05T21:36:47.734041011Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 5 21:36:47.736457 containerd[2153]: time="2024-08-05T21:36:47.736388475Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:47.738658 containerd[2153]: time="2024-08-05T21:36:47.738610491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:36:47.739067 containerd[2153]: time="2024-08-05T21:36:47.738949503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:47.744953 containerd[2153]: time="2024-08-05T21:36:47.744866451Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:36:47.748625 containerd[2153]: time="2024-08-05T21:36:47.748192935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 521.738427ms" Aug 5 21:36:47.752536 containerd[2153]: time="2024-08-05T21:36:47.752464359Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 532.370127ms" Aug 5 21:36:47.753914 kubelet[3030]: W0805 21:36:47.753816 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.22.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:47.754550 kubelet[3030]: E0805 21:36:47.754511 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.22.168:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:47.822565 containerd[2153]: time="2024-08-05T21:36:47.822005703Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 603.684135ms" Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977358808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977435008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977478136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977512168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.976965784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977056360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977103112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:47.977929 containerd[2153]: time="2024-08-05T21:36:47.977142712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:47.982541 containerd[2153]: time="2024-08-05T21:36:47.980607808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:36:47.982541 containerd[2153]: time="2024-08-05T21:36:47.980700004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:47.982541 containerd[2153]: time="2024-08-05T21:36:47.980760496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:36:47.982541 containerd[2153]: time="2024-08-05T21:36:47.980786212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:36:48.045865 kubelet[3030]: W0805 21:36:48.043095 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.22.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:48.048747 kubelet[3030]: E0805 21:36:48.046616 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.22.168:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:48.151006 containerd[2153]: time="2024-08-05T21:36:48.150947917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-22-168,Uid:4611830be925cbf7a41a9a02f51fb778,Namespace:kube-system,Attempt:0,} returns sandbox id \"72fec725ddf31f9002453fbaab1d565d11da450e3a7ca3b52d981eeade24c0ef\"" Aug 5 21:36:48.161772 kubelet[3030]: E0805 21:36:48.161641 3030 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": dial tcp 172.31.22.168:6443: connect: connection refused" interval="1.6s" Aug 5 21:36:48.162362 containerd[2153]: time="2024-08-05T21:36:48.162199993Z" level=info msg="CreateContainer within sandbox \"72fec725ddf31f9002453fbaab1d565d11da450e3a7ca3b52d981eeade24c0ef\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:36:48.171937 containerd[2153]: time="2024-08-05T21:36:48.171096373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-22-168,Uid:f6a607a86b55d5c805333c8754f7c774,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f040257413d2f08be6f9bc9ca4121cc180fd2491e84664fa847e151562937c1\"" Aug 5 21:36:48.174704 containerd[2153]: time="2024-08-05T21:36:48.174639145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-22-168,Uid:349185eff3c69cc080b8107f6ffcfc10,Namespace:kube-system,Attempt:0,} returns sandbox id \"870369b9f87498f5ea92491448e0b9901a7ad3328d024ca678b89d83be19fd5d\"" Aug 5 21:36:48.180193 containerd[2153]: time="2024-08-05T21:36:48.180097825Z" level=info msg="CreateContainer within sandbox \"2f040257413d2f08be6f9bc9ca4121cc180fd2491e84664fa847e151562937c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:36:48.183188 containerd[2153]: time="2024-08-05T21:36:48.183115477Z" level=info msg="CreateContainer within sandbox \"870369b9f87498f5ea92491448e0b9901a7ad3328d024ca678b89d83be19fd5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:36:48.195552 containerd[2153]: time="2024-08-05T21:36:48.195311677Z" level=info msg="CreateContainer within sandbox \"72fec725ddf31f9002453fbaab1d565d11da450e3a7ca3b52d981eeade24c0ef\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e40f8cfa2a5b9d41a3db205977dc87d72600e407370d368217c78d12d84b656f\"" Aug 5 21:36:48.196980 containerd[2153]: time="2024-08-05T21:36:48.196878949Z" level=info msg="StartContainer for \"e40f8cfa2a5b9d41a3db205977dc87d72600e407370d368217c78d12d84b656f\"" Aug 5 21:36:48.246939 containerd[2153]: time="2024-08-05T21:36:48.245772673Z" level=info msg="CreateContainer within sandbox 
\"2f040257413d2f08be6f9bc9ca4121cc180fd2491e84664fa847e151562937c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5\"" Aug 5 21:36:48.249908 containerd[2153]: time="2024-08-05T21:36:48.248023225Z" level=info msg="StartContainer for \"fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5\"" Aug 5 21:36:48.256462 containerd[2153]: time="2024-08-05T21:36:48.256391701Z" level=info msg="CreateContainer within sandbox \"870369b9f87498f5ea92491448e0b9901a7ad3328d024ca678b89d83be19fd5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d\"" Aug 5 21:36:48.258191 containerd[2153]: time="2024-08-05T21:36:48.258034129Z" level=info msg="StartContainer for \"05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d\"" Aug 5 21:36:48.271260 kubelet[3030]: I0805 21:36:48.271213 3030 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:48.272924 kubelet[3030]: E0805 21:36:48.272860 3030 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.22.168:6443/api/v1/nodes\": dial tcp 172.31.22.168:6443: connect: connection refused" node="ip-172-31-22-168" Aug 5 21:36:48.310765 kubelet[3030]: W0805 21:36:48.310187 3030 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.22.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:48.312372 kubelet[3030]: E0805 21:36:48.311979 3030 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.22.168:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.22.168:6443: connect: connection refused Aug 5 21:36:48.412173 containerd[2153]: time="2024-08-05T21:36:48.412095722Z" level=info msg="StartContainer for \"e40f8cfa2a5b9d41a3db205977dc87d72600e407370d368217c78d12d84b656f\" returns successfully" Aug 5 21:36:48.523918 containerd[2153]: time="2024-08-05T21:36:48.522896498Z" level=info msg="StartContainer for \"05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d\" returns successfully" Aug 5 21:36:48.538742 containerd[2153]: time="2024-08-05T21:36:48.538520487Z" level=info msg="StartContainer for \"fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5\" returns successfully" Aug 5 21:36:49.880900 kubelet[3030]: I0805 21:36:49.877910 3030 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:52.347756 update_engine[2126]: I0805 21:36:52.344774 2126 update_attempter.cc:509] Updating boot flags... 
Aug 5 21:36:52.565952 kubelet[3030]: E0805 21:36:52.565908 3030 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-22-168\" not found" node="ip-172-31-22-168" Aug 5 21:36:52.571821 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3318) Aug 5 21:36:52.649840 kubelet[3030]: I0805 21:36:52.647794 3030 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-22-168" Aug 5 21:36:52.711296 kubelet[3030]: E0805 21:36:52.710994 3030 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-22-168.17e8f2c9f8098fae", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-22-168", UID:"ip-172-31-22-168", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-22-168"}, FirstTimestamp:time.Date(2024, time.August, 5, 21, 36, 46, 732939182, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 21, 36, 46, 732939182, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-22-168"}': 'namespaces "default" not found' (will not retry!) Aug 5 21:36:52.728604 kubelet[3030]: I0805 21:36:52.728553 3030 apiserver.go:52] "Watching apiserver" Aug 5 21:36:52.756502 kubelet[3030]: I0805 21:36:52.756262 3030 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:36:53.472155 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 38 scanned by (udev-worker) (3317) Aug 5 21:36:55.921852 systemd[1]: Reloading requested from client PID 3487 ('systemctl') (unit session-7.scope)... Aug 5 21:36:55.921887 systemd[1]: Reloading... Aug 5 21:36:56.103811 zram_generator::config[3531]: No configuration found. Aug 5 21:36:56.402444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:36:56.587877 systemd[1]: Reloading finished in 665 ms. Aug 5 21:36:56.659389 kubelet[3030]: I0805 21:36:56.658759 3030 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:56.660929 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:56.672376 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:36:56.673129 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:36:56.685431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:36:57.061102 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 21:36:57.069481 (kubelet)[3595]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:36:57.203039 kubelet[3595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:57.203039 kubelet[3595]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:36:57.203039 kubelet[3595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:36:57.203039 kubelet[3595]: I0805 21:36:57.202896 3595 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:36:57.213240 kubelet[3595]: I0805 21:36:57.213169 3595 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 21:36:57.213240 kubelet[3595]: I0805 21:36:57.213223 3595 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:36:57.214913 kubelet[3595]: I0805 21:36:57.213671 3595 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 21:36:57.216825 kubelet[3595]: I0805 21:36:57.216782 3595 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:36:57.219180 kubelet[3595]: I0805 21:36:57.218683 3595 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:36:57.228707 kubelet[3595]: W0805 21:36:57.228670 3595 machine.go:65] Cannot read vendor id correctly, set empty. Aug 5 21:36:57.236216 kubelet[3595]: I0805 21:36:57.236175 3595 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 21:36:57.237561 kubelet[3595]: I0805 21:36:57.237504 3595 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241166 3595 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241249 3595 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241271 3595 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241346 3595 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241536 3595 kubelet.go:393] "Attempting to sync node with API server" Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241566 3595 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:36:57.241901 kubelet[3595]: I0805 21:36:57.241616 3595 kubelet.go:309] "Adding apiserver pod source" Aug 5 21:36:57.242495 kubelet[3595]: I0805 21:36:57.241641 3595 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:36:57.246776 kubelet[3595]: I0805 21:36:57.245247 3595 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:36:57.253445 kubelet[3595]: I0805 21:36:57.249044 3595 server.go:1232] "Started kubelet" Aug 5 21:36:57.258047 kubelet[3595]: I0805 21:36:57.258006 3595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:36:57.261237 kubelet[3595]: I0805 21:36:57.261177 3595 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:36:57.266677 kubelet[3595]: I0805 21:36:57.266528 3595 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 21:36:57.279275 kubelet[3595]: I0805 21:36:57.272641 3595 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:36:57.283849 kubelet[3595]: I0805 21:36:57.279520 3595 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:36:57.283849 kubelet[3595]: E0805 21:36:57.280027 3595 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 21:36:57.283849 kubelet[3595]: E0805 21:36:57.280075 3595 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:36:57.283849 kubelet[3595]: I0805 21:36:57.272691 3595 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:36:57.283849 kubelet[3595]: I0805 21:36:57.281310 3595 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:36:57.283849 kubelet[3595]: I0805 21:36:57.281659 3595 server.go:462] "Adding debug handlers to kubelet server" Aug 5 21:36:57.371553 kubelet[3595]: I0805 21:36:57.370629 3595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:36:57.382216 kubelet[3595]: I0805 21:36:57.382099 3595 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 21:36:57.382629 kubelet[3595]: I0805 21:36:57.382402 3595 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:36:57.382629 kubelet[3595]: I0805 21:36:57.382442 3595 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 21:36:57.382629 kubelet[3595]: E0805 21:36:57.382526 3595 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:36:57.393680 kubelet[3595]: E0805 21:36:57.393494 3595 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Aug 5 21:36:57.430624 kubelet[3595]: I0805 21:36:57.429658 3595 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-22-168" Aug 5 21:36:57.467879 kubelet[3595]: I0805 21:36:57.467702 3595 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-22-168" Aug 5 21:36:57.472338 kubelet[3595]: I0805 21:36:57.470378 3595 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-22-168" Aug 5 21:36:57.483604 kubelet[3595]: E0805 21:36:57.483301 3595 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:36:57.648476 kubelet[3595]: I0805 21:36:57.648349 3595 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:36:57.648685 kubelet[3595]: I0805 21:36:57.648664 3595 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:36:57.648911 kubelet[3595]: I0805 21:36:57.648891 3595 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:36:57.649306 kubelet[3595]: I0805 21:36:57.649285 3595 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:36:57.649485 kubelet[3595]: I0805 21:36:57.649466 3595 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:36:57.649601 kubelet[3595]: I0805 21:36:57.649581 3595 policy_none.go:49] "None policy: Start" Aug 5 21:36:57.651782 kubelet[3595]: I0805 21:36:57.651386 3595 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 21:36:57.651782 kubelet[3595]: I0805 21:36:57.651436 3595 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:36:57.652521 kubelet[3595]: I0805 21:36:57.652467 3595 state_mem.go:75] "Updated machine memory state" 
Aug 5 21:36:57.658011 kubelet[3595]: I0805 21:36:57.657599 3595 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:36:57.661278 kubelet[3595]: I0805 21:36:57.660975 3595 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:36:57.685074 kubelet[3595]: I0805 21:36:57.683597 3595 topology_manager.go:215] "Topology Admit Handler" podUID="4611830be925cbf7a41a9a02f51fb778" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-22-168" Aug 5 21:36:57.685074 kubelet[3595]: I0805 21:36:57.683794 3595 topology_manager.go:215] "Topology Admit Handler" podUID="f6a607a86b55d5c805333c8754f7c774" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.685074 kubelet[3595]: I0805 21:36:57.683893 3595 topology_manager.go:215] "Topology Admit Handler" podUID="349185eff3c69cc080b8107f6ffcfc10" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-22-168" Aug 5 21:36:57.703943 kubelet[3595]: E0805 21:36:57.703900 3595 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-168\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:57.786829 kubelet[3595]: I0805 21:36:57.786787 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-k8s-certs\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.787046 kubelet[3595]: I0805 21:36:57.787026 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-kubeconfig\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.787295 kubelet[3595]: I0805 21:36:57.787273 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.787470 kubelet[3595]: I0805 21:36:57.787452 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-ca-certs\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.787693 kubelet[3595]: I0805 21:36:57.787641 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f6a607a86b55d5c805333c8754f7c774-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-22-168\" (UID: \"f6a607a86b55d5c805333c8754f7c774\") " pod="kube-system/kube-controller-manager-ip-172-31-22-168" Aug 5 21:36:57.787905 kubelet[3595]: I0805 21:36:57.787885 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/349185eff3c69cc080b8107f6ffcfc10-kubeconfig\") pod \"kube-scheduler-ip-172-31-22-168\" (UID: \"349185eff3c69cc080b8107f6ffcfc10\") " pod="kube-system/kube-scheduler-ip-172-31-22-168" Aug 5 21:36:57.788248 kubelet[3595]: I0805 21:36:57.788039 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-ca-certs\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:57.788248 kubelet[3595]: I0805 21:36:57.788092 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-k8s-certs\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:57.788248 kubelet[3595]: I0805 21:36:57.788141 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4611830be925cbf7a41a9a02f51fb778-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-22-168\" (UID: \"4611830be925cbf7a41a9a02f51fb778\") " pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:58.264344 kubelet[3595]: I0805 21:36:58.263969 3595 apiserver.go:52] "Watching apiserver" Aug 5 21:36:58.281218 kubelet[3595]: I0805 21:36:58.281144 3595 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:36:58.612202 kubelet[3595]: E0805 21:36:58.610449 3595 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-22-168\" already exists" pod="kube-system/kube-apiserver-ip-172-31-22-168" Aug 5 21:36:58.651127 kubelet[3595]: I0805 21:36:58.651068 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-22-168" podStartSLOduration=3.650986009 podCreationTimestamp="2024-08-05 21:36:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:58.622471813 +0000 UTC m=+1.539876921" watchObservedRunningTime="2024-08-05 21:36:58.650986009 +0000 UTC m=+1.568391117" Aug 5 21:36:58.679339 kubelet[3595]: I0805 21:36:58.678710 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-22-168" podStartSLOduration=1.678440089 podCreationTimestamp="2024-08-05 21:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:58.653074753 +0000 UTC m=+1.570479849" watchObservedRunningTime="2024-08-05 21:36:58.678440089 +0000 UTC m=+1.595845185" Aug 5 21:36:58.700257 kubelet[3595]: I0805 21:36:58.699911 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-22-168" podStartSLOduration=1.699859441 podCreationTimestamp="2024-08-05 21:36:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:36:58.679068037 +0000 UTC m=+1.596473121" watchObservedRunningTime="2024-08-05 21:36:58.699859441 +0000 UTC m=+1.617264549" Aug 5 21:37:04.667559 sudo[2500]: 
pam_unix(sudo:session): session closed for user root Aug 5 21:37:04.690692 sshd[2496]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:04.700992 systemd[1]: sshd@6-172.31.22.168:22-139.178.68.195:33276.service: Deactivated successfully. Aug 5 21:37:04.708243 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 21:37:04.710206 systemd-logind[2118]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:37:04.713358 systemd-logind[2118]: Removed session 7. Aug 5 21:37:09.297744 kubelet[3595]: I0805 21:37:09.297669 3595 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 21:37:09.299246 containerd[2153]: time="2024-08-05T21:37:09.299083234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 21:37:09.300479 kubelet[3595]: I0805 21:37:09.299493 3595 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 21:37:09.961776 kubelet[3595]: I0805 21:37:09.961692 3595 topology_manager.go:215] "Topology Admit Handler" podUID="adbc8764-4dde-40f5-b3a0-82126267d8da" podNamespace="kube-system" podName="kube-proxy-fsdtg" Aug 5 21:37:09.976830 kubelet[3595]: I0805 21:37:09.976704 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adbc8764-4dde-40f5-b3a0-82126267d8da-lib-modules\") pod \"kube-proxy-fsdtg\" (UID: \"adbc8764-4dde-40f5-b3a0-82126267d8da\") " pod="kube-system/kube-proxy-fsdtg" Aug 5 21:37:09.977301 kubelet[3595]: I0805 21:37:09.977002 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vknqx\" (UniqueName: \"kubernetes.io/projected/adbc8764-4dde-40f5-b3a0-82126267d8da-kube-api-access-vknqx\") pod \"kube-proxy-fsdtg\" (UID: \"adbc8764-4dde-40f5-b3a0-82126267d8da\") " pod="kube-system/kube-proxy-fsdtg" Aug 5 21:37:09.978909 kubelet[3595]: I0805 21:37:09.978858 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adbc8764-4dde-40f5-b3a0-82126267d8da-kube-proxy\") pod \"kube-proxy-fsdtg\" (UID: \"adbc8764-4dde-40f5-b3a0-82126267d8da\") " pod="kube-system/kube-proxy-fsdtg" Aug 5 21:37:09.979080 kubelet[3595]: I0805 21:37:09.978935 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adbc8764-4dde-40f5-b3a0-82126267d8da-xtables-lock\") pod \"kube-proxy-fsdtg\" (UID: \"adbc8764-4dde-40f5-b3a0-82126267d8da\") " pod="kube-system/kube-proxy-fsdtg" Aug 5 21:37:10.207197 kubelet[3595]: I0805 21:37:10.207131 3595 topology_manager.go:215] "Topology Admit Handler" podUID="9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-wjxt5" Aug 5 21:37:10.273211 containerd[2153]: time="2024-08-05T21:37:10.273014386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fsdtg,Uid:adbc8764-4dde-40f5-b3a0-82126267d8da,Namespace:kube-system,Attempt:0,}" Aug 5 21:37:10.281422 kubelet[3595]: I0805 21:37:10.281361 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8-var-lib-calico\") pod \"tigera-operator-76c4974c85-wjxt5\" (UID: 
\"9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8\") " pod="tigera-operator/tigera-operator-76c4974c85-wjxt5" Aug 5 21:37:10.281596 kubelet[3595]: I0805 21:37:10.281443 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdnrw\" (UniqueName: \"kubernetes.io/projected/9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8-kube-api-access-xdnrw\") pod \"tigera-operator-76c4974c85-wjxt5\" (UID: \"9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8\") " pod="tigera-operator/tigera-operator-76c4974c85-wjxt5" Aug 5 21:37:10.484982 containerd[2153]: time="2024-08-05T21:37:10.484815588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:10.485573 containerd[2153]: time="2024-08-05T21:37:10.485427936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:10.485573 containerd[2153]: time="2024-08-05T21:37:10.485543376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:10.486302 containerd[2153]: time="2024-08-05T21:37:10.485634216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:10.531539 containerd[2153]: time="2024-08-05T21:37:10.531369744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wjxt5,Uid:9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8,Namespace:tigera-operator,Attempt:0,}" Aug 5 21:37:10.585464 containerd[2153]: time="2024-08-05T21:37:10.585407196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fsdtg,Uid:adbc8764-4dde-40f5-b3a0-82126267d8da,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c1669701ddff7707702cbb813fb49aeec57605e8b9e144b0aab2f6bc15c885f\"" Aug 5 21:37:10.590980 containerd[2153]: time="2024-08-05T21:37:10.590889012Z" level=info msg="CreateContainer within sandbox \"3c1669701ddff7707702cbb813fb49aeec57605e8b9e144b0aab2f6bc15c885f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 21:37:10.633687 containerd[2153]: time="2024-08-05T21:37:10.632969448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:10.633894 containerd[2153]: time="2024-08-05T21:37:10.633649968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:10.633894 containerd[2153]: time="2024-08-05T21:37:10.633683496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:10.633894 containerd[2153]: time="2024-08-05T21:37:10.633708300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:10.662838 containerd[2153]: time="2024-08-05T21:37:10.662601324Z" level=info msg="CreateContainer within sandbox \"3c1669701ddff7707702cbb813fb49aeec57605e8b9e144b0aab2f6bc15c885f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"962c8efb4d2fc40ef8a33ff7c84247d163240248d6ef6f2cb1e4b5cc95a12881\"" Aug 5 21:37:10.669969 containerd[2153]: time="2024-08-05T21:37:10.668501988Z" level=info msg="StartContainer for \"962c8efb4d2fc40ef8a33ff7c84247d163240248d6ef6f2cb1e4b5cc95a12881\"" Aug 5 21:37:10.759552 containerd[2153]: time="2024-08-05T21:37:10.759346765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wjxt5,Uid:9ceeaa28-5a4b-459f-84e7-aa2d3f5877d8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"951850b6e05c70d13b799864d9f0e38cbabfa4f3d97490a14081cd1462ac7949\"" Aug 5 21:37:10.763301 containerd[2153]: time="2024-08-05T21:37:10.763235377Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 21:37:10.820348 containerd[2153]: time="2024-08-05T21:37:10.820256437Z" level=info msg="StartContainer for \"962c8efb4d2fc40ef8a33ff7c84247d163240248d6ef6f2cb1e4b5cc95a12881\" returns successfully" Aug 5 21:37:12.241277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2777120664.mount: Deactivated successfully. Aug 5 21:37:12.838346 containerd[2153]: time="2024-08-05T21:37:12.838276671Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:12.839928 containerd[2153]: time="2024-08-05T21:37:12.839870535Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473622" Aug 5 21:37:12.841942 containerd[2153]: time="2024-08-05T21:37:12.841842447Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:12.849195 containerd[2153]: time="2024-08-05T21:37:12.849106731Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:12.850843 containerd[2153]: time="2024-08-05T21:37:12.850632003Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.087330554s" Aug 5 21:37:12.850843 containerd[2153]: time="2024-08-05T21:37:12.850691223Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 21:37:12.856459 containerd[2153]: time="2024-08-05T21:37:12.856370799Z" level=info msg="CreateContainer within sandbox \"951850b6e05c70d13b799864d9f0e38cbabfa4f3d97490a14081cd1462ac7949\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 21:37:12.884126 containerd[2153]: time="2024-08-05T21:37:12.883532019Z" level=info msg="CreateContainer within sandbox \"951850b6e05c70d13b799864d9f0e38cbabfa4f3d97490a14081cd1462ac7949\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0\"" Aug 5 21:37:12.883625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3972654069.mount: Deactivated successfully. Aug 5 21:37:12.886931 containerd[2153]: time="2024-08-05T21:37:12.886228047Z" level=info msg="StartContainer for \"8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0\"" Aug 5 21:37:12.990102 containerd[2153]: time="2024-08-05T21:37:12.989930860Z" level=info msg="StartContainer for \"8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0\" returns successfully" Aug 5 21:37:13.587798 kubelet[3595]: I0805 21:37:13.587331 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fsdtg" podStartSLOduration=4.587274783 podCreationTimestamp="2024-08-05 21:37:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:37:11.573342325 +0000 UTC m=+14.490747577" watchObservedRunningTime="2024-08-05 21:37:13.587274783 +0000 UTC m=+16.504679879" Aug 5 21:37:17.404739 kubelet[3595]: I0805 21:37:17.404611 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-wjxt5" podStartSLOduration=5.315591448 podCreationTimestamp="2024-08-05 21:37:10 +0000 UTC" firstStartedPulling="2024-08-05 21:37:10.762290533 +0000 UTC m=+13.679695617" lastFinishedPulling="2024-08-05 21:37:12.851252991 +0000 UTC m=+15.768658075" observedRunningTime="2024-08-05 21:37:13.589229907 +0000 UTC m=+16.506634991" watchObservedRunningTime="2024-08-05 21:37:17.404553906 +0000 UTC m=+20.321958990" Aug 5 21:37:17.507698 kubelet[3595]: I0805 21:37:17.505762 3595 topology_manager.go:215] "Topology Admit Handler" podUID="eed642cd-3605-4ef0-ac40-b1f192711c70" podNamespace="calico-system" podName="calico-typha-78b967b7bb-bn9cf" Aug 5 21:37:17.537696 kubelet[3595]: I0805 21:37:17.537635 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eed642cd-3605-4ef0-ac40-b1f192711c70-typha-certs\") pod \"calico-typha-78b967b7bb-bn9cf\" (UID: \"eed642cd-3605-4ef0-ac40-b1f192711c70\") " pod="calico-system/calico-typha-78b967b7bb-bn9cf" Aug 5 21:37:17.540421 kubelet[3595]: I0805 21:37:17.540353 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eed642cd-3605-4ef0-ac40-b1f192711c70-tigera-ca-bundle\") pod \"calico-typha-78b967b7bb-bn9cf\" (UID: \"eed642cd-3605-4ef0-ac40-b1f192711c70\") " pod="calico-system/calico-typha-78b967b7bb-bn9cf" Aug 5 21:37:17.540595 kubelet[3595]: I0805 21:37:17.540552 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5ddb\" (UniqueName: \"kubernetes.io/projected/eed642cd-3605-4ef0-ac40-b1f192711c70-kube-api-access-d5ddb\") pod \"calico-typha-78b967b7bb-bn9cf\" (UID: \"eed642cd-3605-4ef0-ac40-b1f192711c70\") " pod="calico-system/calico-typha-78b967b7bb-bn9cf" Aug 5 21:37:17.721843 kubelet[3595]: I0805 21:37:17.718991 3595 topology_manager.go:215] "Topology Admit Handler" podUID="2443c384-8b77-4d3d-8e6c-3dd415f0533b" podNamespace="calico-system" podName="calico-node-lz792" Aug 5 21:37:17.815614 containerd[2153]: time="2024-08-05T21:37:17.815496440Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-78b967b7bb-bn9cf,Uid:eed642cd-3605-4ef0-ac40-b1f192711c70,Namespace:calico-system,Attempt:0,}" Aug 5 21:37:17.850194 kubelet[3595]: I0805 21:37:17.849991 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-var-run-calico\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.850373 kubelet[3595]: I0805 21:37:17.850251 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-cni-log-dir\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.851531 kubelet[3595]: I0805 21:37:17.851094 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2443c384-8b77-4d3d-8e6c-3dd415f0533b-tigera-ca-bundle\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.851531 kubelet[3595]: I0805 21:37:17.851259 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-lib-modules\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.852352 kubelet[3595]: I0805 21:37:17.852279 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2443c384-8b77-4d3d-8e6c-3dd415f0533b-node-certs\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.852485 kubelet[3595]: I0805 21:37:17.852405 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-cni-bin-dir\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.853193 kubelet[3595]: I0805 21:37:17.852577 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-flexvol-driver-host\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.854130 kubelet[3595]: I0805 21:37:17.853552 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-policysync\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.854130 kubelet[3595]: I0805 21:37:17.853649 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-var-lib-calico\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " 
pod="calico-system/calico-node-lz792" Aug 5 21:37:17.855968 kubelet[3595]: I0805 21:37:17.853711 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7r9g\" (UniqueName: \"kubernetes.io/projected/2443c384-8b77-4d3d-8e6c-3dd415f0533b-kube-api-access-k7r9g\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.857080 kubelet[3595]: I0805 21:37:17.857033 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-xtables-lock\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.858163 kubelet[3595]: I0805 21:37:17.857292 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2443c384-8b77-4d3d-8e6c-3dd415f0533b-cni-net-dir\") pod \"calico-node-lz792\" (UID: \"2443c384-8b77-4d3d-8e6c-3dd415f0533b\") " pod="calico-system/calico-node-lz792" Aug 5 21:37:17.894308 containerd[2153]: time="2024-08-05T21:37:17.893428484Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:17.894308 containerd[2153]: time="2024-08-05T21:37:17.893593676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:17.894308 containerd[2153]: time="2024-08-05T21:37:17.893640440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:17.894308 containerd[2153]: time="2024-08-05T21:37:17.893675780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:17.916798 kubelet[3595]: I0805 21:37:17.913760 3595 topology_manager.go:215] "Topology Admit Handler" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" podNamespace="calico-system" podName="csi-node-driver-wcf9z" Aug 5 21:37:17.923138 kubelet[3595]: E0805 21:37:17.918465 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:18.010298 kubelet[3595]: E0805 21:37:18.008310 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.011009 kubelet[3595]: W0805 21:37:18.009939 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.011009 kubelet[3595]: E0805 21:37:18.010939 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.019610 kubelet[3595]: E0805 21:37:18.019103 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.019610 kubelet[3595]: W0805 21:37:18.019166 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.019610 kubelet[3595]: E0805 21:37:18.019224 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.021300 kubelet[3595]: E0805 21:37:18.020856 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.021300 kubelet[3595]: W0805 21:37:18.020888 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.021300 kubelet[3595]: E0805 21:37:18.021035 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.026559 kubelet[3595]: E0805 21:37:18.026201 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.026559 kubelet[3595]: W0805 21:37:18.026238 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.026559 kubelet[3595]: E0805 21:37:18.026301 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.035398 kubelet[3595]: E0805 21:37:18.034806 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.036806 kubelet[3595]: W0805 21:37:18.034846 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.042173 kubelet[3595]: E0805 21:37:18.038711 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.048932 kubelet[3595]: E0805 21:37:18.048881 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.048932 kubelet[3595]: W0805 21:37:18.048923 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.049548 kubelet[3595]: E0805 21:37:18.048976 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.054550 kubelet[3595]: E0805 21:37:18.053925 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.054550 kubelet[3595]: W0805 21:37:18.053989 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.056156 kubelet[3595]: E0805 21:37:18.055431 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.057473 kubelet[3595]: E0805 21:37:18.056937 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.057473 kubelet[3595]: W0805 21:37:18.056968 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.059862 kubelet[3595]: E0805 21:37:18.058624 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.061936 kubelet[3595]: E0805 21:37:18.061382 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.061936 kubelet[3595]: W0805 21:37:18.061422 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.061936 kubelet[3595]: E0805 21:37:18.061876 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.064407 kubelet[3595]: E0805 21:37:18.063861 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.064407 kubelet[3595]: W0805 21:37:18.064012 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.064407 kubelet[3595]: E0805 21:37:18.064124 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.066144 kubelet[3595]: E0805 21:37:18.065687 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.066144 kubelet[3595]: W0805 21:37:18.065780 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.066144 kubelet[3595]: E0805 21:37:18.065819 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.067750 kubelet[3595]: E0805 21:37:18.067441 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.067750 kubelet[3595]: W0805 21:37:18.067491 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.067750 kubelet[3595]: E0805 21:37:18.067527 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.069424 kubelet[3595]: E0805 21:37:18.069187 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.069424 kubelet[3595]: W0805 21:37:18.069259 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.069424 kubelet[3595]: E0805 21:37:18.069298 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.071754 kubelet[3595]: E0805 21:37:18.070459 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.071754 kubelet[3595]: W0805 21:37:18.070511 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.071754 kubelet[3595]: E0805 21:37:18.070564 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.073162 kubelet[3595]: E0805 21:37:18.072583 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.073162 kubelet[3595]: W0805 21:37:18.072641 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.073162 kubelet[3595]: E0805 21:37:18.072708 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.075696 kubelet[3595]: E0805 21:37:18.075375 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.075696 kubelet[3595]: W0805 21:37:18.075461 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.077026 kubelet[3595]: E0805 21:37:18.075519 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.077026 kubelet[3595]: I0805 21:37:18.076263 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0cd76c5e-7095-481c-905a-80c88f6f6b05-socket-dir\") pod \"csi-node-driver-wcf9z\" (UID: \"0cd76c5e-7095-481c-905a-80c88f6f6b05\") " pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:18.079119 containerd[2153]: time="2024-08-05T21:37:18.078947345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lz792,Uid:2443c384-8b77-4d3d-8e6c-3dd415f0533b,Namespace:calico-system,Attempt:0,}" Aug 5 21:37:18.080255 kubelet[3595]: E0805 21:37:18.079829 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.080255 kubelet[3595]: W0805 21:37:18.080019 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.081947 kubelet[3595]: E0805 21:37:18.080824 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.081947 kubelet[3595]: I0805 21:37:18.081064 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0cd76c5e-7095-481c-905a-80c88f6f6b05-varrun\") pod \"csi-node-driver-wcf9z\" (UID: \"0cd76c5e-7095-481c-905a-80c88f6f6b05\") " pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:18.088855 kubelet[3595]: E0805 21:37:18.088611 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.088855 kubelet[3595]: W0805 21:37:18.088644 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.089597 kubelet[3595]: E0805 21:37:18.089539 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.091807 kubelet[3595]: E0805 21:37:18.090889 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.091807 kubelet[3595]: W0805 21:37:18.091756 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.093144 kubelet[3595]: E0805 21:37:18.092330 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.094199 kubelet[3595]: E0805 21:37:18.093816 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.094199 kubelet[3595]: W0805 21:37:18.093850 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.094199 kubelet[3595]: E0805 21:37:18.094069 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.094199 kubelet[3595]: I0805 21:37:18.094161 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0cd76c5e-7095-481c-905a-80c88f6f6b05-kubelet-dir\") pod \"csi-node-driver-wcf9z\" (UID: \"0cd76c5e-7095-481c-905a-80c88f6f6b05\") " pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:18.097364 kubelet[3595]: E0805 21:37:18.096884 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.097364 kubelet[3595]: W0805 21:37:18.096919 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.097364 kubelet[3595]: E0805 21:37:18.097062 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.097790 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.101267 kubelet[3595]: W0805 21:37:18.097816 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.098052 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.098283 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.101267 kubelet[3595]: W0805 21:37:18.098299 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.098367 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.099371 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.101267 kubelet[3595]: W0805 21:37:18.099396 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.099438 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.101267 kubelet[3595]: E0805 21:37:18.099905 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.102663 kubelet[3595]: W0805 21:37:18.099924 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.102663 kubelet[3595]: E0805 21:37:18.100041 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.102663 kubelet[3595]: E0805 21:37:18.100379 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.102663 kubelet[3595]: W0805 21:37:18.100398 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.102663 kubelet[3595]: E0805 21:37:18.100485 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.105429 kubelet[3595]: E0805 21:37:18.103475 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.105429 kubelet[3595]: W0805 21:37:18.103503 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.105429 kubelet[3595]: E0805 21:37:18.103549 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.107891 kubelet[3595]: E0805 21:37:18.107028 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.107891 kubelet[3595]: W0805 21:37:18.107198 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.107891 kubelet[3595]: E0805 21:37:18.107703 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.114986 kubelet[3595]: E0805 21:37:18.113180 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.114986 kubelet[3595]: W0805 21:37:18.113220 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.114986 kubelet[3595]: E0805 21:37:18.113274 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.119740 kubelet[3595]: E0805 21:37:18.118746 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.119740 kubelet[3595]: W0805 21:37:18.118783 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.119740 kubelet[3595]: E0805 21:37:18.119270 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.119740 kubelet[3595]: W0805 21:37:18.119292 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.119740 kubelet[3595]: E0805 21:37:18.119324 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.123373 kubelet[3595]: E0805 21:37:18.120265 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.123373 kubelet[3595]: W0805 21:37:18.120295 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.123373 kubelet[3595]: E0805 21:37:18.120333 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.123373 kubelet[3595]: E0805 21:37:18.120391 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.178382 containerd[2153]: time="2024-08-05T21:37:18.177276234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:18.178382 containerd[2153]: time="2024-08-05T21:37:18.177409842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:18.178382 containerd[2153]: time="2024-08-05T21:37:18.177452910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:18.178382 containerd[2153]: time="2024-08-05T21:37:18.177486462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:18.224316 kubelet[3595]: E0805 21:37:18.221544 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.224316 kubelet[3595]: W0805 21:37:18.221606 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.224316 kubelet[3595]: E0805 21:37:18.221665 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.224316 kubelet[3595]: E0805 21:37:18.222820 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.224316 kubelet[3595]: W0805 21:37:18.222869 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.224316 kubelet[3595]: E0805 21:37:18.222934 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.224316 kubelet[3595]: I0805 21:37:18.222982 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0cd76c5e-7095-481c-905a-80c88f6f6b05-registration-dir\") pod \"csi-node-driver-wcf9z\" (UID: \"0cd76c5e-7095-481c-905a-80c88f6f6b05\") " pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:18.224316 kubelet[3595]: E0805 21:37:18.223496 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.224316 kubelet[3595]: W0805 21:37:18.223515 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.225030 kubelet[3595]: E0805 21:37:18.223566 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.225030 kubelet[3595]: I0805 21:37:18.223605 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7wkr\" (UniqueName: \"kubernetes.io/projected/0cd76c5e-7095-481c-905a-80c88f6f6b05-kube-api-access-l7wkr\") pod \"csi-node-driver-wcf9z\" (UID: \"0cd76c5e-7095-481c-905a-80c88f6f6b05\") " pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:18.225030 kubelet[3595]: E0805 21:37:18.224120 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.225030 kubelet[3595]: W0805 21:37:18.224159 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.225030 kubelet[3595]: E0805 21:37:18.224188 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.225030 kubelet[3595]: E0805 21:37:18.224796 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.225030 kubelet[3595]: W0805 21:37:18.224839 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.225030 kubelet[3595]: E0805 21:37:18.224867 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.226403 kubelet[3595]: E0805 21:37:18.225335 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.226403 kubelet[3595]: W0805 21:37:18.225764 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.226403 kubelet[3595]: E0805 21:37:18.225922 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.226558 kubelet[3595]: E0805 21:37:18.226515 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.226558 kubelet[3595]: W0805 21:37:18.226537 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.226687 kubelet[3595]: E0805 21:37:18.226565 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.227910 kubelet[3595]: E0805 21:37:18.226976 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.227910 kubelet[3595]: W0805 21:37:18.227018 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.227910 kubelet[3595]: E0805 21:37:18.227047 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.229527 kubelet[3595]: E0805 21:37:18.228820 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.229527 kubelet[3595]: W0805 21:37:18.228856 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.229527 kubelet[3595]: E0805 21:37:18.229190 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.229527 kubelet[3595]: E0805 21:37:18.229249 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.229527 kubelet[3595]: W0805 21:37:18.229264 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.229939 kubelet[3595]: E0805 21:37:18.229893 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.229994 kubelet[3595]: W0805 21:37:18.229915 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.230388 kubelet[3595]: E0805 21:37:18.230213 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.230388 kubelet[3595]: E0805 21:37:18.230267 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.230388 kubelet[3595]: E0805 21:37:18.230335 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.230388 kubelet[3595]: W0805 21:37:18.230352 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.231080 kubelet[3595]: E0805 21:37:18.230677 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.231080 kubelet[3595]: E0805 21:37:18.230820 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.231080 kubelet[3595]: W0805 21:37:18.230836 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.231080 kubelet[3595]: E0805 21:37:18.230985 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.231996 kubelet[3595]: E0805 21:37:18.231577 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.231996 kubelet[3595]: W0805 21:37:18.231607 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.231996 kubelet[3595]: E0805 21:37:18.231661 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.233959 kubelet[3595]: E0805 21:37:18.233610 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.233959 kubelet[3595]: W0805 21:37:18.233651 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.233959 kubelet[3595]: E0805 21:37:18.233704 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.234847 kubelet[3595]: E0805 21:37:18.234639 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.234847 kubelet[3595]: W0805 21:37:18.234666 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.234847 kubelet[3595]: E0805 21:37:18.234773 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.237708 kubelet[3595]: E0805 21:37:18.237276 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.237708 kubelet[3595]: W0805 21:37:18.237321 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.237708 kubelet[3595]: E0805 21:37:18.237409 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.239686 kubelet[3595]: E0805 21:37:18.239315 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.239686 kubelet[3595]: W0805 21:37:18.239348 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.240534 kubelet[3595]: E0805 21:37:18.240293 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.241026 kubelet[3595]: E0805 21:37:18.240876 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.241026 kubelet[3595]: W0805 21:37:18.240904 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.241523 kubelet[3595]: E0805 21:37:18.241372 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.242865 kubelet[3595]: E0805 21:37:18.242294 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.242865 kubelet[3595]: W0805 21:37:18.242324 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.242865 kubelet[3595]: E0805 21:37:18.242362 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.244151 kubelet[3595]: E0805 21:37:18.244002 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.244849 kubelet[3595]: W0805 21:37:18.244657 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.244849 kubelet[3595]: E0805 21:37:18.244710 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.298966 containerd[2153]: time="2024-08-05T21:37:18.298140498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b967b7bb-bn9cf,Uid:eed642cd-3605-4ef0-ac40-b1f192711c70,Namespace:calico-system,Attempt:0,} returns sandbox id \"a64d434bdfb26a0d3b146e6ed79b18399892a014af7b8a980bb31006ee5d18ba\"" Aug 5 21:37:18.319031 containerd[2153]: time="2024-08-05T21:37:18.310169766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:37:18.329314 kubelet[3595]: E0805 21:37:18.328985 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.329706 kubelet[3595]: W0805 21:37:18.329565 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.329706 kubelet[3595]: E0805 21:37:18.329662 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.332937 kubelet[3595]: E0805 21:37:18.332837 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.334337 kubelet[3595]: W0805 21:37:18.332907 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.334534 kubelet[3595]: E0805 21:37:18.334493 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.335795 kubelet[3595]: E0805 21:37:18.335281 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.335795 kubelet[3595]: W0805 21:37:18.335309 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.335795 kubelet[3595]: E0805 21:37:18.335356 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.337106 kubelet[3595]: E0805 21:37:18.336837 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.337106 kubelet[3595]: W0805 21:37:18.336905 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.337106 kubelet[3595]: E0805 21:37:18.337007 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.339516 kubelet[3595]: E0805 21:37:18.339144 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.339516 kubelet[3595]: W0805 21:37:18.339175 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.339516 kubelet[3595]: E0805 21:37:18.339212 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.342925 kubelet[3595]: E0805 21:37:18.342195 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.342925 kubelet[3595]: W0805 21:37:18.342225 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.344804 kubelet[3595]: E0805 21:37:18.344772 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.345790 kubelet[3595]: E0805 21:37:18.345741 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.346510 kubelet[3595]: W0805 21:37:18.346229 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.346510 kubelet[3595]: E0805 21:37:18.346285 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.348706 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.349841 kubelet[3595]: W0805 21:37:18.348755 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.348790 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.349146 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.349841 kubelet[3595]: W0805 21:37:18.349165 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.349191 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.349649 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.349841 kubelet[3595]: W0805 21:37:18.349667 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.349841 kubelet[3595]: E0805 21:37:18.349692 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:18.386873 kubelet[3595]: E0805 21:37:18.386816 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:18.387222 kubelet[3595]: W0805 21:37:18.387154 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:18.387632 kubelet[3595]: E0805 21:37:18.387447 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:18.396978 containerd[2153]: time="2024-08-05T21:37:18.396630475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lz792,Uid:2443c384-8b77-4d3d-8e6c-3dd415f0533b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\"" Aug 5 21:37:19.385371 kubelet[3595]: E0805 21:37:19.383262 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:21.090697 containerd[2153]: time="2024-08-05T21:37:21.090644900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:21.094625 containerd[2153]: time="2024-08-05T21:37:21.094518680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:37:21.097063 containerd[2153]: time="2024-08-05T21:37:21.096975068Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:21.105952 containerd[2153]: time="2024-08-05T21:37:21.105873320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:21.109958 containerd[2153]: time="2024-08-05T21:37:21.109626212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.799389078s" Aug 5 21:37:21.111335 containerd[2153]: time="2024-08-05T21:37:21.111254972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:37:21.116270 containerd[2153]: time="2024-08-05T21:37:21.114964544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:37:21.134093 containerd[2153]: time="2024-08-05T21:37:21.134043368Z" level=info msg="CreateContainer within sandbox \"a64d434bdfb26a0d3b146e6ed79b18399892a014af7b8a980bb31006ee5d18ba\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:37:21.176589 containerd[2153]: time="2024-08-05T21:37:21.175677369Z" level=info msg="CreateContainer within sandbox \"a64d434bdfb26a0d3b146e6ed79b18399892a014af7b8a980bb31006ee5d18ba\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0efdcb0a810c7d88f5ae2b46bc50415e894b9486205a02497ef5162a3fa28115\"" Aug 5 21:37:21.179001 containerd[2153]: time="2024-08-05T21:37:21.178613097Z" level=info msg="StartContainer for \"0efdcb0a810c7d88f5ae2b46bc50415e894b9486205a02497ef5162a3fa28115\"" Aug 5 21:37:21.364922 containerd[2153]: time="2024-08-05T21:37:21.364571734Z" level=info msg="StartContainer for \"0efdcb0a810c7d88f5ae2b46bc50415e894b9486205a02497ef5162a3fa28115\" returns successfully" Aug 5 21:37:21.386777 kubelet[3595]: 
E0805 21:37:21.384127 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:21.654416 kubelet[3595]: E0805 21:37:21.653043 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.654416 kubelet[3595]: W0805 21:37:21.653098 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.654416 kubelet[3595]: E0805 21:37:21.653136 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.654416 kubelet[3595]: E0805 21:37:21.653585 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.654416 kubelet[3595]: W0805 21:37:21.653629 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.654416 kubelet[3595]: E0805 21:37:21.653661 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.657415 kubelet[3595]: E0805 21:37:21.657359 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.657415 kubelet[3595]: W0805 21:37:21.657401 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.657621 kubelet[3595]: E0805 21:37:21.657444 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.659993 kubelet[3595]: E0805 21:37:21.659941 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.659993 kubelet[3595]: W0805 21:37:21.659994 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.660186 kubelet[3595]: E0805 21:37:21.660036 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.663046 kubelet[3595]: E0805 21:37:21.662996 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.663046 kubelet[3595]: W0805 21:37:21.663035 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.663234 kubelet[3595]: E0805 21:37:21.663113 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.666015 kubelet[3595]: E0805 21:37:21.665966 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.666015 kubelet[3595]: W0805 21:37:21.666004 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.666225 kubelet[3595]: E0805 21:37:21.666053 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.666603 kubelet[3595]: E0805 21:37:21.666566 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.666603 kubelet[3595]: W0805 21:37:21.666595 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.666757 kubelet[3595]: E0805 21:37:21.666627 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.669335 kubelet[3595]: E0805 21:37:21.669283 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.669335 kubelet[3595]: W0805 21:37:21.669325 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.669551 kubelet[3595]: E0805 21:37:21.669365 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.669883 kubelet[3595]: E0805 21:37:21.669847 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.669883 kubelet[3595]: W0805 21:37:21.669877 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.670035 kubelet[3595]: E0805 21:37:21.669907 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.671824 kubelet[3595]: E0805 21:37:21.671775 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.671824 kubelet[3595]: W0805 21:37:21.671814 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.674646 kubelet[3595]: E0805 21:37:21.671857 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.676021 kubelet[3595]: E0805 21:37:21.675985 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.676208 kubelet[3595]: W0805 21:37:21.676181 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.676451 kubelet[3595]: E0805 21:37:21.676425 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.678070 kubelet[3595]: E0805 21:37:21.678035 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.679084 kubelet[3595]: W0805 21:37:21.678333 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.679396 kubelet[3595]: E0805 21:37:21.679368 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.680574 kubelet[3595]: E0805 21:37:21.680375 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.681820 kubelet[3595]: W0805 21:37:21.680743 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.681820 kubelet[3595]: E0805 21:37:21.680832 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.682559 kubelet[3595]: E0805 21:37:21.682315 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.682559 kubelet[3595]: W0805 21:37:21.682347 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.682559 kubelet[3595]: E0805 21:37:21.682383 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.683503 kubelet[3595]: E0805 21:37:21.683301 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.683503 kubelet[3595]: W0805 21:37:21.683333 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.683503 kubelet[3595]: E0805 21:37:21.683370 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.685700 kubelet[3595]: E0805 21:37:21.685661 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.686046 kubelet[3595]: W0805 21:37:21.685892 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.686046 kubelet[3595]: E0805 21:37:21.685937 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.687456 kubelet[3595]: E0805 21:37:21.687193 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.687456 kubelet[3595]: W0805 21:37:21.687226 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.687456 kubelet[3595]: E0805 21:37:21.687270 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.689441 kubelet[3595]: E0805 21:37:21.689296 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.689441 kubelet[3595]: W0805 21:37:21.689327 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.689441 kubelet[3595]: E0805 21:37:21.689411 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.690474 kubelet[3595]: E0805 21:37:21.690006 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.690474 kubelet[3595]: W0805 21:37:21.690032 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.690474 kubelet[3595]: E0805 21:37:21.690082 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.691675 kubelet[3595]: E0805 21:37:21.691382 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.691675 kubelet[3595]: W0805 21:37:21.691413 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.691675 kubelet[3595]: E0805 21:37:21.691468 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.694110 kubelet[3595]: E0805 21:37:21.694059 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.694110 kubelet[3595]: W0805 21:37:21.694097 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.694931 kubelet[3595]: E0805 21:37:21.694153 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.695917 kubelet[3595]: E0805 21:37:21.695868 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.695917 kubelet[3595]: W0805 21:37:21.695907 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.697319 kubelet[3595]: E0805 21:37:21.696789 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.699941 kubelet[3595]: E0805 21:37:21.699893 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.699941 kubelet[3595]: W0805 21:37:21.699930 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.700345 kubelet[3595]: E0805 21:37:21.700082 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.703044 kubelet[3595]: E0805 21:37:21.702978 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.703044 kubelet[3595]: W0805 21:37:21.703027 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.703564 kubelet[3595]: E0805 21:37:21.703196 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.706387 kubelet[3595]: E0805 21:37:21.706327 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.706387 kubelet[3595]: W0805 21:37:21.706373 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.707013 kubelet[3595]: E0805 21:37:21.706641 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.714340 kubelet[3595]: E0805 21:37:21.714287 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.714340 kubelet[3595]: W0805 21:37:21.714329 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.714673 kubelet[3595]: E0805 21:37:21.714480 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.717153 kubelet[3595]: E0805 21:37:21.717086 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.717153 kubelet[3595]: W0805 21:37:21.717137 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.717863 kubelet[3595]: E0805 21:37:21.717224 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.721042 kubelet[3595]: E0805 21:37:21.720987 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.721042 kubelet[3595]: W0805 21:37:21.721027 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.721583 kubelet[3595]: E0805 21:37:21.721261 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.726021 kubelet[3595]: E0805 21:37:21.725970 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.726021 kubelet[3595]: W0805 21:37:21.726008 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.726503 kubelet[3595]: E0805 21:37:21.726064 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:21.728553 kubelet[3595]: E0805 21:37:21.728330 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.728553 kubelet[3595]: W0805 21:37:21.728365 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.729215 kubelet[3595]: E0805 21:37:21.728818 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.730367 kubelet[3595]: E0805 21:37:21.730334 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.731481 kubelet[3595]: W0805 21:37:21.730515 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.731481 kubelet[3595]: E0805 21:37:21.730564 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.733804 kubelet[3595]: E0805 21:37:21.733199 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.735160 kubelet[3595]: W0805 21:37:21.734869 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.735160 kubelet[3595]: E0805 21:37:21.735097 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:21.737209 kubelet[3595]: E0805 21:37:21.736931 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:21.737209 kubelet[3595]: W0805 21:37:21.736964 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:21.737209 kubelet[3595]: E0805 21:37:21.737003 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.645928 kubelet[3595]: I0805 21:37:22.642740 3595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:37:22.695490 kubelet[3595]: E0805 21:37:22.694944 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.695490 kubelet[3595]: W0805 21:37:22.694973 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.695490 kubelet[3595]: E0805 21:37:22.695122 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.698145 kubelet[3595]: E0805 21:37:22.697798 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.698145 kubelet[3595]: W0805 21:37:22.697993 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.698145 kubelet[3595]: E0805 21:37:22.698030 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.701999 kubelet[3595]: E0805 21:37:22.700488 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.701999 kubelet[3595]: W0805 21:37:22.700555 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.701999 kubelet[3595]: E0805 21:37:22.700595 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.704170 kubelet[3595]: E0805 21:37:22.704135 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.704826 kubelet[3595]: W0805 21:37:22.704380 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.704826 kubelet[3595]: E0805 21:37:22.704428 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.706849 kubelet[3595]: E0805 21:37:22.706455 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.706849 kubelet[3595]: W0805 21:37:22.706487 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.706849 kubelet[3595]: E0805 21:37:22.706519 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.708221 kubelet[3595]: E0805 21:37:22.708175 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.708566 kubelet[3595]: W0805 21:37:22.708399 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.708566 kubelet[3595]: E0805 21:37:22.708445 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.709786 kubelet[3595]: E0805 21:37:22.709580 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.709786 kubelet[3595]: W0805 21:37:22.709608 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.709786 kubelet[3595]: E0805 21:37:22.709666 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.711466 kubelet[3595]: E0805 21:37:22.711022 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.711466 kubelet[3595]: W0805 21:37:22.711052 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.711466 kubelet[3595]: E0805 21:37:22.711132 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.713059 kubelet[3595]: E0805 21:37:22.712483 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.713059 kubelet[3595]: W0805 21:37:22.712635 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.713059 kubelet[3595]: E0805 21:37:22.712765 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.714687 kubelet[3595]: E0805 21:37:22.714017 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.714687 kubelet[3595]: W0805 21:37:22.714048 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.714687 kubelet[3595]: E0805 21:37:22.714140 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.716058 kubelet[3595]: E0805 21:37:22.715501 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.716058 kubelet[3595]: W0805 21:37:22.715622 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.716058 kubelet[3595]: E0805 21:37:22.715941 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.717866 kubelet[3595]: E0805 21:37:22.717154 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.717866 kubelet[3595]: W0805 21:37:22.717180 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.717866 kubelet[3595]: E0805 21:37:22.717213 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.719102 kubelet[3595]: E0805 21:37:22.718586 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.719102 kubelet[3595]: W0805 21:37:22.718621 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.719102 kubelet[3595]: E0805 21:37:22.718657 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.719850 kubelet[3595]: E0805 21:37:22.719816 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.720581 kubelet[3595]: W0805 21:37:22.720159 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.720581 kubelet[3595]: E0805 21:37:22.720207 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.721131 kubelet[3595]: E0805 21:37:22.721103 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.721795 kubelet[3595]: W0805 21:37:22.721424 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.721795 kubelet[3595]: E0805 21:37:22.721488 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.724060 kubelet[3595]: E0805 21:37:22.724010 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.724060 kubelet[3595]: W0805 21:37:22.724049 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.724289 kubelet[3595]: E0805 21:37:22.724090 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.727815 kubelet[3595]: E0805 21:37:22.726583 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.727815 kubelet[3595]: W0805 21:37:22.726691 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.727815 kubelet[3595]: E0805 21:37:22.726810 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.727806 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.730021 kubelet[3595]: W0805 21:37:22.727872 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.727925 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.728423 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.730021 kubelet[3595]: W0805 21:37:22.728510 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.728848 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.729565 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.730021 kubelet[3595]: W0805 21:37:22.729629 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.730021 kubelet[3595]: E0805 21:37:22.729961 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.730531 kubelet[3595]: E0805 21:37:22.730486 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.730531 kubelet[3595]: W0805 21:37:22.730507 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.732442 kubelet[3595]: E0805 21:37:22.730677 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.732442 kubelet[3595]: E0805 21:37:22.731130 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.732442 kubelet[3595]: W0805 21:37:22.731150 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.732442 kubelet[3595]: E0805 21:37:22.731227 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.732442 kubelet[3595]: E0805 21:37:22.731953 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.733214 kubelet[3595]: W0805 21:37:22.733098 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.733344 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.733962 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.735485 kubelet[3595]: W0805 21:37:22.733986 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.734164 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.734545 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.735485 kubelet[3595]: W0805 21:37:22.734568 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.734852 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.735141 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.735485 kubelet[3595]: W0805 21:37:22.735159 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.735485 kubelet[3595]: E0805 21:37:22.735419 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.739130 kubelet[3595]: E0805 21:37:22.736299 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.739130 kubelet[3595]: W0805 21:37:22.736324 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.739130 kubelet[3595]: E0805 21:37:22.737103 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.739130 kubelet[3595]: E0805 21:37:22.737796 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.739130 kubelet[3595]: W0805 21:37:22.737901 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.740286 kubelet[3595]: E0805 21:37:22.739568 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.743692 kubelet[3595]: E0805 21:37:22.742812 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.743692 kubelet[3595]: W0805 21:37:22.742879 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.747101 kubelet[3595]: E0805 21:37:22.746910 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.747101 kubelet[3595]: W0805 21:37:22.746953 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.747936 kubelet[3595]: E0805 21:37:22.747698 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.747936 kubelet[3595]: W0805 21:37:22.747796 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.747936 kubelet[3595]: E0805 21:37:22.747882 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.749314 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.753365 kubelet[3595]: W0805 21:37:22.749342 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.749392 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.749445 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.750810 3595 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:37:22.753365 kubelet[3595]: W0805 21:37:22.750837 3595 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.750884 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:37:22.753365 kubelet[3595]: E0805 21:37:22.750936 3595 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:37:22.764129 containerd[2153]: time="2024-08-05T21:37:22.764066797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:22.766749 containerd[2153]: time="2024-08-05T21:37:22.766666333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:37:22.768943 containerd[2153]: time="2024-08-05T21:37:22.768884761Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:22.777937 containerd[2153]: time="2024-08-05T21:37:22.777841585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:22.781458 containerd[2153]: time="2024-08-05T21:37:22.780683329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.665650169s" Aug 5 21:37:22.781458 containerd[2153]: time="2024-08-05T21:37:22.780791089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:37:22.787405 containerd[2153]: time="2024-08-05T21:37:22.786669409Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:37:22.815597 containerd[2153]: time="2024-08-05T21:37:22.815339281Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d\"" Aug 5 21:37:22.817133 containerd[2153]: time="2024-08-05T21:37:22.816407533Z" level=info msg="StartContainer for \"f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d\"" Aug 5 21:37:23.024659 containerd[2153]: time="2024-08-05T21:37:23.024273598Z" level=info msg="StartContainer for \"f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d\" returns successfully" Aug 5 21:37:23.149940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d-rootfs.mount: Deactivated successfully. 
Aug 5 21:37:23.383966 kubelet[3595]: E0805 21:37:23.383356 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:23.693377 kubelet[3595]: I0805 21:37:23.690473 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-78b967b7bb-bn9cf" podStartSLOduration=3.8863302109999998 podCreationTimestamp="2024-08-05 21:37:17 +0000 UTC" firstStartedPulling="2024-08-05 21:37:18.308216418 +0000 UTC m=+21.225621502" lastFinishedPulling="2024-08-05 21:37:21.112266476 +0000 UTC m=+24.029671548" observedRunningTime="2024-08-05 21:37:21.673888355 +0000 UTC m=+24.591293463" watchObservedRunningTime="2024-08-05 21:37:23.690380257 +0000 UTC m=+26.607785341" Aug 5 21:37:23.886220 containerd[2153]: time="2024-08-05T21:37:23.886077266Z" level=info msg="shim disconnected" id=f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d namespace=k8s.io Aug 5 21:37:23.886220 containerd[2153]: time="2024-08-05T21:37:23.886148438Z" level=warning msg="cleaning up after shim disconnected" id=f68cc1b8f5fdf9416543d83758c22f7862da846c4abced113ecf9a8ab0c8be5d namespace=k8s.io Aug 5 21:37:23.886220 containerd[2153]: time="2024-08-05T21:37:23.886170902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:37:24.666153 containerd[2153]: time="2024-08-05T21:37:24.666089078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 21:37:25.389867 kubelet[3595]: E0805 21:37:25.389813 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:27.385501 kubelet[3595]: E0805 21:37:27.385297 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:28.345860 containerd[2153]: time="2024-08-05T21:37:28.345784996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:28.348369 containerd[2153]: time="2024-08-05T21:37:28.348295936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 21:37:28.350065 containerd[2153]: time="2024-08-05T21:37:28.349987120Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:28.355744 containerd[2153]: time="2024-08-05T21:37:28.355622332Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:28.357300 containerd[2153]: time="2024-08-05T21:37:28.357079480Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 3.690923646s" Aug 5 21:37:28.357300 containerd[2153]: time="2024-08-05T21:37:28.357138604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 21:37:28.361955 containerd[2153]: time="2024-08-05T21:37:28.360506416Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 21:37:28.389646 containerd[2153]: time="2024-08-05T21:37:28.389586940Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053\"" Aug 5 21:37:28.390883 containerd[2153]: time="2024-08-05T21:37:28.390836980Z" level=info msg="StartContainer for \"e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053\"" Aug 5 21:37:28.509979 containerd[2153]: time="2024-08-05T21:37:28.509913077Z" level=info msg="StartContainer for \"e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053\" returns successfully" Aug 5 21:37:29.386343 kubelet[3595]: E0805 21:37:29.385827 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:29.876989 containerd[2153]: time="2024-08-05T21:37:29.876915800Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:37:29.921447 kubelet[3595]: I0805 21:37:29.921374 3595 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 21:37:29.924030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053-rootfs.mount: Deactivated successfully. 
Aug 5 21:37:29.967117 kubelet[3595]: I0805 21:37:29.967052 3595 topology_manager.go:215] "Topology Admit Handler" podUID="03afc324-8123-4d46-966f-b1a0fc0166c5" podNamespace="kube-system" podName="coredns-5dd5756b68-dwgx8" Aug 5 21:37:29.976875 kubelet[3595]: I0805 21:37:29.972544 3595 topology_manager.go:215] "Topology Admit Handler" podUID="07a990f4-f782-4270-aebc-bc009f6009d9" podNamespace="kube-system" podName="coredns-5dd5756b68-4f7nz" Aug 5 21:37:29.981852 kubelet[3595]: I0805 21:37:29.977859 3595 topology_manager.go:215] "Topology Admit Handler" podUID="b41a057f-c142-46dc-88b9-42e9659de76a" podNamespace="calico-system" podName="calico-kube-controllers-7f4996bdb8-4lhvp" Aug 5 21:37:30.086072 kubelet[3595]: I0805 21:37:30.086030 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07a990f4-f782-4270-aebc-bc009f6009d9-config-volume\") pod \"coredns-5dd5756b68-4f7nz\" (UID: \"07a990f4-f782-4270-aebc-bc009f6009d9\") " pod="kube-system/coredns-5dd5756b68-4f7nz" Aug 5 21:37:30.086334 kubelet[3595]: I0805 21:37:30.086313 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gk4c\" (UniqueName: \"kubernetes.io/projected/03afc324-8123-4d46-966f-b1a0fc0166c5-kube-api-access-6gk4c\") pod \"coredns-5dd5756b68-dwgx8\" (UID: \"03afc324-8123-4d46-966f-b1a0fc0166c5\") " pod="kube-system/coredns-5dd5756b68-dwgx8" Aug 5 21:37:30.086518 kubelet[3595]: I0805 21:37:30.086496 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgqv\" (UniqueName: \"kubernetes.io/projected/07a990f4-f782-4270-aebc-bc009f6009d9-kube-api-access-kqgqv\") pod \"coredns-5dd5756b68-4f7nz\" (UID: \"07a990f4-f782-4270-aebc-bc009f6009d9\") " pod="kube-system/coredns-5dd5756b68-4f7nz" Aug 5 21:37:30.086685 kubelet[3595]: I0805 21:37:30.086664 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b41a057f-c142-46dc-88b9-42e9659de76a-tigera-ca-bundle\") pod \"calico-kube-controllers-7f4996bdb8-4lhvp\" (UID: \"b41a057f-c142-46dc-88b9-42e9659de76a\") " pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" Aug 5 21:37:30.086872 kubelet[3595]: I0805 21:37:30.086853 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03afc324-8123-4d46-966f-b1a0fc0166c5-config-volume\") pod \"coredns-5dd5756b68-dwgx8\" (UID: \"03afc324-8123-4d46-966f-b1a0fc0166c5\") " pod="kube-system/coredns-5dd5756b68-dwgx8" Aug 5 21:37:30.087068 kubelet[3595]: I0805 21:37:30.087027 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzpg\" (UniqueName: \"kubernetes.io/projected/b41a057f-c142-46dc-88b9-42e9659de76a-kube-api-access-fkzpg\") pod \"calico-kube-controllers-7f4996bdb8-4lhvp\" (UID: \"b41a057f-c142-46dc-88b9-42e9659de76a\") " pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" Aug 5 21:37:30.287968 containerd[2153]: time="2024-08-05T21:37:30.286985766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4996bdb8-4lhvp,Uid:b41a057f-c142-46dc-88b9-42e9659de76a,Namespace:calico-system,Attempt:0,}" Aug 5 21:37:30.287968 containerd[2153]: time="2024-08-05T21:37:30.287009406Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-dwgx8,Uid:03afc324-8123-4d46-966f-b1a0fc0166c5,Namespace:kube-system,Attempt:0,}" Aug 5 21:37:30.292709 containerd[2153]: time="2024-08-05T21:37:30.292626654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4f7nz,Uid:07a990f4-f782-4270-aebc-bc009f6009d9,Namespace:kube-system,Attempt:0,}" Aug 5 21:37:30.633210 containerd[2153]: time="2024-08-05T21:37:30.633107852Z" level=error msg="Failed to destroy network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.635060 containerd[2153]: time="2024-08-05T21:37:30.634869008Z" level=error msg="encountered an error cleaning up failed sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.635367 containerd[2153]: time="2024-08-05T21:37:30.635198336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4996bdb8-4lhvp,Uid:b41a057f-c142-46dc-88b9-42e9659de76a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.636035 kubelet[3595]: E0805 21:37:30.635974 3595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.636616 kubelet[3595]: E0805 21:37:30.636080 3595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" Aug 5 21:37:30.636616 kubelet[3595]: E0805 21:37:30.636119 3595 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" Aug 5 21:37:30.636616 kubelet[3595]: E0805 21:37:30.636231 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f4996bdb8-4lhvp_calico-system(b41a057f-c142-46dc-88b9-42e9659de76a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7f4996bdb8-4lhvp_calico-system(b41a057f-c142-46dc-88b9-42e9659de76a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" podUID="b41a057f-c142-46dc-88b9-42e9659de76a" Aug 5 21:37:30.693750 kubelet[3595]: I0805 21:37:30.692103 3595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:30.698113 containerd[2153]: time="2024-08-05T21:37:30.698054636Z" level=info msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" Aug 5 21:37:30.698992 containerd[2153]: time="2024-08-05T21:37:30.698756720Z" level=info msg="Ensure that sandbox 60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2 in task-service has been cleanup successfully" Aug 5 21:37:30.706300 containerd[2153]: time="2024-08-05T21:37:30.706214060Z" level=error msg="Failed to destroy network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.709382 containerd[2153]: time="2024-08-05T21:37:30.708283328Z" level=error msg="encountered an error cleaning up failed sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.709382 containerd[2153]: time="2024-08-05T21:37:30.708387488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4f7nz,Uid:07a990f4-f782-4270-aebc-bc009f6009d9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.710104 kubelet[3595]: E0805 21:37:30.709690 3595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.710104 kubelet[3595]: E0805 21:37:30.709794 3595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4f7nz" Aug 5 21:37:30.710104 kubelet[3595]: E0805 21:37:30.709834 3595 
kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4f7nz" Aug 5 21:37:30.713484 kubelet[3595]: E0805 21:37:30.713328 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-4f7nz_kube-system(07a990f4-f782-4270-aebc-bc009f6009d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-4f7nz_kube-system(07a990f4-f782-4270-aebc-bc009f6009d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4f7nz" podUID="07a990f4-f782-4270-aebc-bc009f6009d9" Aug 5 21:37:30.729565 containerd[2153]: time="2024-08-05T21:37:30.729487340Z" level=error msg="Failed to destroy network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.730450 containerd[2153]: time="2024-08-05T21:37:30.730210196Z" level=error msg="encountered an error cleaning up failed sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.730450 containerd[2153]: time="2024-08-05T21:37:30.730296812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dwgx8,Uid:03afc324-8123-4d46-966f-b1a0fc0166c5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.730679 kubelet[3595]: E0805 21:37:30.730618 3595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.730776 kubelet[3595]: E0805 21:37:30.730688 3595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-dwgx8" Aug 5 
21:37:30.730776 kubelet[3595]: E0805 21:37:30.730752 3595 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-dwgx8" Aug 5 21:37:30.731284 kubelet[3595]: E0805 21:37:30.730852 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-dwgx8_kube-system(03afc324-8123-4d46-966f-b1a0fc0166c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-dwgx8_kube-system(03afc324-8123-4d46-966f-b1a0fc0166c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dwgx8" podUID="03afc324-8123-4d46-966f-b1a0fc0166c5" Aug 5 21:37:30.765994 containerd[2153]: time="2024-08-05T21:37:30.765887624Z" level=error msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" failed" error="failed to destroy network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:30.766650 kubelet[3595]: E0805 21:37:30.766321 3595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:30.766650 kubelet[3595]: E0805 21:37:30.766411 3595 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2"} Aug 5 21:37:30.766650 kubelet[3595]: E0805 21:37:30.766477 3595 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b41a057f-c142-46dc-88b9-42e9659de76a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:37:30.766650 kubelet[3595]: E0805 21:37:30.766531 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b41a057f-c142-46dc-88b9-42e9659de76a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" podUID="b41a057f-c142-46dc-88b9-42e9659de76a" Aug 5 21:37:30.922687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2-shm.mount: Deactivated successfully. Aug 5 21:37:31.029942 containerd[2153]: time="2024-08-05T21:37:31.029869734Z" level=info msg="shim disconnected" id=e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053 namespace=k8s.io Aug 5 21:37:31.031029 containerd[2153]: time="2024-08-05T21:37:31.030916218Z" level=warning msg="cleaning up after shim disconnected" id=e52df26af084d30e77d8df518e5590383dcbad4066c7136ba71dc97b6086d053 namespace=k8s.io Aug 5 21:37:31.031029 containerd[2153]: time="2024-08-05T21:37:31.031005378Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:37:31.389846 containerd[2153]: time="2024-08-05T21:37:31.389286811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcf9z,Uid:0cd76c5e-7095-481c-905a-80c88f6f6b05,Namespace:calico-system,Attempt:0,}" Aug 5 21:37:31.516843 containerd[2153]: time="2024-08-05T21:37:31.516688160Z" level=error msg="Failed to destroy network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.521681 containerd[2153]: time="2024-08-05T21:37:31.517779980Z" level=error msg="encountered an error cleaning up failed sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.521681 containerd[2153]: time="2024-08-05T21:37:31.517862444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcf9z,Uid:0cd76c5e-7095-481c-905a-80c88f6f6b05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.522067 kubelet[3595]: E0805 21:37:31.518291 3595 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.522067 kubelet[3595]: E0805 21:37:31.518384 3595 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:31.522067 kubelet[3595]: E0805 21:37:31.518450 
3595 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wcf9z" Aug 5 21:37:31.522349 kubelet[3595]: E0805 21:37:31.518559 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wcf9z_calico-system(0cd76c5e-7095-481c-905a-80c88f6f6b05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wcf9z_calico-system(0cd76c5e-7095-481c-905a-80c88f6f6b05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:31.526684 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f-shm.mount: Deactivated successfully. Aug 5 21:37:31.700872 containerd[2153]: time="2024-08-05T21:37:31.700158597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:37:31.702483 kubelet[3595]: I0805 21:37:31.701739 3595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:31.703402 containerd[2153]: time="2024-08-05T21:37:31.703342605Z" level=info msg="StopPodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" Aug 5 21:37:31.703764 containerd[2153]: time="2024-08-05T21:37:31.703694985Z" level=info msg="Ensure that sandbox 30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f in task-service has been cleanup successfully" Aug 5 21:37:31.708686 kubelet[3595]: I0805 21:37:31.707882 3595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:31.711800 containerd[2153]: time="2024-08-05T21:37:31.711417261Z" level=info msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" Aug 5 21:37:31.714162 containerd[2153]: time="2024-08-05T21:37:31.714103569Z" level=info msg="Ensure that sandbox 7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6 in task-service has been cleanup successfully" Aug 5 21:37:31.717348 kubelet[3595]: I0805 21:37:31.715652 3595 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:31.717526 containerd[2153]: time="2024-08-05T21:37:31.716740989Z" level=info msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" Aug 5 21:37:31.717526 containerd[2153]: time="2024-08-05T21:37:31.717074085Z" level=info msg="Ensure that sandbox a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14 in task-service has been cleanup successfully" Aug 5 21:37:31.828314 containerd[2153]: time="2024-08-05T21:37:31.828157378Z" level=error msg="StopPodSandbox for 
\"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" failed" error="failed to destroy network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.828897 containerd[2153]: time="2024-08-05T21:37:31.828773002Z" level=error msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" failed" error="failed to destroy network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.829010 kubelet[3595]: E0805 21:37:31.828988 3595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:31.829115 kubelet[3595]: E0805 21:37:31.829082 3595 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f"} Aug 5 21:37:31.829218 kubelet[3595]: E0805 21:37:31.829187 3595 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0cd76c5e-7095-481c-905a-80c88f6f6b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:37:31.829354 kubelet[3595]: E0805 21:37:31.829298 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0cd76c5e-7095-481c-905a-80c88f6f6b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wcf9z" podUID="0cd76c5e-7095-481c-905a-80c88f6f6b05" Aug 5 21:37:31.831317 kubelet[3595]: E0805 21:37:31.829705 3595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:31.831455 kubelet[3595]: E0805 21:37:31.831393 3595 kuberuntime_manager.go:1380] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14"} Aug 5 21:37:31.831538 kubelet[3595]: E0805 21:37:31.831514 3595 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"03afc324-8123-4d46-966f-b1a0fc0166c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:37:31.831669 kubelet[3595]: E0805 21:37:31.831603 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"03afc324-8123-4d46-966f-b1a0fc0166c5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-dwgx8" podUID="03afc324-8123-4d46-966f-b1a0fc0166c5" Aug 5 21:37:31.839299 containerd[2153]: time="2024-08-05T21:37:31.839233858Z" level=error msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" failed" error="failed to destroy network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:37:31.840031 kubelet[3595]: E0805 21:37:31.839792 3595 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:31.840031 kubelet[3595]: E0805 21:37:31.839852 3595 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6"} Aug 5 21:37:31.840031 kubelet[3595]: E0805 21:37:31.839917 3595 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07a990f4-f782-4270-aebc-bc009f6009d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:37:31.840031 kubelet[3595]: E0805 21:37:31.839973 3595 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07a990f4-f782-4270-aebc-bc009f6009d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4f7nz" podUID="07a990f4-f782-4270-aebc-bc009f6009d9" Aug 5 21:37:37.427403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount495361019.mount: Deactivated successfully. Aug 5 21:37:37.574504 containerd[2153]: time="2024-08-05T21:37:37.573955226Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:37.575673 containerd[2153]: time="2024-08-05T21:37:37.575495762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:37:37.577368 containerd[2153]: time="2024-08-05T21:37:37.577264970Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:37.582817 containerd[2153]: time="2024-08-05T21:37:37.582672686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:37.584409 containerd[2153]: time="2024-08-05T21:37:37.584101790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 5.883869333s" Aug 5 21:37:37.584409 containerd[2153]: time="2024-08-05T21:37:37.584206274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:37:37.612830 containerd[2153]: time="2024-08-05T21:37:37.611542526Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:37:37.644476 containerd[2153]: time="2024-08-05T21:37:37.644308850Z" level=info msg="CreateContainer within sandbox \"0a7f87dbcfc94cd5362109bbcf129b39fb369f70122bcb925c5d13a53c5a87a3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4814130884d1ba9f60da892623d5bc745509fa8f2c8867ddfef9fb23c3d5f8fd\"" Aug 5 21:37:37.649166 containerd[2153]: time="2024-08-05T21:37:37.646341338Z" level=info msg="StartContainer for \"4814130884d1ba9f60da892623d5bc745509fa8f2c8867ddfef9fb23c3d5f8fd\"" Aug 5 21:37:37.648695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1483680982.mount: Deactivated successfully. Aug 5 21:37:37.769750 containerd[2153]: time="2024-08-05T21:37:37.768624495Z" level=info msg="StartContainer for \"4814130884d1ba9f60da892623d5bc745509fa8f2c8867ddfef9fb23c3d5f8fd\" returns successfully" Aug 5 21:37:37.889146 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:37:37.889308 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 21:37:38.784935 kubelet[3595]: I0805 21:37:38.784863 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-lz792" podStartSLOduration=2.601901109 podCreationTimestamp="2024-08-05 21:37:17 +0000 UTC" firstStartedPulling="2024-08-05 21:37:18.402087547 +0000 UTC m=+21.319492631" lastFinishedPulling="2024-08-05 21:37:37.58498751 +0000 UTC m=+40.502392594" observedRunningTime="2024-08-05 21:37:38.781165744 +0000 UTC m=+41.698570840" watchObservedRunningTime="2024-08-05 21:37:38.784801072 +0000 UTC m=+41.702206192" Aug 5 21:37:44.386083 containerd[2153]: time="2024-08-05T21:37:44.386011604Z" level=info msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.523 [INFO][4820] k8s.go 608: Cleaning up netns ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.524 [INFO][4820] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" iface="eth0" netns="/var/run/netns/cni-e5c892a9-f279-911f-2638-212aa8368250" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.525 [INFO][4820] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" iface="eth0" netns="/var/run/netns/cni-e5c892a9-f279-911f-2638-212aa8368250" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.525 [INFO][4820] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" iface="eth0" netns="/var/run/netns/cni-e5c892a9-f279-911f-2638-212aa8368250" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.525 [INFO][4820] k8s.go 615: Releasing IP address(es) ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.525 [INFO][4820] utils.go 188: Calico CNI releasing IP address ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.582 [INFO][4833] ipam_plugin.go 411: Releasing address using handleID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.583 [INFO][4833] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.583 [INFO][4833] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.596 [WARNING][4833] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.596 [INFO][4833] ipam_plugin.go 439: Releasing address using workloadID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.600 [INFO][4833] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:44.613329 containerd[2153]: 2024-08-05 21:37:44.605 [INFO][4820] k8s.go 621: Teardown processing complete. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:44.614481 containerd[2153]: time="2024-08-05T21:37:44.614256033Z" level=info msg="TearDown network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" successfully" Aug 5 21:37:44.614481 containerd[2153]: time="2024-08-05T21:37:44.614301081Z" level=info msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" returns successfully" Aug 5 21:37:44.615702 containerd[2153]: time="2024-08-05T21:37:44.615544161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4996bdb8-4lhvp,Uid:b41a057f-c142-46dc-88b9-42e9659de76a,Namespace:calico-system,Attempt:1,}" Aug 5 21:37:44.622977 systemd[1]: run-netns-cni\x2de5c892a9\x2df279\x2d911f\x2d2638\x2d212aa8368250.mount: Deactivated successfully. Aug 5 21:37:44.855305 systemd-networkd[1694]: cali776c936443d: Link UP Aug 5 21:37:44.857833 systemd-networkd[1694]: cali776c936443d: Gained carrier Aug 5 21:37:44.872074 (udev-worker)[4867]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.690 [INFO][4845] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.715 [INFO][4845] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0 calico-kube-controllers-7f4996bdb8- calico-system b41a057f-c142-46dc-88b9-42e9659de76a 718 0 2024-08-05 21:37:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f4996bdb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-22-168 calico-kube-controllers-7f4996bdb8-4lhvp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali776c936443d [] []}} ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.715 [INFO][4845] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.766 [INFO][4858] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" HandleID="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.785 [INFO][4858] ipam_plugin.go 264: Auto assigning IP ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" HandleID="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028ec90), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-168", "pod":"calico-kube-controllers-7f4996bdb8-4lhvp", "timestamp":"2024-08-05 21:37:44.76610563 +0000 UTC"}, Hostname:"ip-172-31-22-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.785 [INFO][4858] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.785 [INFO][4858] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.785 [INFO][4858] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-168' Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.788 [INFO][4858] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.794 [INFO][4858] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.802 [INFO][4858] ipam.go 489: Trying affinity for 192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.807 [INFO][4858] ipam.go 155: Attempting to load block cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.812 [INFO][4858] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.812 [INFO][4858] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.816 [INFO][4858] ipam.go 1685: Creating new handle: k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3 Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.824 [INFO][4858] ipam.go 1203: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.834 [INFO][4858] ipam.go 1216: Successfully claimed IPs: [192.168.93.1/26] block=192.168.93.0/26 handle="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.835 [INFO][4858] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.1/26] handle="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" host="ip-172-31-22-168" Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.835 [INFO][4858] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:44.882619 containerd[2153]: 2024-08-05 21:37:44.835 [INFO][4858] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.93.1/26] IPv6=[] ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" HandleID="k8s-pod-network.513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.841 [INFO][4845] k8s.go 386: Populated endpoint ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0", GenerateName:"calico-kube-controllers-7f4996bdb8-", Namespace:"calico-system", SelfLink:"", UID:"b41a057f-c142-46dc-88b9-42e9659de76a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4996bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"", Pod:"calico-kube-controllers-7f4996bdb8-4lhvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali776c936443d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.841 [INFO][4845] k8s.go 387: Calico CNI using IPs: [192.168.93.1/32] ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.841 [INFO][4845] dataplane_linux.go 68: Setting the host side veth name to cali776c936443d ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.858 [INFO][4845] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.859 [INFO][4845] k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0", GenerateName:"calico-kube-controllers-7f4996bdb8-", Namespace:"calico-system", SelfLink:"", UID:"b41a057f-c142-46dc-88b9-42e9659de76a", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4996bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3", Pod:"calico-kube-controllers-7f4996bdb8-4lhvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali776c936443d", MAC:"7a:c1:2b:95:51:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:44.884983 containerd[2153]: 2024-08-05 21:37:44.877 [INFO][4845] k8s.go 500: Wrote updated endpoint to datastore ContainerID="513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3" Namespace="calico-system" Pod="calico-kube-controllers-7f4996bdb8-4lhvp" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:44.931064 containerd[2153]: time="2024-08-05T21:37:44.930153935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:44.931064 containerd[2153]: time="2024-08-05T21:37:44.930278303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:44.931064 containerd[2153]: time="2024-08-05T21:37:44.930345419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:44.931064 containerd[2153]: time="2024-08-05T21:37:44.930410387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:45.025307 containerd[2153]: time="2024-08-05T21:37:45.025236607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f4996bdb8-4lhvp,Uid:b41a057f-c142-46dc-88b9-42e9659de76a,Namespace:calico-system,Attempt:1,} returns sandbox id \"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3\"" Aug 5 21:37:45.029774 containerd[2153]: time="2024-08-05T21:37:45.029670931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 21:37:45.388749 containerd[2153]: time="2024-08-05T21:37:45.384290745Z" level=info msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.472 [INFO][4937] k8s.go 608: Cleaning up netns ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.474 [INFO][4937] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" iface="eth0" netns="/var/run/netns/cni-04d1d6ee-f3f3-aec0-bca9-2ca00d1798e3" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.474 [INFO][4937] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" iface="eth0" netns="/var/run/netns/cni-04d1d6ee-f3f3-aec0-bca9-2ca00d1798e3" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.474 [INFO][4937] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" iface="eth0" netns="/var/run/netns/cni-04d1d6ee-f3f3-aec0-bca9-2ca00d1798e3" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.474 [INFO][4937] k8s.go 615: Releasing IP address(es) ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.474 [INFO][4937] utils.go 188: Calico CNI releasing IP address ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.537 [INFO][4943] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.538 [INFO][4943] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.538 [INFO][4943] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.558 [WARNING][4943] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.558 [INFO][4943] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.563 [INFO][4943] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:45.574956 containerd[2153]: 2024-08-05 21:37:45.571 [INFO][4937] k8s.go 621: Teardown processing complete. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:45.574956 containerd[2153]: time="2024-08-05T21:37:45.574481518Z" level=info msg="TearDown network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" successfully" Aug 5 21:37:45.574956 containerd[2153]: time="2024-08-05T21:37:45.574569238Z" level=info msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" returns successfully" Aug 5 21:37:45.579204 containerd[2153]: time="2024-08-05T21:37:45.578491174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4f7nz,Uid:07a990f4-f782-4270-aebc-bc009f6009d9,Namespace:kube-system,Attempt:1,}" Aug 5 21:37:45.624660 systemd[1]: run-netns-cni\x2d04d1d6ee\x2df3f3\x2daec0\x2dbca9\x2d2ca00d1798e3.mount: Deactivated successfully. Aug 5 21:37:45.889273 (udev-worker)[4866]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 21:37:45.893196 systemd-networkd[1694]: calicd978bf2beb: Link UP Aug 5 21:37:45.893655 systemd-networkd[1694]: calicd978bf2beb: Gained carrier Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.663 [INFO][4957] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.694 [INFO][4957] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0 coredns-5dd5756b68- kube-system 07a990f4-f782-4270-aebc-bc009f6009d9 725 0 2024-08-05 21:37:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-168 coredns-5dd5756b68-4f7nz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd978bf2beb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.694 [INFO][4957] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.778 [INFO][4975] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" HandleID="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.803 [INFO][4975] ipam_plugin.go 264: Auto assigning IP ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" HandleID="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035b1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-168", "pod":"coredns-5dd5756b68-4f7nz", "timestamp":"2024-08-05 21:37:45.778252067 +0000 UTC"}, Hostname:"ip-172-31-22-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.831 [INFO][4975] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.831 [INFO][4975] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.832 [INFO][4975] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-168' Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.836 [INFO][4975] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.845 [INFO][4975] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.853 [INFO][4975] ipam.go 489: Trying affinity for 192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.857 [INFO][4975] ipam.go 155: Attempting to load block cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.861 [INFO][4975] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.861 [INFO][4975] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.865 [INFO][4975] ipam.go 1685: Creating new handle: k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248 Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.873 [INFO][4975] ipam.go 1203: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.880 [INFO][4975] ipam.go 1216: Successfully claimed IPs: [192.168.93.2/26] block=192.168.93.0/26 handle="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.881 [INFO][4975] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.2/26] handle="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" host="ip-172-31-22-168" Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.881 [INFO][4975] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:45.922467 containerd[2153]: 2024-08-05 21:37:45.881 [INFO][4975] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.93.2/26] IPv6=[] ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" HandleID="k8s-pod-network.811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 21:37:45.885 [INFO][4957] k8s.go 386: Populated endpoint ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"07a990f4-f782-4270-aebc-bc009f6009d9", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"", Pod:"coredns-5dd5756b68-4f7nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd978bf2beb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 21:37:45.885 [INFO][4957] k8s.go 387: Calico CNI using IPs: [192.168.93.2/32] ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 21:37:45.886 [INFO][4957] dataplane_linux.go 68: Setting the host side veth name to calicd978bf2beb ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 21:37:45.894 [INFO][4957] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 
21:37:45.897 [INFO][4957] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"07a990f4-f782-4270-aebc-bc009f6009d9", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248", Pod:"coredns-5dd5756b68-4f7nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd978bf2beb", MAC:"d2:53:d5:14:af:6a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:45.924797 containerd[2153]: 2024-08-05 21:37:45.916 [INFO][4957] k8s.go 500: Wrote updated endpoint to datastore ContainerID="811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248" Namespace="kube-system" Pod="coredns-5dd5756b68-4f7nz" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:45.959546 containerd[2153]: time="2024-08-05T21:37:45.959302512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:45.959546 containerd[2153]: time="2024-08-05T21:37:45.959471364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:45.959546 containerd[2153]: time="2024-08-05T21:37:45.959536776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:45.959546 containerd[2153]: time="2024-08-05T21:37:45.959584248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:46.064268 containerd[2153]: time="2024-08-05T21:37:46.064136588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4f7nz,Uid:07a990f4-f782-4270-aebc-bc009f6009d9,Namespace:kube-system,Attempt:1,} returns sandbox id \"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248\"" Aug 5 21:37:46.071329 containerd[2153]: time="2024-08-05T21:37:46.071164220Z" level=info msg="CreateContainer within sandbox \"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:37:46.087837 kubelet[3595]: I0805 21:37:46.087801 3595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:37:46.103018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3899764799.mount: Deactivated successfully. Aug 5 21:37:46.105834 containerd[2153]: time="2024-08-05T21:37:46.105711044Z" level=info msg="CreateContainer within sandbox \"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9535446bde1057111dbad882f4a67d2502792ed52aa508bfc794063fab98525d\"" Aug 5 21:37:46.109559 containerd[2153]: time="2024-08-05T21:37:46.109198448Z" level=info msg="StartContainer for \"9535446bde1057111dbad882f4a67d2502792ed52aa508bfc794063fab98525d\"" Aug 5 21:37:46.327500 containerd[2153]: time="2024-08-05T21:37:46.326937346Z" level=info msg="StartContainer for \"9535446bde1057111dbad882f4a67d2502792ed52aa508bfc794063fab98525d\" returns successfully" Aug 5 21:37:46.351991 systemd-networkd[1694]: cali776c936443d: Gained IPv6LL Aug 5 21:37:46.674980 kubelet[3595]: I0805 21:37:46.673572 3595 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:37:46.834755 kubelet[3595]: I0805 21:37:46.833675 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4f7nz" podStartSLOduration=36.833617212 podCreationTimestamp="2024-08-05 21:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:37:46.827888832 +0000 UTC m=+49.745293976" watchObservedRunningTime="2024-08-05 21:37:46.833617212 +0000 UTC m=+49.751022296" Aug 5 21:37:47.186678 systemd-networkd[1694]: calicd978bf2beb: Gained IPv6LL Aug 5 21:37:47.391230 containerd[2153]: time="2024-08-05T21:37:47.391125275Z" level=info msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" Aug 5 21:37:47.396802 containerd[2153]: time="2024-08-05T21:37:47.394945163Z" level=info msg="StopPodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.633 [INFO][5193] k8s.go 608: Cleaning up netns ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.633 [INFO][5193] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" iface="eth0" netns="/var/run/netns/cni-4293f6e8-c317-a163-a7e0-4f82bae47fcd" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.634 [INFO][5193] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" iface="eth0" netns="/var/run/netns/cni-4293f6e8-c317-a163-a7e0-4f82bae47fcd" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.636 [INFO][5193] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" iface="eth0" netns="/var/run/netns/cni-4293f6e8-c317-a163-a7e0-4f82bae47fcd" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.636 [INFO][5193] k8s.go 615: Releasing IP address(es) ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.637 [INFO][5193] utils.go 188: Calico CNI releasing IP address ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.876 [INFO][5208] ipam_plugin.go 411: Releasing address using handleID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.879 [INFO][5208] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.883 [INFO][5208] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.913 [WARNING][5208] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.914 [INFO][5208] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.922 [INFO][5208] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:47.937270 containerd[2153]: 2024-08-05 21:37:47.928 [INFO][5193] k8s.go 621: Teardown processing complete. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:47.946147 containerd[2153]: time="2024-08-05T21:37:47.945245342Z" level=info msg="TearDown network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" successfully" Aug 5 21:37:47.946147 containerd[2153]: time="2024-08-05T21:37:47.945306026Z" level=info msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" returns successfully" Aug 5 21:37:47.949290 containerd[2153]: time="2024-08-05T21:37:47.949231790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dwgx8,Uid:03afc324-8123-4d46-966f-b1a0fc0166c5,Namespace:kube-system,Attempt:1,}" Aug 5 21:37:47.959270 systemd[1]: run-netns-cni\x2d4293f6e8\x2dc317\x2da163\x2da7e0\x2d4f82bae47fcd.mount: Deactivated successfully. 
Aug 5 21:37:48.348915 systemd-networkd[1694]: vxlan.calico: Link UP Aug 5 21:37:48.348937 systemd-networkd[1694]: vxlan.calico: Gained carrier Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.867 [INFO][5199] k8s.go 608: Cleaning up netns ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.869 [INFO][5199] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" iface="eth0" netns="/var/run/netns/cni-8b1816bd-df8b-14ec-9349-5773941a10bb" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.869 [INFO][5199] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" iface="eth0" netns="/var/run/netns/cni-8b1816bd-df8b-14ec-9349-5773941a10bb" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.870 [INFO][5199] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" iface="eth0" netns="/var/run/netns/cni-8b1816bd-df8b-14ec-9349-5773941a10bb" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.870 [INFO][5199] k8s.go 615: Releasing IP address(es) ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:47.870 [INFO][5199] utils.go 188: Calico CNI releasing IP address ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.409 [INFO][5232] ipam_plugin.go 411: Releasing address using handleID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.413 [INFO][5232] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.416 [INFO][5232] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.452 [WARNING][5232] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.453 [INFO][5232] ipam_plugin.go 439: Releasing address using workloadID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.457 [INFO][5232] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:48.509599 containerd[2153]: 2024-08-05 21:37:48.499 [INFO][5199] k8s.go 621: Teardown processing complete. 
ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:48.512678 containerd[2153]: time="2024-08-05T21:37:48.510891876Z" level=info msg="TearDown network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" successfully" Aug 5 21:37:48.512678 containerd[2153]: time="2024-08-05T21:37:48.510937860Z" level=info msg="StopPodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" returns successfully" Aug 5 21:37:48.518278 containerd[2153]: time="2024-08-05T21:37:48.516567180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcf9z,Uid:0cd76c5e-7095-481c-905a-80c88f6f6b05,Namespace:calico-system,Attempt:1,}" Aug 5 21:37:48.517760 systemd[1]: run-netns-cni\x2d8b1816bd\x2ddf8b\x2d14ec\x2d9349\x2d5773941a10bb.mount: Deactivated successfully. Aug 5 21:37:49.330559 systemd-networkd[1694]: calidee6d524a79: Link UP Aug 5 21:37:49.335199 systemd-networkd[1694]: calidee6d524a79: Gained carrier Aug 5 21:37:49.361834 systemd-resolved[2018]: Under memory pressure, flushing caches. Aug 5 21:37:49.367074 systemd-journald[1607]: Under memory pressure, flushing caches. Aug 5 21:37:49.361948 systemd-resolved[2018]: Flushed all caches. Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:48.625 [INFO][5253] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0 coredns-5dd5756b68- kube-system 03afc324-8123-4d46-966f-b1a0fc0166c5 754 0 2024-08-05 21:37:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-22-168 coredns-5dd5756b68-dwgx8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidee6d524a79 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:48.625 [INFO][5253] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.088 [INFO][5302] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" HandleID="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.129 [INFO][5302] ipam_plugin.go 264: Auto assigning IP ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" HandleID="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035de20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-22-168", "pod":"coredns-5dd5756b68-dwgx8", "timestamp":"2024-08-05 21:37:49.087979967 +0000 UTC"}, Hostname:"ip-172-31-22-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.129 [INFO][5302] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.129 [INFO][5302] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.129 [INFO][5302] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-168' Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.139 [INFO][5302] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.155 [INFO][5302] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.183 [INFO][5302] ipam.go 489: Trying affinity for 192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.193 [INFO][5302] ipam.go 155: Attempting to load block cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.202 [INFO][5302] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.203 [INFO][5302] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.211 [INFO][5302] ipam.go 1685: Creating new handle: k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.228 [INFO][5302] ipam.go 1203: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.250 [INFO][5302] ipam.go 1216: Successfully claimed IPs: [192.168.93.3/26] block=192.168.93.0/26 handle="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.253 [INFO][5302] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.3/26] handle="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" host="ip-172-31-22-168" Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.255 [INFO][5302] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:49.417225 containerd[2153]: 2024-08-05 21:37:49.255 [INFO][5302] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.93.3/26] IPv6=[] ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" HandleID="k8s-pod-network.08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 21:37:49.278 [INFO][5253] k8s.go 386: Populated endpoint ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"03afc324-8123-4d46-966f-b1a0fc0166c5", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"", Pod:"coredns-5dd5756b68-dwgx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee6d524a79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 21:37:49.282 [INFO][5253] k8s.go 387: Calico CNI using IPs: [192.168.93.3/32] ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 21:37:49.284 [INFO][5253] dataplane_linux.go 68: Setting the host side veth name to calidee6d524a79 ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 21:37:49.308 [INFO][5253] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 
21:37:49.310 [INFO][5253] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"03afc324-8123-4d46-966f-b1a0fc0166c5", ResourceVersion:"754", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec", Pod:"coredns-5dd5756b68-dwgx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee6d524a79", MAC:"52:3e:d8:f7:10:80", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:49.428880 containerd[2153]: 2024-08-05 21:37:49.394 [INFO][5253] k8s.go 500: Wrote updated endpoint to datastore ContainerID="08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec" Namespace="kube-system" Pod="coredns-5dd5756b68-dwgx8" WorkloadEndpoint="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:49.603184 containerd[2153]: time="2024-08-05T21:37:49.600662558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:49.603184 containerd[2153]: time="2024-08-05T21:37:49.601557542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:49.603184 containerd[2153]: time="2024-08-05T21:37:49.601875578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:49.608834 containerd[2153]: time="2024-08-05T21:37:49.601923290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:49.681052 systemd-networkd[1694]: vxlan.calico: Gained IPv6LL Aug 5 21:37:49.855874 systemd-networkd[1694]: cali9b4913fb560: Link UP Aug 5 21:37:49.861550 systemd-networkd[1694]: cali9b4913fb560: Gained carrier Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.168 [INFO][5286] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0 csi-node-driver- calico-system 0cd76c5e-7095-481c-905a-80c88f6f6b05 755 0 2024-08-05 21:37:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-22-168 csi-node-driver-wcf9z eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali9b4913fb560 [] []}} ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.168 [INFO][5286] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.538 [INFO][5329] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" HandleID="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.586 [INFO][5329] ipam_plugin.go 264: Auto assigning IP ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" HandleID="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c390), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-22-168", "pod":"csi-node-driver-wcf9z", "timestamp":"2024-08-05 21:37:49.538626434 +0000 UTC"}, Hostname:"ip-172-31-22-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.586 [INFO][5329] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.587 [INFO][5329] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.588 [INFO][5329] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-168' Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.596 [INFO][5329] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.617 [INFO][5329] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.659 [INFO][5329] ipam.go 489: Trying affinity for 192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.685 [INFO][5329] ipam.go 155: Attempting to load block cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.698 [INFO][5329] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.699 [INFO][5329] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.740 [INFO][5329] ipam.go 1685: Creating new handle: k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7 Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.776 [INFO][5329] ipam.go 1203: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.797 [INFO][5329] ipam.go 1216: Successfully claimed IPs: [192.168.93.4/26] block=192.168.93.0/26 handle="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.797 [INFO][5329] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.4/26] handle="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" host="ip-172-31-22-168" Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.797 [INFO][5329] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:37:49.941119 containerd[2153]: 2024-08-05 21:37:49.797 [INFO][5329] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.93.4/26] IPv6=[] ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" HandleID="k8s-pod-network.b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.821 [INFO][5286] k8s.go 386: Populated endpoint ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cd76c5e-7095-481c-905a-80c88f6f6b05", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"", Pod:"csi-node-driver-wcf9z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b4913fb560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.822 [INFO][5286] k8s.go 387: Calico CNI using IPs: [192.168.93.4/32] ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.823 [INFO][5286] dataplane_linux.go 68: Setting the host side veth name to cali9b4913fb560 ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.867 [INFO][5286] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.873 [INFO][5286] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cd76c5e-7095-481c-905a-80c88f6f6b05", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7", Pod:"csi-node-driver-wcf9z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b4913fb560", MAC:"72:54:64:ad:87:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:49.944217 containerd[2153]: 2024-08-05 21:37:49.923 [INFO][5286] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7" Namespace="calico-system" Pod="csi-node-driver-wcf9z" WorkloadEndpoint="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:50.000942 containerd[2153]: time="2024-08-05T21:37:49.997493860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-dwgx8,Uid:03afc324-8123-4d46-966f-b1a0fc0166c5,Namespace:kube-system,Attempt:1,} returns sandbox id \"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec\"" Aug 5 21:37:50.062887 containerd[2153]: time="2024-08-05T21:37:50.058127628Z" level=info msg="CreateContainer within sandbox \"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:37:50.073365 containerd[2153]: time="2024-08-05T21:37:50.069235248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:37:50.073365 containerd[2153]: time="2024-08-05T21:37:50.069336108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:50.073365 containerd[2153]: time="2024-08-05T21:37:50.069377028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:37:50.073365 containerd[2153]: time="2024-08-05T21:37:50.069410880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:37:50.149432 containerd[2153]: time="2024-08-05T21:37:50.147477793Z" level=info msg="CreateContainer within sandbox \"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"916d34fd8ee916bcc05b1570228a33f93a50da74eba919054c19169a9b896b40\"" Aug 5 21:37:50.158448 containerd[2153]: time="2024-08-05T21:37:50.151366345Z" level=info msg="StartContainer for \"916d34fd8ee916bcc05b1570228a33f93a50da74eba919054c19169a9b896b40\"" Aug 5 21:37:50.412889 containerd[2153]: time="2024-08-05T21:37:50.412245674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wcf9z,Uid:0cd76c5e-7095-481c-905a-80c88f6f6b05,Namespace:calico-system,Attempt:1,} returns sandbox id \"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7\"" Aug 5 21:37:50.499753 containerd[2153]: time="2024-08-05T21:37:50.498297614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:50.504455 containerd[2153]: time="2024-08-05T21:37:50.502793834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:37:50.511920 containerd[2153]: time="2024-08-05T21:37:50.511312622Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:50.520757 containerd[2153]: time="2024-08-05T21:37:50.519106586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:50.526206 containerd[2153]: time="2024-08-05T21:37:50.526118642Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 5.496342891s" Aug 5 21:37:50.526206 containerd[2153]: time="2024-08-05T21:37:50.526193690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:37:50.533377 containerd[2153]: time="2024-08-05T21:37:50.533306054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:37:50.594467 containerd[2153]: time="2024-08-05T21:37:50.588910527Z" level=info msg="CreateContainer within sandbox \"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:37:50.637955 systemd[1]: Started sshd@7-172.31.22.168:22-139.178.68.195:38082.service - OpenSSH per-connection server daemon (139.178.68.195:38082). 
Aug 5 21:37:50.666088 containerd[2153]: time="2024-08-05T21:37:50.665937795Z" level=info msg="StartContainer for \"916d34fd8ee916bcc05b1570228a33f93a50da74eba919054c19169a9b896b40\" returns successfully" Aug 5 21:37:50.681835 containerd[2153]: time="2024-08-05T21:37:50.678090147Z" level=info msg="CreateContainer within sandbox \"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"46697af7d43a094e17c3f24b5a362b6c83de21d746d3e2b681f96279915f0525\"" Aug 5 21:37:50.699515 containerd[2153]: time="2024-08-05T21:37:50.683191107Z" level=info msg="StartContainer for \"46697af7d43a094e17c3f24b5a362b6c83de21d746d3e2b681f96279915f0525\"" Aug 5 21:37:50.978017 sshd[5498]: Accepted publickey for core from 139.178.68.195 port 38082 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:50.987577 sshd[5498]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:51.025826 systemd-logind[2118]: New session 8 of user core. Aug 5 21:37:51.040169 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:37:51.088335 systemd-networkd[1694]: calidee6d524a79: Gained IPv6LL Aug 5 21:37:51.095747 kubelet[3595]: I0805 21:37:51.094314 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-dwgx8" podStartSLOduration=41.094059853 podCreationTimestamp="2024-08-05 21:37:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:37:51.055078837 +0000 UTC m=+53.972483933" watchObservedRunningTime="2024-08-05 21:37:51.094059853 +0000 UTC m=+54.011464949" Aug 5 21:37:51.151980 systemd-networkd[1694]: cali9b4913fb560: Gained IPv6LL Aug 5 21:37:51.447696 containerd[2153]: time="2024-08-05T21:37:51.446597655Z" level=info msg="StartContainer for \"46697af7d43a094e17c3f24b5a362b6c83de21d746d3e2b681f96279915f0525\" returns successfully" Aug 5 21:37:51.612526 sshd[5498]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:51.623294 systemd[1]: sshd@7-172.31.22.168:22-139.178.68.195:38082.service: Deactivated successfully. Aug 5 21:37:51.636555 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:37:51.638120 systemd-logind[2118]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:37:51.648283 systemd-logind[2118]: Removed session 8. 
Aug 5 21:37:52.281486 kubelet[3595]: I0805 21:37:52.278089 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f4996bdb8-4lhvp" podStartSLOduration=28.777402984 podCreationTimestamp="2024-08-05 21:37:18 +0000 UTC" firstStartedPulling="2024-08-05 21:37:45.027454483 +0000 UTC m=+47.944859579" lastFinishedPulling="2024-08-05 21:37:50.528079226 +0000 UTC m=+53.445484298" observedRunningTime="2024-08-05 21:37:52.047585426 +0000 UTC m=+54.964990534" watchObservedRunningTime="2024-08-05 21:37:52.278027703 +0000 UTC m=+55.195432787" Aug 5 21:37:52.399838 containerd[2153]: time="2024-08-05T21:37:52.399772132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:52.404949 containerd[2153]: time="2024-08-05T21:37:52.404875096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 21:37:52.407027 containerd[2153]: time="2024-08-05T21:37:52.406223440Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:52.411995 containerd[2153]: time="2024-08-05T21:37:52.411925192Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:52.417206 containerd[2153]: time="2024-08-05T21:37:52.415955212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.88257837s" Aug 5 21:37:52.417206 containerd[2153]: time="2024-08-05T21:37:52.416017744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:37:52.424756 containerd[2153]: time="2024-08-05T21:37:52.423618676Z" level=info msg="CreateContainer within sandbox \"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:37:52.472170 containerd[2153]: time="2024-08-05T21:37:52.472104196Z" level=info msg="CreateContainer within sandbox \"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7c0a135881d2cf62307bd1edd511ef053bb336315353d57c5748d0d641cfb5fe\"" Aug 5 21:37:52.474643 containerd[2153]: time="2024-08-05T21:37:52.473583652Z" level=info msg="StartContainer for \"7c0a135881d2cf62307bd1edd511ef053bb336315353d57c5748d0d641cfb5fe\"" Aug 5 21:37:52.685086 containerd[2153]: time="2024-08-05T21:37:52.685015661Z" level=info msg="StartContainer for \"7c0a135881d2cf62307bd1edd511ef053bb336315353d57c5748d0d641cfb5fe\" returns successfully" Aug 5 21:37:52.690362 containerd[2153]: time="2024-08-05T21:37:52.690276245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:37:54.110804 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.93.0:123 Aug 5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 6 vxlan.calico 192.168.93.0:123 Aug 
5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 7 cali776c936443d [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 8 calicd978bf2beb [fe80::ecee:eeff:feee:eeee%5]:123 Aug 5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 9 vxlan.calico [fe80::64eb:1dff:fe6c:93cf%6]:123 Aug 5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 10 calidee6d524a79 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:37:54.116650 ntpd[2101]: 5 Aug 21:37:54 ntpd[2101]: Listen normally on 11 cali9b4913fb560 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:37:54.110974 ntpd[2101]: Listen normally on 7 cali776c936443d [fe80::ecee:eeff:feee:eeee%4]:123 Aug 5 21:37:54.111067 ntpd[2101]: Listen normally on 8 calicd978bf2beb [fe80::ecee:eeff:feee:eeee%5]:123 Aug 5 21:37:54.111135 ntpd[2101]: Listen normally on 9 vxlan.calico [fe80::64eb:1dff:fe6c:93cf%6]:123 Aug 5 21:37:54.111205 ntpd[2101]: Listen normally on 10 calidee6d524a79 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 21:37:54.111274 ntpd[2101]: Listen normally on 11 cali9b4913fb560 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 21:37:54.365797 containerd[2153]: time="2024-08-05T21:37:54.365602913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:54.368624 containerd[2153]: time="2024-08-05T21:37:54.368548745Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:37:54.370760 containerd[2153]: time="2024-08-05T21:37:54.370673994Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:54.383274 containerd[2153]: time="2024-08-05T21:37:54.383160342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:37:54.388087 containerd[2153]: time="2024-08-05T21:37:54.386743590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.696352097s" Aug 5 21:37:54.388087 containerd[2153]: time="2024-08-05T21:37:54.386812002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:37:54.394406 containerd[2153]: time="2024-08-05T21:37:54.394158474Z" level=info msg="CreateContainer within sandbox \"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:37:54.424907 containerd[2153]: time="2024-08-05T21:37:54.423496098Z" level=info msg="CreateContainer within sandbox \"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5eeecb30918c52b7fdbbac4eb9098cf2215672f17d9457c35635a9aa7fafbcc9\"" Aug 5 
21:37:54.431319 containerd[2153]: time="2024-08-05T21:37:54.431238054Z" level=info msg="StartContainer for \"5eeecb30918c52b7fdbbac4eb9098cf2215672f17d9457c35635a9aa7fafbcc9\"" Aug 5 21:37:54.655785 containerd[2153]: time="2024-08-05T21:37:54.653580115Z" level=info msg="StartContainer for \"5eeecb30918c52b7fdbbac4eb9098cf2215672f17d9457c35635a9aa7fafbcc9\" returns successfully" Aug 5 21:37:54.696203 kubelet[3595]: I0805 21:37:54.696129 3595 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 21:37:54.698138 kubelet[3595]: I0805 21:37:54.697631 3595 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:37:56.645210 systemd[1]: Started sshd@8-172.31.22.168:22-139.178.68.195:44206.service - OpenSSH per-connection server daemon (139.178.68.195:44206). Aug 5 21:37:56.826160 sshd[5663]: Accepted publickey for core from 139.178.68.195 port 44206 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:37:56.829255 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:37:56.838327 systemd-logind[2118]: New session 9 of user core. Aug 5 21:37:56.846213 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:37:57.118857 sshd[5663]: pam_unix(sshd:session): session closed for user core Aug 5 21:37:57.129239 systemd[1]: sshd@8-172.31.22.168:22-139.178.68.195:44206.service: Deactivated successfully. Aug 5 21:37:57.135365 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:37:57.137762 systemd-logind[2118]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:37:57.140359 systemd-logind[2118]: Removed session 9. Aug 5 21:37:57.393566 containerd[2153]: time="2024-08-05T21:37:57.392789853Z" level=info msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.467 [WARNING][5690] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"03afc324-8123-4d46-966f-b1a0fc0166c5", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec", Pod:"coredns-5dd5756b68-dwgx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee6d524a79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.468 [INFO][5690] k8s.go 608: Cleaning up netns ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.468 [INFO][5690] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" iface="eth0" netns="" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.468 [INFO][5690] k8s.go 615: Releasing IP address(es) ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.468 [INFO][5690] utils.go 188: Calico CNI releasing IP address ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.506 [INFO][5698] ipam_plugin.go 411: Releasing address using handleID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.506 [INFO][5698] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.506 [INFO][5698] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.518 [WARNING][5698] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.518 [INFO][5698] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.520 [INFO][5698] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:57.525387 containerd[2153]: 2024-08-05 21:37:57.522 [INFO][5690] k8s.go 621: Teardown processing complete. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.527802 containerd[2153]: time="2024-08-05T21:37:57.525444453Z" level=info msg="TearDown network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" successfully" Aug 5 21:37:57.527802 containerd[2153]: time="2024-08-05T21:37:57.525482985Z" level=info msg="StopPodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" returns successfully" Aug 5 21:37:57.527802 containerd[2153]: time="2024-08-05T21:37:57.526691805Z" level=info msg="RemovePodSandbox for \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" Aug 5 21:37:57.527802 containerd[2153]: time="2024-08-05T21:37:57.526844061Z" level=info msg="Forcibly stopping sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\"" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.598 [WARNING][5716] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"03afc324-8123-4d46-966f-b1a0fc0166c5", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"08c7dbc27e04c76cd4a2e81668e0a6d19098176cceb1b194ae05828d5b4241ec", Pod:"coredns-5dd5756b68-dwgx8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee6d524a79", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.598 [INFO][5716] k8s.go 608: Cleaning up netns ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.598 [INFO][5716] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" iface="eth0" netns="" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.598 [INFO][5716] k8s.go 615: Releasing IP address(es) ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.599 [INFO][5716] utils.go 188: Calico CNI releasing IP address ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.642 [INFO][5722] ipam_plugin.go 411: Releasing address using handleID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.643 [INFO][5722] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.643 [INFO][5722] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.663 [WARNING][5722] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.664 [INFO][5722] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" HandleID="k8s-pod-network.a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--dwgx8-eth0" Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.668 [INFO][5722] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:57.676170 containerd[2153]: 2024-08-05 21:37:57.671 [INFO][5716] k8s.go 621: Teardown processing complete. ContainerID="a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14" Aug 5 21:37:57.676170 containerd[2153]: time="2024-08-05T21:37:57.675384274Z" level=info msg="TearDown network for sandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" successfully" Aug 5 21:37:57.682760 containerd[2153]: time="2024-08-05T21:37:57.682445062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:57.682760 containerd[2153]: time="2024-08-05T21:37:57.682646086Z" level=info msg="RemovePodSandbox \"a4c34924f8966d32dfce62e6c4ffa57eeb2c9da550fc7e2e3e3105392ab54f14\" returns successfully" Aug 5 21:37:57.683870 containerd[2153]: time="2024-08-05T21:37:57.683782042Z" level=info msg="StopPodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.758 [WARNING][5740] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cd76c5e-7095-481c-905a-80c88f6f6b05", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7", Pod:"csi-node-driver-wcf9z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b4913fb560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.759 [INFO][5740] k8s.go 608: Cleaning up netns ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.759 [INFO][5740] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" iface="eth0" netns="" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.759 [INFO][5740] k8s.go 615: Releasing IP address(es) ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.759 [INFO][5740] utils.go 188: Calico CNI releasing IP address ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.807 [INFO][5746] ipam_plugin.go 411: Releasing address using handleID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.807 [INFO][5746] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.807 [INFO][5746] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.821 [WARNING][5746] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.821 [INFO][5746] ipam_plugin.go 439: Releasing address using workloadID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.824 [INFO][5746] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:57.830522 containerd[2153]: 2024-08-05 21:37:57.827 [INFO][5740] k8s.go 621: Teardown processing complete. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.832033 containerd[2153]: time="2024-08-05T21:37:57.831507467Z" level=info msg="TearDown network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" successfully" Aug 5 21:37:57.832033 containerd[2153]: time="2024-08-05T21:37:57.831553139Z" level=info msg="StopPodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" returns successfully" Aug 5 21:37:57.832593 containerd[2153]: time="2024-08-05T21:37:57.832515215Z" level=info msg="RemovePodSandbox for \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" Aug 5 21:37:57.832676 containerd[2153]: time="2024-08-05T21:37:57.832598039Z" level=info msg="Forcibly stopping sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\"" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.901 [WARNING][5765] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0cd76c5e-7095-481c-905a-80c88f6f6b05", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"b62544fedc1a7a3ca50fd339e2ff9d215c5c73bff17ed42227b307ac0c8af8a7", Pod:"csi-node-driver-wcf9z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.93.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali9b4913fb560", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.902 [INFO][5765] k8s.go 608: Cleaning up netns ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.902 [INFO][5765] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" iface="eth0" netns="" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.902 [INFO][5765] k8s.go 615: Releasing IP address(es) ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.902 [INFO][5765] utils.go 188: Calico CNI releasing IP address ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.946 [INFO][5771] ipam_plugin.go 411: Releasing address using handleID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.946 [INFO][5771] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.946 [INFO][5771] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.959 [WARNING][5771] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.959 [INFO][5771] ipam_plugin.go 439: Releasing address using workloadID ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" HandleID="k8s-pod-network.30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Workload="ip--172--31--22--168-k8s-csi--node--driver--wcf9z-eth0" Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.961 [INFO][5771] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:57.968620 containerd[2153]: 2024-08-05 21:37:57.964 [INFO][5765] k8s.go 621: Teardown processing complete. ContainerID="30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f" Aug 5 21:37:57.968620 containerd[2153]: time="2024-08-05T21:37:57.967009931Z" level=info msg="TearDown network for sandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" successfully" Aug 5 21:37:57.972859 containerd[2153]: time="2024-08-05T21:37:57.972796391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:57.972973 containerd[2153]: time="2024-08-05T21:37:57.972907931Z" level=info msg="RemovePodSandbox \"30478beea7b89e83663b7139c9b3c9f471458ca5e41eb39d8ddfc4c26900771f\" returns successfully" Aug 5 21:37:57.974011 containerd[2153]: time="2024-08-05T21:37:57.973489007Z" level=info msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.046 [WARNING][5789] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"07a990f4-f782-4270-aebc-bc009f6009d9", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248", Pod:"coredns-5dd5756b68-4f7nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd978bf2beb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.048 [INFO][5789] k8s.go 608: Cleaning up netns ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.048 [INFO][5789] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" iface="eth0" netns="" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.048 [INFO][5789] k8s.go 615: Releasing IP address(es) ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.048 [INFO][5789] utils.go 188: Calico CNI releasing IP address ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.100 [INFO][5795] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.100 [INFO][5795] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.100 [INFO][5795] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.113 [WARNING][5795] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.113 [INFO][5795] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.116 [INFO][5795] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:58.122566 containerd[2153]: 2024-08-05 21:37:58.119 [INFO][5789] k8s.go 621: Teardown processing complete. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.124287 containerd[2153]: time="2024-08-05T21:37:58.122620628Z" level=info msg="TearDown network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" successfully" Aug 5 21:37:58.124287 containerd[2153]: time="2024-08-05T21:37:58.122658548Z" level=info msg="StopPodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" returns successfully" Aug 5 21:37:58.124287 containerd[2153]: time="2024-08-05T21:37:58.123401888Z" level=info msg="RemovePodSandbox for \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" Aug 5 21:37:58.124287 containerd[2153]: time="2024-08-05T21:37:58.123473636Z" level=info msg="Forcibly stopping sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\"" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.196 [WARNING][5813] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"07a990f4-f782-4270-aebc-bc009f6009d9", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"811936b741d19100ee2ecdcbc72ff9fce098cbe64dd42bf372675bf1ff691248", Pod:"coredns-5dd5756b68-4f7nz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.93.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd978bf2beb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.196 [INFO][5813] k8s.go 608: Cleaning up netns ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.196 [INFO][5813] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" iface="eth0" netns="" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.196 [INFO][5813] k8s.go 615: Releasing IP address(es) ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.196 [INFO][5813] utils.go 188: Calico CNI releasing IP address ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.232 [INFO][5820] ipam_plugin.go 411: Releasing address using handleID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.233 [INFO][5820] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.233 [INFO][5820] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.253 [WARNING][5820] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.253 [INFO][5820] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" HandleID="k8s-pod-network.7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Workload="ip--172--31--22--168-k8s-coredns--5dd5756b68--4f7nz-eth0" Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.256 [INFO][5820] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:58.260579 containerd[2153]: 2024-08-05 21:37:58.258 [INFO][5813] k8s.go 621: Teardown processing complete. ContainerID="7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6" Aug 5 21:37:58.260579 containerd[2153]: time="2024-08-05T21:37:58.260483169Z" level=info msg="TearDown network for sandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" successfully" Aug 5 21:37:58.266310 containerd[2153]: time="2024-08-05T21:37:58.266140593Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:58.266552 containerd[2153]: time="2024-08-05T21:37:58.266236761Z" level=info msg="RemovePodSandbox \"7a97fe9a07fda511c3fe9b8efdfde57214d58e9fc3d74123a5bb76d5fcc56bd6\" returns successfully" Aug 5 21:37:58.267496 containerd[2153]: time="2024-08-05T21:37:58.267448605Z" level=info msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.341 [WARNING][5839] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0", GenerateName:"calico-kube-controllers-7f4996bdb8-", Namespace:"calico-system", SelfLink:"", UID:"b41a057f-c142-46dc-88b9-42e9659de76a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4996bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3", Pod:"calico-kube-controllers-7f4996bdb8-4lhvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali776c936443d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.342 [INFO][5839] k8s.go 608: Cleaning up netns ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.342 [INFO][5839] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" iface="eth0" netns="" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.342 [INFO][5839] k8s.go 615: Releasing IP address(es) ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.342 [INFO][5839] utils.go 188: Calico CNI releasing IP address ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.382 [INFO][5845] ipam_plugin.go 411: Releasing address using handleID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.383 [INFO][5845] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.383 [INFO][5845] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.395 [WARNING][5845] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.395 [INFO][5845] ipam_plugin.go 439: Releasing address using workloadID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.398 [INFO][5845] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:58.403773 containerd[2153]: 2024-08-05 21:37:58.401 [INFO][5839] k8s.go 621: Teardown processing complete. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.403773 containerd[2153]: time="2024-08-05T21:37:58.403688518Z" level=info msg="TearDown network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" successfully" Aug 5 21:37:58.403773 containerd[2153]: time="2024-08-05T21:37:58.403753822Z" level=info msg="StopPodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" returns successfully" Aug 5 21:37:58.405242 containerd[2153]: time="2024-08-05T21:37:58.404628634Z" level=info msg="RemovePodSandbox for \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" Aug 5 21:37:58.405242 containerd[2153]: time="2024-08-05T21:37:58.404676406Z" level=info msg="Forcibly stopping sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\"" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.499 [WARNING][5864] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0", GenerateName:"calico-kube-controllers-7f4996bdb8-", Namespace:"calico-system", SelfLink:"", UID:"b41a057f-c142-46dc-88b9-42e9659de76a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 37, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f4996bdb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"513e3970cbf6d360975b3a2fe8a9f9c322b9001380a6fcd941291568d119c6d3", Pod:"calico-kube-controllers-7f4996bdb8-4lhvp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.93.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali776c936443d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.500 [INFO][5864] k8s.go 608: Cleaning up netns ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.500 [INFO][5864] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" iface="eth0" netns="" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.500 [INFO][5864] k8s.go 615: Releasing IP address(es) ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.500 [INFO][5864] utils.go 188: Calico CNI releasing IP address ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.546 [INFO][5871] ipam_plugin.go 411: Releasing address using handleID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.546 [INFO][5871] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.546 [INFO][5871] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.567 [WARNING][5871] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.568 [INFO][5871] ipam_plugin.go 439: Releasing address using workloadID ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" HandleID="k8s-pod-network.60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Workload="ip--172--31--22--168-k8s-calico--kube--controllers--7f4996bdb8--4lhvp-eth0" Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.570 [INFO][5871] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:37:58.576624 containerd[2153]: 2024-08-05 21:37:58.573 [INFO][5864] k8s.go 621: Teardown processing complete. ContainerID="60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2" Aug 5 21:37:58.576624 containerd[2153]: time="2024-08-05T21:37:58.576520906Z" level=info msg="TearDown network for sandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" successfully" Aug 5 21:37:58.582520 containerd[2153]: time="2024-08-05T21:37:58.582437170Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:37:58.582678 containerd[2153]: time="2024-08-05T21:37:58.582538414Z" level=info msg="RemovePodSandbox \"60c2ce3d98e5c501945d641d90987a101184b77d098a63cca1506d0d824e5df2\" returns successfully" Aug 5 21:38:02.154125 systemd[1]: Started sshd@9-172.31.22.168:22-139.178.68.195:35288.service - OpenSSH per-connection server daemon (139.178.68.195:35288). Aug 5 21:38:02.349034 sshd[5910]: Accepted publickey for core from 139.178.68.195 port 35288 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:02.352162 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:02.360129 systemd-logind[2118]: New session 10 of user core. Aug 5 21:38:02.364202 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 21:38:02.612051 sshd[5910]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:02.620628 systemd[1]: sshd@9-172.31.22.168:22-139.178.68.195:35288.service: Deactivated successfully. Aug 5 21:38:02.626599 systemd-logind[2118]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:38:02.627629 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:38:02.644437 systemd[1]: Started sshd@10-172.31.22.168:22-139.178.68.195:35290.service - OpenSSH per-connection server daemon (139.178.68.195:35290). Aug 5 21:38:02.646274 systemd-logind[2118]: Removed session 10. Aug 5 21:38:02.818376 sshd[5925]: Accepted publickey for core from 139.178.68.195 port 35290 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:02.821571 sshd[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:02.830079 systemd-logind[2118]: New session 11 of user core. Aug 5 21:38:02.838346 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:38:03.438459 sshd[5925]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:03.457020 systemd-logind[2118]: Session 11 logged out. Waiting for processes to exit. 
Aug 5 21:38:03.457508 systemd[1]: sshd@10-172.31.22.168:22-139.178.68.195:35290.service: Deactivated successfully. Aug 5 21:38:03.489344 systemd[1]: Started sshd@11-172.31.22.168:22-139.178.68.195:35306.service - OpenSSH per-connection server daemon (139.178.68.195:35306). Aug 5 21:38:03.490240 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:38:03.496342 systemd-logind[2118]: Removed session 11. Aug 5 21:38:03.675203 sshd[5938]: Accepted publickey for core from 139.178.68.195 port 35306 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:03.678193 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:03.686504 systemd-logind[2118]: New session 12 of user core. Aug 5 21:38:03.699785 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:38:03.950545 sshd[5938]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:03.957033 systemd-logind[2118]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:38:03.958919 systemd[1]: sshd@11-172.31.22.168:22-139.178.68.195:35306.service: Deactivated successfully. Aug 5 21:38:03.966290 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:38:03.968527 systemd-logind[2118]: Removed session 12. Aug 5 21:38:08.989153 systemd[1]: Started sshd@12-172.31.22.168:22-139.178.68.195:35312.service - OpenSSH per-connection server daemon (139.178.68.195:35312). Aug 5 21:38:09.157774 sshd[5952]: Accepted publickey for core from 139.178.68.195 port 35312 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:09.161538 sshd[5952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:09.171480 systemd-logind[2118]: New session 13 of user core. Aug 5 21:38:09.178397 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:38:09.429113 sshd[5952]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:09.438364 systemd[1]: sshd@12-172.31.22.168:22-139.178.68.195:35312.service: Deactivated successfully. Aug 5 21:38:09.449047 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:38:09.450835 systemd-logind[2118]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:38:09.453850 systemd-logind[2118]: Removed session 13. Aug 5 21:38:14.460240 systemd[1]: Started sshd@13-172.31.22.168:22-139.178.68.195:47352.service - OpenSSH per-connection server daemon (139.178.68.195:47352). Aug 5 21:38:14.645269 sshd[5986]: Accepted publickey for core from 139.178.68.195 port 47352 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:14.648268 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:14.657360 systemd-logind[2118]: New session 14 of user core. Aug 5 21:38:14.663650 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:38:14.913205 sshd[5986]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:14.919612 systemd[1]: sshd@13-172.31.22.168:22-139.178.68.195:47352.service: Deactivated successfully. Aug 5 21:38:14.929176 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:38:14.929669 systemd-logind[2118]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:38:14.932333 systemd-logind[2118]: Removed session 14. 
Aug 5 21:38:16.242348 kubelet[3595]: I0805 21:38:16.238999 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wcf9z" podStartSLOduration=55.27089893 podCreationTimestamp="2024-08-05 21:37:17 +0000 UTC" firstStartedPulling="2024-08-05 21:37:50.419264162 +0000 UTC m=+53.336669258" lastFinishedPulling="2024-08-05 21:37:54.387305694 +0000 UTC m=+57.304710790" observedRunningTime="2024-08-05 21:37:55.090850697 +0000 UTC m=+58.008255805" watchObservedRunningTime="2024-08-05 21:38:16.238940462 +0000 UTC m=+79.156345546" Aug 5 21:38:19.334614 update_engine[2126]: I0805 21:38:19.333890 2126 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 5 21:38:19.334614 update_engine[2126]: I0805 21:38:19.333950 2126 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 5 21:38:19.334614 update_engine[2126]: I0805 21:38:19.334288 2126 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 5 21:38:19.336646 update_engine[2126]: I0805 21:38:19.335865 2126 omaha_request_params.cc:62] Current group set to beta Aug 5 21:38:19.336705 locksmithd[2226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.336779 2126 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.336812 2126 update_attempter.cc:643] Scheduling an action processor start. Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.336848 2126 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.336944 2126 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.337090 2126 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.337103 2126 omaha_request_action.cc:272] Request: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: Aug 5 21:38:19.337270 update_engine[2126]: I0805 21:38:19.337114 2126 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 21:38:19.341180 update_engine[2126]: I0805 21:38:19.341064 2126 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 21:38:19.341685 update_engine[2126]: I0805 21:38:19.341640 2126 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 5 21:38:19.378270 update_engine[2126]: E0805 21:38:19.378199 2126 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 21:38:19.378443 update_engine[2126]: I0805 21:38:19.378331 2126 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 5 21:38:19.945277 systemd[1]: Started sshd@14-172.31.22.168:22-139.178.68.195:47368.service - OpenSSH per-connection server daemon (139.178.68.195:47368). 
Aug 5 21:38:20.122948 sshd[6027]: Accepted publickey for core from 139.178.68.195 port 47368 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:20.125549 sshd[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:20.134231 systemd-logind[2118]: New session 15 of user core. Aug 5 21:38:20.147211 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:38:20.456590 sshd[6027]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:20.467313 systemd[1]: sshd@14-172.31.22.168:22-139.178.68.195:47368.service: Deactivated successfully. Aug 5 21:38:20.480475 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:38:20.484485 systemd-logind[2118]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:38:20.488362 systemd-logind[2118]: Removed session 15. Aug 5 21:38:25.489836 systemd[1]: Started sshd@15-172.31.22.168:22-139.178.68.195:34034.service - OpenSSH per-connection server daemon (139.178.68.195:34034). Aug 5 21:38:25.676101 sshd[6041]: Accepted publickey for core from 139.178.68.195 port 34034 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:25.679093 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:25.688343 systemd-logind[2118]: New session 16 of user core. Aug 5 21:38:25.696267 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:38:26.002040 sshd[6041]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:26.012201 systemd[1]: sshd@15-172.31.22.168:22-139.178.68.195:34034.service: Deactivated successfully. Aug 5 21:38:26.023019 systemd-logind[2118]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:38:26.024043 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:38:26.031414 systemd-logind[2118]: Removed session 16. Aug 5 21:38:29.332780 update_engine[2126]: I0805 21:38:29.332073 2126 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 21:38:29.332780 update_engine[2126]: I0805 21:38:29.332445 2126 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 21:38:29.333401 update_engine[2126]: I0805 21:38:29.332875 2126 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 5 21:38:29.333401 update_engine[2126]: E0805 21:38:29.333235 2126 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 21:38:29.333401 update_engine[2126]: I0805 21:38:29.333297 2126 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 5 21:38:31.044234 systemd[1]: Started sshd@16-172.31.22.168:22-139.178.68.195:44758.service - OpenSSH per-connection server daemon (139.178.68.195:44758). Aug 5 21:38:31.247486 sshd[6089]: Accepted publickey for core from 139.178.68.195 port 44758 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:31.254105 sshd[6089]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:31.269433 systemd-logind[2118]: New session 17 of user core. Aug 5 21:38:31.276264 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 21:38:31.552053 sshd[6089]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:31.568001 systemd[1]: sshd@16-172.31.22.168:22-139.178.68.195:44758.service: Deactivated successfully. Aug 5 21:38:31.578964 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:38:31.580619 systemd-logind[2118]: Session 17 logged out. Waiting for processes to exit. 
Aug 5 21:38:31.591301 systemd[1]: Started sshd@17-172.31.22.168:22-139.178.68.195:44772.service - OpenSSH per-connection server daemon (139.178.68.195:44772). Aug 5 21:38:31.593955 systemd-logind[2118]: Removed session 17. Aug 5 21:38:31.795770 sshd[6103]: Accepted publickey for core from 139.178.68.195 port 44772 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:31.800440 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:31.830856 systemd-logind[2118]: New session 18 of user core. Aug 5 21:38:31.835293 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 21:38:32.336700 kubelet[3595]: I0805 21:38:32.336629 3595 topology_manager.go:215] "Topology Admit Handler" podUID="f06d28e8-23bf-41ab-adad-2b04d02bd7cf" podNamespace="calico-apiserver" podName="calico-apiserver-58f48468c4-7bcxq" Aug 5 21:38:32.504253 kubelet[3595]: I0805 21:38:32.504181 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f06d28e8-23bf-41ab-adad-2b04d02bd7cf-calico-apiserver-certs\") pod \"calico-apiserver-58f48468c4-7bcxq\" (UID: \"f06d28e8-23bf-41ab-adad-2b04d02bd7cf\") " pod="calico-apiserver/calico-apiserver-58f48468c4-7bcxq" Aug 5 21:38:32.504253 kubelet[3595]: I0805 21:38:32.504284 3595 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258qb\" (UniqueName: \"kubernetes.io/projected/f06d28e8-23bf-41ab-adad-2b04d02bd7cf-kube-api-access-258qb\") pod \"calico-apiserver-58f48468c4-7bcxq\" (UID: \"f06d28e8-23bf-41ab-adad-2b04d02bd7cf\") " pod="calico-apiserver/calico-apiserver-58f48468c4-7bcxq" Aug 5 21:38:32.510470 sshd[6103]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:32.528072 systemd[1]: sshd@17-172.31.22.168:22-139.178.68.195:44772.service: Deactivated successfully. Aug 5 21:38:32.541680 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:38:32.548507 systemd-logind[2118]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:38:32.566229 systemd[1]: Started sshd@18-172.31.22.168:22-139.178.68.195:44784.service - OpenSSH per-connection server daemon (139.178.68.195:44784). Aug 5 21:38:32.572500 systemd-logind[2118]: Removed session 18. Aug 5 21:38:32.610941 kubelet[3595]: E0805 21:38:32.605916 3595 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 21:38:32.610941 kubelet[3595]: E0805 21:38:32.606052 3595 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f06d28e8-23bf-41ab-adad-2b04d02bd7cf-calico-apiserver-certs podName:f06d28e8-23bf-41ab-adad-2b04d02bd7cf nodeName:}" failed. No retries permitted until 2024-08-05 21:38:33.106009319 +0000 UTC m=+96.023414403 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/f06d28e8-23bf-41ab-adad-2b04d02bd7cf-calico-apiserver-certs") pod "calico-apiserver-58f48468c4-7bcxq" (UID: "f06d28e8-23bf-41ab-adad-2b04d02bd7cf") : secret "calico-apiserver-certs" not found Aug 5 21:38:32.777857 sshd[6138]: Accepted publickey for core from 139.178.68.195 port 44784 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:32.780208 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:32.792967 systemd-logind[2118]: New session 19 of user core. 
Aug 5 21:38:32.807330 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:38:33.255135 containerd[2153]: time="2024-08-05T21:38:33.255026983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f48468c4-7bcxq,Uid:f06d28e8-23bf-41ab-adad-2b04d02bd7cf,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:38:33.571876 systemd-networkd[1694]: calid51a1b71e6b: Link UP Aug 5 21:38:33.578299 systemd-networkd[1694]: calid51a1b71e6b: Gained carrier Aug 5 21:38:33.588261 (udev-worker)[6170]: Network interface NamePolicy= disabled on kernel command line. Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.355 [INFO][6153] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0 calico-apiserver-58f48468c4- calico-apiserver f06d28e8-23bf-41ab-adad-2b04d02bd7cf 1058 0 2024-08-05 21:38:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58f48468c4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-22-168 calico-apiserver-58f48468c4-7bcxq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid51a1b71e6b [] []}} ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.355 [INFO][6153] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.423 [INFO][6160] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" HandleID="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Workload="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.452 [INFO][6160] ipam_plugin.go 264: Auto assigning IP ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" HandleID="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Workload="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033f720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-22-168", "pod":"calico-apiserver-58f48468c4-7bcxq", "timestamp":"2024-08-05 21:38:33.423777211 +0000 UTC"}, Hostname:"ip-172-31-22-168", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.452 [INFO][6160] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.453 [INFO][6160] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.454 [INFO][6160] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-22-168' Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.462 [INFO][6160] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.473 [INFO][6160] ipam.go 372: Looking up existing affinities for host host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.483 [INFO][6160] ipam.go 489: Trying affinity for 192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.489 [INFO][6160] ipam.go 155: Attempting to load block cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.495 [INFO][6160] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.93.0/26 host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.495 [INFO][6160] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.93.0/26 handle="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.504 [INFO][6160] ipam.go 1685: Creating new handle: k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88 Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.521 [INFO][6160] ipam.go 1203: Writing block in order to claim IPs block=192.168.93.0/26 handle="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.540 [INFO][6160] ipam.go 1216: Successfully claimed IPs: [192.168.93.5/26] block=192.168.93.0/26 handle="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.540 [INFO][6160] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.93.5/26] handle="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" host="ip-172-31-22-168" Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.541 [INFO][6160] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:38:33.644945 containerd[2153]: 2024-08-05 21:38:33.541 [INFO][6160] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.93.5/26] IPv6=[] ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" HandleID="k8s-pod-network.173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Workload="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.555 [INFO][6153] k8s.go 386: Populated endpoint ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0", GenerateName:"calico-apiserver-58f48468c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"f06d28e8-23bf-41ab-adad-2b04d02bd7cf", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f48468c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"", Pod:"calico-apiserver-58f48468c4-7bcxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51a1b71e6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.555 [INFO][6153] k8s.go 387: Calico CNI using IPs: [192.168.93.5/32] ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.555 [INFO][6153] dataplane_linux.go 68: Setting the host side veth name to calid51a1b71e6b ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.572 [INFO][6153] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.589 [INFO][6153] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0", GenerateName:"calico-apiserver-58f48468c4-", Namespace:"calico-apiserver", SelfLink:"", UID:"f06d28e8-23bf-41ab-adad-2b04d02bd7cf", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 38, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58f48468c4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-22-168", ContainerID:"173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88", Pod:"calico-apiserver-58f48468c4-7bcxq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.93.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid51a1b71e6b", MAC:"12:7b:9d:a1:76:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:38:33.652813 containerd[2153]: 2024-08-05 21:38:33.637 [INFO][6153] k8s.go 500: Wrote updated endpoint to datastore ContainerID="173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88" Namespace="calico-apiserver" Pod="calico-apiserver-58f48468c4-7bcxq" WorkloadEndpoint="ip--172--31--22--168-k8s-calico--apiserver--58f48468c4--7bcxq-eth0" Aug 5 21:38:33.776836 containerd[2153]: time="2024-08-05T21:38:33.774044325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:38:33.776836 containerd[2153]: time="2024-08-05T21:38:33.774162537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:38:33.776836 containerd[2153]: time="2024-08-05T21:38:33.774223689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:38:33.776836 containerd[2153]: time="2024-08-05T21:38:33.774288573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:38:33.976840 containerd[2153]: time="2024-08-05T21:38:33.976287622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58f48468c4-7bcxq,Uid:f06d28e8-23bf-41ab-adad-2b04d02bd7cf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88\"" Aug 5 21:38:33.981521 containerd[2153]: time="2024-08-05T21:38:33.981315082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 21:38:34.614732 sshd[6138]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:34.634442 systemd[1]: sshd@18-172.31.22.168:22-139.178.68.195:44784.service: Deactivated successfully. Aug 5 21:38:34.647707 systemd-logind[2118]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:38:34.659422 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:38:34.678052 systemd[1]: Started sshd@19-172.31.22.168:22-139.178.68.195:44790.service - OpenSSH per-connection server daemon (139.178.68.195:44790). Aug 5 21:38:34.684354 systemd-logind[2118]: Removed session 19. Aug 5 21:38:34.735948 systemd-networkd[1694]: calid51a1b71e6b: Gained IPv6LL Aug 5 21:38:34.870140 sshd[6230]: Accepted publickey for core from 139.178.68.195 port 44790 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:34.872361 sshd[6230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:34.882493 systemd-logind[2118]: New session 20 of user core. Aug 5 21:38:34.889393 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:38:36.085216 sshd[6230]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:36.108683 systemd[1]: sshd@19-172.31.22.168:22-139.178.68.195:44790.service: Deactivated successfully. Aug 5 21:38:36.124319 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:38:36.145852 systemd-logind[2118]: Session 20 logged out. Waiting for processes to exit. Aug 5 21:38:36.155636 systemd[1]: Started sshd@20-172.31.22.168:22-139.178.68.195:44800.service - OpenSSH per-connection server daemon (139.178.68.195:44800). Aug 5 21:38:36.162947 systemd-logind[2118]: Removed session 20. Aug 5 21:38:36.415278 sshd[6247]: Accepted publickey for core from 139.178.68.195 port 44800 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:36.418302 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:36.430464 systemd-logind[2118]: New session 21 of user core. Aug 5 21:38:36.439604 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:38:36.757644 sshd[6247]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:36.768236 systemd[1]: sshd@20-172.31.22.168:22-139.178.68.195:44800.service: Deactivated successfully. Aug 5 21:38:36.778633 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:38:36.784187 systemd-logind[2118]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:38:36.787539 systemd-logind[2118]: Removed session 21. 
Aug 5 21:38:37.109928 ntpd[2101]: Listen normally on 12 calid51a1b71e6b [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:38:37.111043 ntpd[2101]: 5 Aug 21:38:37 ntpd[2101]: Listen normally on 12 calid51a1b71e6b [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 21:38:37.131112 containerd[2153]: time="2024-08-05T21:38:37.131046910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:38:37.132941 containerd[2153]: time="2024-08-05T21:38:37.132746854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Aug 5 21:38:37.134909 containerd[2153]: time="2024-08-05T21:38:37.134797522Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:38:37.141239 containerd[2153]: time="2024-08-05T21:38:37.141154138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:38:37.142860 containerd[2153]: time="2024-08-05T21:38:37.142620406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.161220952s" Aug 5 21:38:37.142860 containerd[2153]: time="2024-08-05T21:38:37.142683970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Aug 5 21:38:37.148228 containerd[2153]: time="2024-08-05T21:38:37.148024558Z" level=info msg="CreateContainer within sandbox \"173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 21:38:37.167629 containerd[2153]: time="2024-08-05T21:38:37.166474102Z" level=info msg="CreateContainer within sandbox \"173e2a3f3bece838924ce9615d746950fe5e1b84053aec9981c7d7faa9b86e88\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2c3953f2747ea3e6550be440c3eb732235ce17fdba2e669ac821dcfede37dfe3\"" Aug 5 21:38:37.170130 containerd[2153]: time="2024-08-05T21:38:37.169556566Z" level=info msg="StartContainer for \"2c3953f2747ea3e6550be440c3eb732235ce17fdba2e669ac821dcfede37dfe3\"" Aug 5 21:38:37.340301 containerd[2153]: time="2024-08-05T21:38:37.340090943Z" level=info msg="StartContainer for \"2c3953f2747ea3e6550be440c3eb732235ce17fdba2e669ac821dcfede37dfe3\" returns successfully" Aug 5 21:38:38.700309 kubelet[3595]: I0805 21:38:38.700196 3595 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58f48468c4-7bcxq" podStartSLOduration=3.536019214 podCreationTimestamp="2024-08-05 21:38:32 +0000 UTC" firstStartedPulling="2024-08-05 21:38:33.978972982 +0000 UTC m=+96.896378066" lastFinishedPulling="2024-08-05 21:38:37.14308393 +0000 UTC m=+100.060489026" observedRunningTime="2024-08-05 21:38:38.275938548 +0000 UTC m=+101.193343656" watchObservedRunningTime="2024-08-05 21:38:38.700130174 +0000 UTC m=+101.617535258" Aug 5 21:38:39.331826 update_engine[2126]: I0805 21:38:39.331750 
2126 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 21:38:39.333181 update_engine[2126]: I0805 21:38:39.332029 2126 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 21:38:39.333181 update_engine[2126]: I0805 21:38:39.332428 2126 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 5 21:38:39.333181 update_engine[2126]: E0805 21:38:39.333006 2126 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 21:38:39.333181 update_engine[2126]: I0805 21:38:39.333071 2126 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 5 21:38:41.787263 systemd[1]: Started sshd@21-172.31.22.168:22-139.178.68.195:38620.service - OpenSSH per-connection server daemon (139.178.68.195:38620). Aug 5 21:38:41.973128 sshd[6316]: Accepted publickey for core from 139.178.68.195 port 38620 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:41.976504 sshd[6316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:41.985835 systemd-logind[2118]: New session 22 of user core. Aug 5 21:38:41.993392 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:38:42.235503 sshd[6316]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:42.245097 systemd[1]: sshd@21-172.31.22.168:22-139.178.68.195:38620.service: Deactivated successfully. Aug 5 21:38:42.254022 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:38:42.254374 systemd-logind[2118]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:38:42.258354 systemd-logind[2118]: Removed session 22. Aug 5 21:38:47.269742 systemd[1]: Started sshd@22-172.31.22.168:22-139.178.68.195:38622.service - OpenSSH per-connection server daemon (139.178.68.195:38622). Aug 5 21:38:47.451697 sshd[6357]: Accepted publickey for core from 139.178.68.195 port 38622 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:47.454633 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:47.464585 systemd-logind[2118]: New session 23 of user core. Aug 5 21:38:47.475365 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 21:38:47.717083 sshd[6357]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:47.725564 systemd[1]: sshd@22-172.31.22.168:22-139.178.68.195:38622.service: Deactivated successfully. Aug 5 21:38:47.735925 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:38:47.738275 systemd-logind[2118]: Session 23 logged out. Waiting for processes to exit. Aug 5 21:38:47.741180 systemd-logind[2118]: Removed session 23. Aug 5 21:38:49.329921 update_engine[2126]: I0805 21:38:49.329850 2126 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 21:38:49.330665 update_engine[2126]: I0805 21:38:49.330148 2126 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 21:38:49.330665 update_engine[2126]: I0805 21:38:49.330427 2126 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 5 21:38:49.331661 update_engine[2126]: E0805 21:38:49.331156 2126 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331229 2126 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331240 2126 omaha_request_action.cc:617] Omaha request response: Aug 5 21:38:49.331661 update_engine[2126]: E0805 21:38:49.331377 2126 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331426 2126 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331436 2126 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331444 2126 update_attempter.cc:306] Processing Done. Aug 5 21:38:49.331661 update_engine[2126]: E0805 21:38:49.331470 2126 update_attempter.cc:619] Update failed. Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331478 2126 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331486 2126 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 5 21:38:49.331661 update_engine[2126]: I0805 21:38:49.331494 2126 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.332638 2126 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.332700 2126 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.332736 2126 omaha_request_action.cc:272] Request: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.332749 2126 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.333267 2126 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.333598 2126 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 5 21:38:49.335364 update_engine[2126]: E0805 21:38:49.334326 2126 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334410 2126 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334422 2126 omaha_request_action.cc:617] Omaha request response: Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334432 2126 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334440 2126 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334447 2126 update_attempter.cc:306] Processing Done. Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334456 2126 update_attempter.cc:310] Error event sent. Aug 5 21:38:49.335364 update_engine[2126]: I0805 21:38:49.334470 2126 update_check_scheduler.cc:74] Next update check in 40m29s Aug 5 21:38:49.336383 locksmithd[2226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 5 21:38:49.336383 locksmithd[2226]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 5 21:38:52.751105 systemd[1]: Started sshd@23-172.31.22.168:22-139.178.68.195:35588.service - OpenSSH per-connection server daemon (139.178.68.195:35588). Aug 5 21:38:52.978173 sshd[6376]: Accepted publickey for core from 139.178.68.195 port 35588 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:52.981930 sshd[6376]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:52.996862 systemd-logind[2118]: New session 24 of user core. Aug 5 21:38:53.001452 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 21:38:53.290129 sshd[6376]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:53.311473 systemd[1]: sshd@23-172.31.22.168:22-139.178.68.195:35588.service: Deactivated successfully. Aug 5 21:38:53.323261 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:38:53.327194 systemd-logind[2118]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:38:53.330636 systemd-logind[2118]: Removed session 24. Aug 5 21:38:58.321544 systemd[1]: Started sshd@24-172.31.22.168:22-139.178.68.195:35592.service - OpenSSH per-connection server daemon (139.178.68.195:35592). Aug 5 21:38:58.521235 sshd[6396]: Accepted publickey for core from 139.178.68.195 port 35592 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:38:58.525320 sshd[6396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:38:58.539906 systemd-logind[2118]: New session 25 of user core. Aug 5 21:38:58.544968 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 21:38:58.811624 sshd[6396]: pam_unix(sshd:session): session closed for user core Aug 5 21:38:58.818210 systemd[1]: sshd@24-172.31.22.168:22-139.178.68.195:35592.service: Deactivated successfully. Aug 5 21:38:58.829576 systemd-logind[2118]: Session 25 logged out. Waiting for processes to exit. Aug 5 21:38:58.829786 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:38:58.834369 systemd-logind[2118]: Removed session 25. 
Aug 5 21:39:03.844406 systemd[1]: Started sshd@25-172.31.22.168:22-139.178.68.195:48318.service - OpenSSH per-connection server daemon (139.178.68.195:48318). Aug 5 21:39:04.018427 sshd[6435]: Accepted publickey for core from 139.178.68.195 port 48318 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:39:04.021583 sshd[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:39:04.030086 systemd-logind[2118]: New session 26 of user core. Aug 5 21:39:04.037187 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 21:39:04.278781 sshd[6435]: pam_unix(sshd:session): session closed for user core Aug 5 21:39:04.284685 systemd[1]: sshd@25-172.31.22.168:22-139.178.68.195:48318.service: Deactivated successfully. Aug 5 21:39:04.295034 systemd-logind[2118]: Session 26 logged out. Waiting for processes to exit. Aug 5 21:39:04.295188 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 21:39:04.298248 systemd-logind[2118]: Removed session 26. Aug 5 21:39:09.311578 systemd[1]: Started sshd@26-172.31.22.168:22-139.178.68.195:48334.service - OpenSSH per-connection server daemon (139.178.68.195:48334). Aug 5 21:39:09.505322 sshd[6450]: Accepted publickey for core from 139.178.68.195 port 48334 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:39:09.508145 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:39:09.517782 systemd-logind[2118]: New session 27 of user core. Aug 5 21:39:09.525196 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 5 21:39:09.778170 sshd[6450]: pam_unix(sshd:session): session closed for user core Aug 5 21:39:09.788060 systemd-logind[2118]: Session 27 logged out. Waiting for processes to exit. Aug 5 21:39:09.789701 systemd[1]: sshd@26-172.31.22.168:22-139.178.68.195:48334.service: Deactivated successfully. Aug 5 21:39:09.800626 systemd[1]: session-27.scope: Deactivated successfully. Aug 5 21:39:09.804155 systemd-logind[2118]: Removed session 27. Aug 5 21:39:14.812397 systemd[1]: Started sshd@27-172.31.22.168:22-139.178.68.195:56638.service - OpenSSH per-connection server daemon (139.178.68.195:56638). Aug 5 21:39:14.989814 sshd[6481]: Accepted publickey for core from 139.178.68.195 port 56638 ssh2: RSA SHA256:n8e1/3rwUUwoD0Er9acY8H8+dzFC/4NaXBaaRAZ4VQE Aug 5 21:39:14.992534 sshd[6481]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:39:15.003827 systemd-logind[2118]: New session 28 of user core. Aug 5 21:39:15.011297 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 21:39:15.276922 sshd[6481]: pam_unix(sshd:session): session closed for user core Aug 5 21:39:15.285382 systemd[1]: sshd@27-172.31.22.168:22-139.178.68.195:56638.service: Deactivated successfully. Aug 5 21:39:15.293695 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 21:39:15.297345 systemd-logind[2118]: Session 28 logged out. Waiting for processes to exit. Aug 5 21:39:15.300567 systemd-logind[2118]: Removed session 28. 
Aug 5 21:39:29.092596 containerd[2153]: time="2024-08-05T21:39:29.092411004Z" level=info msg="shim disconnected" id=fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5 namespace=k8s.io Aug 5 21:39:29.092596 containerd[2153]: time="2024-08-05T21:39:29.092588988Z" level=warning msg="cleaning up after shim disconnected" id=fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5 namespace=k8s.io Aug 5 21:39:29.092596 containerd[2153]: time="2024-08-05T21:39:29.092611548Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:39:29.100831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5-rootfs.mount: Deactivated successfully. Aug 5 21:39:29.416629 kubelet[3595]: I0805 21:39:29.416490 3595 scope.go:117] "RemoveContainer" containerID="fdf0514d9380935c97a5d79c93fca82376d458ea48e0c546997c6b9d22e61ab5" Aug 5 21:39:29.421560 containerd[2153]: time="2024-08-05T21:39:29.421468670Z" level=info msg="CreateContainer within sandbox \"2f040257413d2f08be6f9bc9ca4121cc180fd2491e84664fa847e151562937c1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Aug 5 21:39:29.458945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252332538.mount: Deactivated successfully. Aug 5 21:39:29.467058 containerd[2153]: time="2024-08-05T21:39:29.466990958Z" level=info msg="CreateContainer within sandbox \"2f040257413d2f08be6f9bc9ca4121cc180fd2491e84664fa847e151562937c1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"bf67676719320b789ed1b9b1881d5edb9bc59bda357da1fa63e223feb8cb9e4d\"" Aug 5 21:39:29.467895 containerd[2153]: time="2024-08-05T21:39:29.467779646Z" level=info msg="StartContainer for \"bf67676719320b789ed1b9b1881d5edb9bc59bda357da1fa63e223feb8cb9e4d\"" Aug 5 21:39:29.594370 containerd[2153]: time="2024-08-05T21:39:29.594109070Z" level=info msg="StartContainer for \"bf67676719320b789ed1b9b1881d5edb9bc59bda357da1fa63e223feb8cb9e4d\" returns successfully" Aug 5 21:39:29.828269 kubelet[3595]: E0805 21:39:29.827094 3595 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 5 21:39:30.175664 containerd[2153]: time="2024-08-05T21:39:30.175476109Z" level=info msg="shim disconnected" id=8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0 namespace=k8s.io Aug 5 21:39:30.175664 containerd[2153]: time="2024-08-05T21:39:30.175561465Z" level=warning msg="cleaning up after shim disconnected" id=8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0 namespace=k8s.io Aug 5 21:39:30.175664 containerd[2153]: time="2024-08-05T21:39:30.175583977Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:39:30.177761 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0-rootfs.mount: Deactivated successfully. 
Aug 5 21:39:30.430785 kubelet[3595]: I0805 21:39:30.430193 3595 scope.go:117] "RemoveContainer" containerID="8c5ff97e07266c80389e7e7fafce00f6563ac6865a84d74c4737ba1f0c3d33c0" Aug 5 21:39:30.436697 containerd[2153]: time="2024-08-05T21:39:30.435791115Z" level=info msg="CreateContainer within sandbox \"951850b6e05c70d13b799864d9f0e38cbabfa4f3d97490a14081cd1462ac7949\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 5 21:39:30.472689 containerd[2153]: time="2024-08-05T21:39:30.471177591Z" level=info msg="CreateContainer within sandbox \"951850b6e05c70d13b799864d9f0e38cbabfa4f3d97490a14081cd1462ac7949\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7ccd23e36eb85b812516b61fe05e3bb8e6fea8ed0d67dc722a3e0de85ae678ef\"" Aug 5 21:39:30.475928 containerd[2153]: time="2024-08-05T21:39:30.475695255Z" level=info msg="StartContainer for \"7ccd23e36eb85b812516b61fe05e3bb8e6fea8ed0d67dc722a3e0de85ae678ef\"" Aug 5 21:39:30.635595 containerd[2153]: time="2024-08-05T21:39:30.635508496Z" level=info msg="StartContainer for \"7ccd23e36eb85b812516b61fe05e3bb8e6fea8ed0d67dc722a3e0de85ae678ef\" returns successfully" Aug 5 21:39:34.235578 containerd[2153]: time="2024-08-05T21:39:34.234850734Z" level=info msg="shim disconnected" id=05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d namespace=k8s.io Aug 5 21:39:34.235578 containerd[2153]: time="2024-08-05T21:39:34.234940230Z" level=warning msg="cleaning up after shim disconnected" id=05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d namespace=k8s.io Aug 5 21:39:34.235578 containerd[2153]: time="2024-08-05T21:39:34.234960774Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:39:34.246626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d-rootfs.mount: Deactivated successfully. 
Aug 5 21:39:34.274664 containerd[2153]: time="2024-08-05T21:39:34.272842614Z" level=warning msg="cleanup warnings time=\"2024-08-05T21:39:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 21:39:34.450326 kubelet[3595]: I0805 21:39:34.449771 3595 scope.go:117] "RemoveContainer" containerID="05fc7514915ee33786b8ec3e7d1e878d289c445a342461669528649b149abe0d" Aug 5 21:39:34.456050 containerd[2153]: time="2024-08-05T21:39:34.455631691Z" level=info msg="CreateContainer within sandbox \"870369b9f87498f5ea92491448e0b9901a7ad3328d024ca678b89d83be19fd5d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Aug 5 21:39:34.484337 containerd[2153]: time="2024-08-05T21:39:34.484166035Z" level=info msg="CreateContainer within sandbox \"870369b9f87498f5ea92491448e0b9901a7ad3328d024ca678b89d83be19fd5d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2feaf88ee700521f2904e4393221115a64240b0676dec4d0288a7886faa1e346\"" Aug 5 21:39:34.485901 containerd[2153]: time="2024-08-05T21:39:34.485099179Z" level=info msg="StartContainer for \"2feaf88ee700521f2904e4393221115a64240b0676dec4d0288a7886faa1e346\"" Aug 5 21:39:34.651929 containerd[2153]: time="2024-08-05T21:39:34.651849344Z" level=info msg="StartContainer for \"2feaf88ee700521f2904e4393221115a64240b0676dec4d0288a7886faa1e346\" returns successfully" Aug 5 21:39:39.828417 kubelet[3595]: E0805 21:39:39.828354 3595 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Aug 5 21:39:46.134529 systemd[1]: run-containerd-runc-k8s.io-4814130884d1ba9f60da892623d5bc745509fa8f2c8867ddfef9fb23c3d5f8fd-runc.LmoGiI.mount: Deactivated successfully. Aug 5 21:39:49.829442 kubelet[3595]: E0805 21:39:49.829230 3595 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.168:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-22-168?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"