Aug 5 22:17:59.149845 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024
Aug 5 22:17:59.149887 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:17:59.149902 kernel: BIOS-provided physical RAM map:
Aug 5 22:17:59.149912 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:17:59.149923 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:17:59.149933 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:17:59.149949 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Aug 5 22:17:59.149960 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Aug 5 22:17:59.149973 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Aug 5 22:17:59.149986 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:17:59.149998 kernel: NX (Execute Disable) protection: active
Aug 5 22:17:59.150011 kernel: APIC: Static calls initialized
Aug 5 22:17:59.150023 kernel: SMBIOS 2.7 present.
Aug 5 22:17:59.150035 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 5 22:17:59.150054 kernel: Hypervisor detected: KVM
Aug 5 22:17:59.150069 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:17:59.150384 kernel: kvm-clock: using sched offset of 6060706655 cycles
Aug 5 22:17:59.150408 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:17:59.150422 kernel: tsc: Detected 2499.994 MHz processor
Aug 5 22:17:59.150435 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:17:59.150449 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:17:59.150468 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Aug 5 22:17:59.150482 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:17:59.150496 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:17:59.150509 kernel: Using GB pages for direct mapping
Aug 5 22:17:59.150566 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:17:59.150578 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Aug 5 22:17:59.150590 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Aug 5 22:17:59.150602 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 22:17:59.150614 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 5 22:17:59.150631 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Aug 5 22:17:59.150643 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:17:59.150655 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 22:17:59.150668 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 5 22:17:59.150681 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 22:17:59.150694 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 5 22:17:59.150706 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 5 22:17:59.150719 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:17:59.150735 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Aug 5 22:17:59.150748 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Aug 5 22:17:59.150768 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Aug 5 22:17:59.150783 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Aug 5 22:17:59.150893 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Aug 5 22:17:59.150911 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Aug 5 22:17:59.150932 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Aug 5 22:17:59.150948 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Aug 5 22:17:59.150963 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Aug 5 22:17:59.150979 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Aug 5 22:17:59.150995 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 5 22:17:59.151010 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 5 22:17:59.151026 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 5 22:17:59.151042 kernel: NUMA: Initialized distance table, cnt=1
Aug 5 22:17:59.151057 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Aug 5 22:17:59.151077 kernel: Zone ranges:
Aug 5 22:17:59.151093 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:17:59.151141 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Aug 5 22:17:59.151157 kernel: Normal empty
Aug 5 22:17:59.151173 kernel: Movable zone start for each node
Aug 5 22:17:59.151188 kernel: Early memory node ranges
Aug 5 22:17:59.151204 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:17:59.151219 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Aug 5 22:17:59.151281 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Aug 5 22:17:59.151301 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:17:59.151315 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:17:59.151328 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Aug 5 22:17:59.151346 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 5 22:17:59.151362 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:17:59.151377 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 5 22:17:59.151392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:17:59.151447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:17:59.151465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:17:59.151484 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:17:59.151498 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:17:59.151719 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:17:59.151788 kernel: TSC deadline timer available
Aug 5 22:17:59.151803 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 5 22:17:59.151818 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:17:59.151832 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Aug 5 22:17:59.151846 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:17:59.151860 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:17:59.151873 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 5 22:17:59.151947 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Aug 5 22:17:59.151962 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Aug 5 22:17:59.152039 kernel: pcpu-alloc: [0] 0 1
Aug 5 22:17:59.152064 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:17:59.152080 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:17:59.152097 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:17:59.152130 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:17:59.152146 kernel: random: crng init done
Aug 5 22:17:59.152160 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:17:59.152173 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 5 22:17:59.152236 kernel: Fallback order for Node 0: 0
Aug 5 22:17:59.152251 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Aug 5 22:17:59.152264 kernel: Policy zone: DMA32
Aug 5 22:17:59.152276 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:17:59.152290 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved)
Aug 5 22:17:59.152302 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 22:17:59.152320 kernel: Kernel/User page tables isolation: enabled
Aug 5 22:17:59.152335 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:17:59.152350 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:17:59.152486 kernel: Dynamic Preempt: voluntary
Aug 5 22:17:59.152505 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:17:59.152518 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:17:59.152530 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 22:17:59.152542 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:17:59.152555 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:17:59.152567 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:17:59.152583 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:17:59.152637 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 22:17:59.153056 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 5 22:17:59.153073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:17:59.153086 kernel: Console: colour VGA+ 80x25
Aug 5 22:17:59.153117 kernel: printk: console [ttyS0] enabled
Aug 5 22:17:59.153129 kernel: ACPI: Core revision 20230628
Aug 5 22:17:59.153142 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 5 22:17:59.153155 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:17:59.153172 kernel: x2apic enabled
Aug 5 22:17:59.153184 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:17:59.153208 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Aug 5 22:17:59.153224 kernel: Calibrating delay loop (skipped) preset value.. 4999.98 BogoMIPS (lpj=2499994)
Aug 5 22:17:59.153240 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 5 22:17:59.153255 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Aug 5 22:17:59.153272 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:17:59.153288 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:17:59.153305 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:17:59.153322 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:17:59.153340 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 5 22:17:59.153356 kernel: RETBleed: Vulnerable
Aug 5 22:17:59.153378 kernel: Speculative Store Bypass: Vulnerable
Aug 5 22:17:59.153393 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:17:59.153409 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:17:59.153423 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 5 22:17:59.153437 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:17:59.153451 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:17:59.153471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:17:59.153486 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 5 22:17:59.153499 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 5 22:17:59.153511 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 5 22:17:59.153526 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 5 22:17:59.153541 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 5 22:17:59.153618 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 5 22:17:59.153637 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:17:59.153651 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 5 22:17:59.153664 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 5 22:17:59.153676 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 5 22:17:59.153693 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 5 22:17:59.153706 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 5 22:17:59.153720 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 5 22:17:59.153733 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 5 22:17:59.153788 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:17:59.153802 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:17:59.153815 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:17:59.153829 kernel: SELinux: Initializing.
Aug 5 22:17:59.153842 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:17:59.153856 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:17:59.153870 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 5 22:17:59.153958 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:17:59.154180 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:17:59.154197 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:17:59.154211 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 5 22:17:59.154304 kernel: signal: max sigframe size: 3632
Aug 5 22:17:59.154323 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:17:59.154338 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:17:59.154383 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 5 22:17:59.154402 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:17:59.154416 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:17:59.154435 kernel: .... node #0, CPUs: #1
Aug 5 22:17:59.154483 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 5 22:17:59.154500 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 5 22:17:59.154514 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 22:17:59.154559 kernel: smpboot: Max logical packages: 1
Aug 5 22:17:59.154576 kernel: smpboot: Total of 2 processors activated (9999.97 BogoMIPS)
Aug 5 22:17:59.154589 kernel: devtmpfs: initialized
Aug 5 22:17:59.154603 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:17:59.154901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:17:59.154917 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 22:17:59.155486 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:17:59.155717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:17:59.155735 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:17:59.155749 kernel: audit: type=2000 audit(1722896278.224:1): state=initialized audit_enabled=0 res=1
Aug 5 22:17:59.155792 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:17:59.155810 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:17:59.155824 kernel: cpuidle: using governor menu
Aug 5 22:17:59.155844 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:17:59.155891 kernel: dca service started, version 1.12.1
Aug 5 22:17:59.155905 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:17:59.156160 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:17:59.156236 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:17:59.156255 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:17:59.156268 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:17:59.156782 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:17:59.156808 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:17:59.156828 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:17:59.156843 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:17:59.156858 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:17:59.156872 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 5 22:17:59.156888 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:17:59.156902 kernel: ACPI: Interpreter enabled
Aug 5 22:17:59.156915 kernel: ACPI: PM: (supports S0 S5)
Aug 5 22:17:59.156929 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:17:59.156942 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:17:59.156958 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:17:59.156971 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 5 22:17:59.156984 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:17:59.157271 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:17:59.157425 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 5 22:17:59.157558 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 5 22:17:59.157575 kernel: acpiphp: Slot [3] registered
Aug 5 22:17:59.157594 kernel: acpiphp: Slot [4] registered
Aug 5 22:17:59.157609 kernel: acpiphp: Slot [5] registered
Aug 5 22:17:59.157624 kernel: acpiphp: Slot [6] registered
Aug 5 22:17:59.157638 kernel: acpiphp: Slot [7] registered
Aug 5 22:17:59.157652 kernel: acpiphp: Slot [8] registered
Aug 5 22:17:59.157666 kernel: acpiphp: Slot [9] registered
Aug 5 22:17:59.157680 kernel: acpiphp: Slot [10] registered
Aug 5 22:17:59.157695 kernel: acpiphp: Slot [11] registered
Aug 5 22:17:59.157709 kernel: acpiphp: Slot [12] registered
Aug 5 22:17:59.157724 kernel: acpiphp: Slot [13] registered
Aug 5 22:17:59.157741 kernel: acpiphp: Slot [14] registered
Aug 5 22:17:59.157755 kernel: acpiphp: Slot [15] registered
Aug 5 22:17:59.157770 kernel: acpiphp: Slot [16] registered
Aug 5 22:17:59.157785 kernel: acpiphp: Slot [17] registered
Aug 5 22:17:59.157799 kernel: acpiphp: Slot [18] registered
Aug 5 22:17:59.157813 kernel: acpiphp: Slot [19] registered
Aug 5 22:17:59.157828 kernel: acpiphp: Slot [20] registered
Aug 5 22:17:59.157843 kernel: acpiphp: Slot [21] registered
Aug 5 22:17:59.157858 kernel: acpiphp: Slot [22] registered
Aug 5 22:17:59.157876 kernel: acpiphp: Slot [23] registered
Aug 5 22:17:59.157890 kernel: acpiphp: Slot [24] registered
Aug 5 22:17:59.157905 kernel: acpiphp: Slot [25] registered
Aug 5 22:17:59.157920 kernel: acpiphp: Slot [26] registered
Aug 5 22:17:59.157935 kernel: acpiphp: Slot [27] registered
Aug 5 22:17:59.157949 kernel: acpiphp: Slot [28] registered
Aug 5 22:17:59.157964 kernel: acpiphp: Slot [29] registered
Aug 5 22:17:59.157978 kernel: acpiphp: Slot [30] registered
Aug 5 22:17:59.157993 kernel: acpiphp: Slot [31] registered
Aug 5 22:17:59.158008 kernel: PCI host bridge to bus 0000:00
Aug 5 22:17:59.158167 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:17:59.158375 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:17:59.158506 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:17:59.158689 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 5 22:17:59.158901 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:17:59.159057 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:17:59.159223 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:17:59.160576 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 5 22:17:59.160742 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 5 22:17:59.160896 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Aug 5 22:17:59.161044 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 5 22:17:59.161210 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 5 22:17:59.161339 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 5 22:17:59.161482 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 5 22:17:59.161616 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 5 22:17:59.161750 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 5 22:17:59.161885 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 16601 usecs
Aug 5 22:17:59.162190 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 5 22:17:59.162592 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Aug 5 22:17:59.162736 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 5 22:17:59.162925 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:17:59.163255 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 22:17:59.163392 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Aug 5 22:17:59.163587 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 22:17:59.163714 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Aug 5 22:17:59.163732 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:17:59.163746 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:17:59.163765 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:17:59.163779 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:17:59.163792 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:17:59.163806 kernel: iommu: Default domain type: Translated
Aug 5 22:17:59.163819 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:17:59.163833 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:17:59.163845 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:17:59.163858 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:17:59.163870 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Aug 5 22:17:59.166014 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 5 22:17:59.166396 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 5 22:17:59.166884 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:17:59.166913 kernel: vgaarb: loaded
Aug 5 22:17:59.166955 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 5 22:17:59.166972 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 5 22:17:59.166989 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:17:59.167006 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:17:59.167052 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:17:59.167069 kernel: pnp: PnP ACPI init
Aug 5 22:17:59.167085 kernel: pnp: PnP ACPI: found 5 devices
Aug 5 22:17:59.167143 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:17:59.167159 kernel: NET: Registered PF_INET protocol family
Aug 5 22:17:59.167174 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:17:59.167217 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 5 22:17:59.167234 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:17:59.167247 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 5 22:17:59.167266 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 5 22:17:59.167306 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 5 22:17:59.167323 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:17:59.167339 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:17:59.167354 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:17:59.167391 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:17:59.167685 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:17:59.167812 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:17:59.167947 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:17:59.168792 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 5 22:17:59.169020 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:17:59.169043 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:17:59.169059 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 5 22:17:59.169075 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240933eba6e, max_idle_ns: 440795246008 ns
Aug 5 22:17:59.169090 kernel: clocksource: Switched to clocksource tsc
Aug 5 22:17:59.169133 kernel: Initialise system trusted keyrings
Aug 5 22:17:59.169148 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 5 22:17:59.169167 kernel: Key type asymmetric registered
Aug 5 22:17:59.169182 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:17:59.169197 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:17:59.169211 kernel: io scheduler mq-deadline registered
Aug 5 22:17:59.169391 kernel: io scheduler kyber registered
Aug 5 22:17:59.169407 kernel: io scheduler bfq registered
Aug 5 22:17:59.169420 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:17:59.169433 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:17:59.169448 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:17:59.169468 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:17:59.169482 kernel: i8042: Warning: Keylock active
Aug 5 22:17:59.169495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:17:59.169509 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:17:59.169744 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 5 22:17:59.169868 kernel: rtc_cmos 00:00: registered as rtc0
Aug 5 22:17:59.169982 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:17:58 UTC (1722896278)
Aug 5 22:17:59.170121 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 5 22:17:59.170140 kernel: intel_pstate: CPU model not supported
Aug 5 22:17:59.170156 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:17:59.170172 kernel: Segment Routing with IPv6
Aug 5 22:17:59.170188 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:17:59.170204 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:17:59.170220 kernel: Key type dns_resolver registered
Aug 5 22:17:59.170236 kernel: IPI shorthand broadcast: enabled
Aug 5 22:17:59.170252 kernel: sched_clock: Marking stable (736003467, 285833107)->(1118360163, -96523589)
Aug 5 22:17:59.170268 kernel: registered taskstats version 1
Aug 5 22:17:59.170289 kernel: Loading compiled-in X.509 certificates
Aug 5 22:17:59.170305 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55'
Aug 5 22:17:59.170320 kernel: Key type .fscrypt registered
Aug 5 22:17:59.170336 kernel: Key type fscrypt-provisioning registered
Aug 5 22:17:59.170353 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:17:59.170368 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:17:59.170383 kernel: ima: No architecture policies found
Aug 5 22:17:59.170396 kernel: clk: Disabling unused clocks
Aug 5 22:17:59.170415 kernel: Freeing unused kernel image (initmem) memory: 49328K
Aug 5 22:17:59.170431 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:17:59.170447 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:17:59.170463 kernel: Run /init as init process
Aug 5 22:17:59.170478 kernel: with arguments:
Aug 5 22:17:59.170494 kernel: /init
Aug 5 22:17:59.170509 kernel: with environment:
Aug 5 22:17:59.170524 kernel: HOME=/
Aug 5 22:17:59.170540 kernel: TERM=linux
Aug 5 22:17:59.170555 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:17:59.170583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:17:59.170616 systemd[1]: Detected virtualization amazon.
Aug 5 22:17:59.170636 systemd[1]: Detected architecture x86-64.
Aug 5 22:17:59.170653 systemd[1]: Running in initrd.
Aug 5 22:17:59.170673 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:17:59.170690 systemd[1]: Hostname set to .
Aug 5 22:17:59.170708 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:17:59.170725 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:17:59.170742 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Aug 5 22:17:59.170764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:17:59.170784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:17:59.170803 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:17:59.170821 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:17:59.170837 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:17:59.170859 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:17:59.170878 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:17:59.170896 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:17:59.170914 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:17:59.170932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:17:59.170950 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:17:59.170967 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:17:59.170987 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:17:59.171005 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:17:59.171023 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:17:59.171040 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:17:59.171057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:17:59.171075 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:17:59.171092 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:17:59.171125 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:17:59.171140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:17:59.171159 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:17:59.171177 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 5 22:17:59.171196 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:17:59.171214 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 5 22:17:59.171233 systemd[1]: Starting systemd-fsck-usr.service...
Aug 5 22:17:59.171259 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:17:59.171277 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:17:59.171296 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:17:59.171350 systemd-journald[178]: Collecting audit messages is disabled.
Aug 5 22:17:59.171391 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 5 22:17:59.171458 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:17:59.171480 systemd[1]: Finished systemd-fsck-usr.service.
Aug 5 22:17:59.171501 systemd-journald[178]: Journal started
Aug 5 22:17:59.171539 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2a249c65304f00445dffb7fb8c1876) is 4.8M, max 38.6M, 33.7M free.
Aug 5 22:17:59.175143 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:17:59.190361 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:17:59.198803 systemd-modules-load[179]: Inserted module 'overlay'
Aug 5 22:17:59.544971 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:17:59.545014 kernel: Bridge firewalling registered
Aug 5 22:17:59.206460 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:17:59.370307 systemd-modules-load[179]: Inserted module 'br_netfilter'
Aug 5 22:17:59.553873 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:17:59.559503 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:17:59.569645 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:17:59.585085 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:17:59.590340 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:17:59.591960 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:17:59.596491 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:17:59.622890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:17:59.625973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:17:59.634430 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:17:59.641711 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:17:59.651411 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 5 22:17:59.697224 dracut-cmdline[214]: dracut-dracut-053
Aug 5 22:17:59.698995 systemd-resolved[208]: Positive Trust Anchors:
Aug 5 22:17:59.699017 systemd-resolved[208]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:17:59.699070 systemd-resolved[208]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:17:59.714506 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5
Aug 5 22:17:59.726766 systemd-resolved[208]: Defaulting to hostname 'linux'.
Aug 5 22:17:59.729269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:17:59.731671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:17:59.815136 kernel: SCSI subsystem initialized
Aug 5 22:17:59.827126 kernel: Loading iSCSI transport class v2.0-870.
Aug 5 22:17:59.843125 kernel: iscsi: registered transport (tcp)
Aug 5 22:17:59.874140 kernel: iscsi: registered transport (qla4xxx)
Aug 5 22:17:59.874215 kernel: QLogic iSCSI HBA Driver
Aug 5 22:17:59.922371 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:17:59.929379 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 5 22:17:59.960130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 5 22:17:59.960208 kernel: device-mapper: uevent: version 1.0.3
Aug 5 22:17:59.960229 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 5 22:18:00.013141 kernel: raid6: avx512x4 gen() 16289 MB/s
Aug 5 22:18:00.030153 kernel: raid6: avx512x2 gen() 15500 MB/s
Aug 5 22:18:00.047154 kernel: raid6: avx512x1 gen() 16297 MB/s
Aug 5 22:18:00.064153 kernel: raid6: avx2x4 gen() 15344 MB/s
Aug 5 22:18:00.081153 kernel: raid6: avx2x2 gen() 16041 MB/s
Aug 5 22:18:00.098156 kernel: raid6: avx2x1 gen() 12937 MB/s
Aug 5 22:18:00.098244 kernel: raid6: using algorithm avx512x1 gen() 16297 MB/s
Aug 5 22:18:00.115518 kernel: raid6: .... xor() 15977 MB/s, rmw enabled
Aug 5 22:18:00.115670 kernel: raid6: using avx512x2 recovery algorithm
Aug 5 22:18:00.171213 kernel: xor: automatically using best checksumming function avx
Aug 5 22:18:00.437240 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 5 22:18:00.469581 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:18:00.487762 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:18:00.572319 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Aug 5 22:18:00.601179 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:18:00.623311 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 5 22:18:00.697851 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Aug 5 22:18:00.851889 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:18:00.864378 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:18:00.975187 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:18:00.995091 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 5 22:18:01.064495 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:18:01.082713 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:18:01.084462 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:18:01.089945 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:18:01.112342 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 5 22:18:01.198180 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:18:01.320726 kernel: ena 0000:00:05.0: ENA device version: 0.10
Aug 5 22:18:01.350867 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Aug 5 22:18:01.351076 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Aug 5 22:18:01.351284 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:85:d8:55:d3:71
Aug 5 22:18:01.390568 kernel: cryptd: max_cpu_qlen set to 1000
Aug 5 22:18:01.381902 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:18:01.449858 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:18:01.451356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:18:01.463147 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:18:01.478031 kernel: AVX2 version of gcm_enc/dec engaged.
Aug 5 22:18:01.478174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:18:01.498059 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:18:01.501177 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:18:01.528445 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:18:01.543201 kernel: AES CTR mode by8 optimization enabled
Aug 5 22:18:01.568223 kernel: nvme nvme0: pci function 0000:00:04.0
Aug 5 22:18:01.568501 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Aug 5 22:18:01.627330 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Aug 5 22:18:01.636279 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 5 22:18:01.636722 kernel: GPT:9289727 != 16777215
Aug 5 22:18:01.636752 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 5 22:18:01.636770 kernel: GPT:9289727 != 16777215
Aug 5 22:18:01.636913 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 5 22:18:01.636933 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:18:01.822204 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (444)
Aug 5 22:18:01.835135 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (442)
Aug 5 22:18:01.838096 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:18:01.850379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 5 22:18:01.899432 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Aug 5 22:18:01.920425 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Aug 5 22:18:01.923004 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:18:01.954217 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Aug 5 22:18:01.955434 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Aug 5 22:18:01.974556 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:18:01.990197 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 5 22:18:02.027266 disk-uuid[623]: Primary Header is updated.
Aug 5 22:18:02.027266 disk-uuid[623]: Secondary Entries is updated.
Aug 5 22:18:02.027266 disk-uuid[623]: Secondary Header is updated.
Aug 5 22:18:02.033125 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:18:02.041138 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:18:02.051132 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:18:03.059605 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Aug 5 22:18:03.060639 disk-uuid[624]: The operation has completed successfully.
Aug 5 22:18:03.256598 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 5 22:18:03.256726 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 5 22:18:03.279365 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 5 22:18:03.284876 sh[967]: Success
Aug 5 22:18:03.311129 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Aug 5 22:18:03.431181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 5 22:18:03.441546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 5 22:18:03.446604 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 5 22:18:03.470907 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12
Aug 5 22:18:03.470970 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:18:03.470990 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 5 22:18:03.471008 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 5 22:18:03.472124 kernel: BTRFS info (device dm-0): using free space tree
Aug 5 22:18:03.547136 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Aug 5 22:18:03.559253 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 5 22:18:03.560434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 5 22:18:03.571388 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 5 22:18:03.576346 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 5 22:18:03.607539 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:18:03.607618 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:18:03.607648 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:18:03.617047 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:18:03.637114 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 5 22:18:03.638944 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:18:03.660402 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 5 22:18:03.673350 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 5 22:18:03.848339 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:18:03.855585 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:18:03.890513 systemd-networkd[1160]: lo: Link UP
Aug 5 22:18:03.890525 systemd-networkd[1160]: lo: Gained carrier
Aug 5 22:18:03.893519 systemd-networkd[1160]: Enumeration completed
Aug 5 22:18:03.893951 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:18:03.893955 systemd-networkd[1160]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:18:03.896216 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:18:03.907208 systemd[1]: Reached target network.target - Network.
Aug 5 22:18:03.910059 systemd-networkd[1160]: eth0: Link UP
Aug 5 22:18:03.910070 systemd-networkd[1160]: eth0: Gained carrier
Aug 5 22:18:03.910086 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:18:03.921200 systemd-networkd[1160]: eth0: DHCPv4 address 172.31.26.236/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:18:04.147813 ignition[1070]: Ignition 2.18.0
Aug 5 22:18:04.147830 ignition[1070]: Stage: fetch-offline
Aug 5 22:18:04.148383 ignition[1070]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:04.148397 ignition[1070]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:04.148773 ignition[1070]: Ignition finished successfully
Aug 5 22:18:04.152829 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:18:04.166760 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 5 22:18:04.185782 ignition[1169]: Ignition 2.18.0
Aug 5 22:18:04.185806 ignition[1169]: Stage: fetch
Aug 5 22:18:04.186746 ignition[1169]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:04.186758 ignition[1169]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:04.189418 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:04.198283 ignition[1169]: PUT result: OK
Aug 5 22:18:04.200941 ignition[1169]: parsed url from cmdline: ""
Aug 5 22:18:04.200954 ignition[1169]: no config URL provided
Aug 5 22:18:04.200963 ignition[1169]: reading system config file "/usr/lib/ignition/user.ign"
Aug 5 22:18:04.200975 ignition[1169]: no config at "/usr/lib/ignition/user.ign"
Aug 5 22:18:04.200992 ignition[1169]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:04.205579 ignition[1169]: PUT result: OK
Aug 5 22:18:04.205664 ignition[1169]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Aug 5 22:18:04.210603 ignition[1169]: GET result: OK
Aug 5 22:18:04.210726 ignition[1169]: parsing config with SHA512: 3fe9d0b2384841d5c361597246f1142b5a7a88d1155371234aac21f2dd59984ebaf7976249e6ca31142c080609265c8d2073d8360b7c940e61876d3c34634683
Aug 5 22:18:04.216223 unknown[1169]: fetched base config from "system"
Aug 5 22:18:04.216239 unknown[1169]: fetched base config from "system"
Aug 5 22:18:04.216248 unknown[1169]: fetched user config from "aws"
Aug 5 22:18:04.219013 ignition[1169]: fetch: fetch complete
Aug 5 22:18:04.219025 ignition[1169]: fetch: fetch passed
Aug 5 22:18:04.219076 ignition[1169]: Ignition finished successfully
Aug 5 22:18:04.224083 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 5 22:18:04.230329 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 5 22:18:04.266985 ignition[1176]: Ignition 2.18.0
Aug 5 22:18:04.267000 ignition[1176]: Stage: kargs
Aug 5 22:18:04.267474 ignition[1176]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:04.267486 ignition[1176]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:04.267592 ignition[1176]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:04.270471 ignition[1176]: PUT result: OK
Aug 5 22:18:04.276067 ignition[1176]: kargs: kargs passed
Aug 5 22:18:04.277305 ignition[1176]: Ignition finished successfully
Aug 5 22:18:04.281096 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 5 22:18:04.291374 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 5 22:18:04.345116 ignition[1183]: Ignition 2.18.0
Aug 5 22:18:04.345129 ignition[1183]: Stage: disks
Aug 5 22:18:04.345537 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:04.345546 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:04.345689 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:04.346967 ignition[1183]: PUT result: OK
Aug 5 22:18:04.352010 ignition[1183]: disks: disks passed
Aug 5 22:18:04.352136 ignition[1183]: Ignition finished successfully
Aug 5 22:18:04.368254 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 5 22:18:04.371040 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 5 22:18:04.372479 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 5 22:18:04.375170 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:18:04.376389 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:18:04.378926 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:18:04.387526 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 5 22:18:04.435337 systemd-fsck[1192]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Aug 5 22:18:04.439184 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 5 22:18:04.448246 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 5 22:18:04.588140 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none.
Aug 5 22:18:04.587877 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 5 22:18:04.590225 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:18:04.611258 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:18:04.616278 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 5 22:18:04.617825 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 5 22:18:04.617896 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 5 22:18:04.617933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:18:04.629128 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1211)
Aug 5 22:18:04.632595 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:18:04.632675 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:18:04.632698 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:18:04.636657 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 5 22:18:04.641124 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:18:04.647376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 5 22:18:04.651472 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:18:05.036073 initrd-setup-root[1235]: cut: /sysroot/etc/passwd: No such file or directory
Aug 5 22:18:05.043150 initrd-setup-root[1242]: cut: /sysroot/etc/group: No such file or directory
Aug 5 22:18:05.048458 initrd-setup-root[1249]: cut: /sysroot/etc/shadow: No such file or directory
Aug 5 22:18:05.055273 initrd-setup-root[1256]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 5 22:18:05.271841 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 5 22:18:05.279683 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 5 22:18:05.290301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 5 22:18:05.292217 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:18:05.293504 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 5 22:18:05.330579 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 5 22:18:05.333602 ignition[1324]: INFO : Ignition 2.18.0
Aug 5 22:18:05.334815 ignition[1324]: INFO : Stage: mount
Aug 5 22:18:05.335811 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:05.336803 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:05.338131 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:05.340131 ignition[1324]: INFO : PUT result: OK
Aug 5 22:18:05.344251 ignition[1324]: INFO : mount: mount passed
Aug 5 22:18:05.344251 ignition[1324]: INFO : Ignition finished successfully
Aug 5 22:18:05.346979 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 5 22:18:05.355259 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 5 22:18:05.408294 systemd-networkd[1160]: eth0: Gained IPv6LL
Aug 5 22:18:05.593353 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 5 22:18:05.606123 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1336)
Aug 5 22:18:05.606187 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1
Aug 5 22:18:05.607421 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Aug 5 22:18:05.607450 kernel: BTRFS info (device nvme0n1p6): using free space tree
Aug 5 22:18:05.611167 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Aug 5 22:18:05.613429 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 5 22:18:05.651685 ignition[1353]: INFO : Ignition 2.18.0
Aug 5 22:18:05.651685 ignition[1353]: INFO : Stage: files
Aug 5 22:18:05.654309 ignition[1353]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:05.654309 ignition[1353]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:05.667668 ignition[1353]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:05.675702 ignition[1353]: INFO : PUT result: OK
Aug 5 22:18:05.689017 ignition[1353]: DEBUG : files: compiled without relabeling support, skipping
Aug 5 22:18:05.691938 ignition[1353]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 5 22:18:05.691938 ignition[1353]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 5 22:18:05.717483 ignition[1353]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 5 22:18:05.720151 ignition[1353]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 5 22:18:05.720151 ignition[1353]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 5 22:18:05.719213 unknown[1353]: wrote ssh authorized keys file for user: core
Aug 5 22:18:05.729783 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:18:05.732058 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 5 22:18:05.780701 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 5 22:18:05.911355 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:18:05.911355 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:18:05.915758 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1
Aug 5 22:18:06.269193 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 5 22:18:06.632083 ignition[1353]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
Aug 5 22:18:06.632083 ignition[1353]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 5 22:18:06.648597 ignition[1353]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:18:06.661597 ignition[1353]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 5 22:18:06.661597 ignition[1353]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 5 22:18:06.661597 ignition[1353]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Aug 5 22:18:06.668958 ignition[1353]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Aug 5 22:18:06.668958 ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:18:06.668958 ignition[1353]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:18:06.668958 ignition[1353]: INFO : files: files passed
Aug 5 22:18:06.668958 ignition[1353]: INFO : Ignition finished successfully
Aug 5 22:18:06.671171 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 5 22:18:06.683637 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 5 22:18:06.691306 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 5 22:18:06.696881 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 5 22:18:06.697432 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 5 22:18:06.741085 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:18:06.741085 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:18:06.745596 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 5 22:18:06.749346 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:18:06.751991 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 5 22:18:06.758279 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 5 22:18:06.810121 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 5 22:18:06.810281 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 5 22:18:06.815573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 5 22:18:06.818082 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:18:06.820250 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 5 22:18:06.832545 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 5 22:18:06.851707 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:18:06.859509 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 5 22:18:06.904215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:18:06.906255 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:18:06.910496 systemd[1]: Stopped target timers.target - Timer Units.
Aug 5 22:18:06.912687 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 5 22:18:06.912907 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 5 22:18:06.916636 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 5 22:18:06.924416 systemd[1]: Stopped target basic.target - Basic System.
Aug 5 22:18:06.926455 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 5 22:18:06.929473 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 5 22:18:06.931798 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 5 22:18:06.934503 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 5 22:18:06.936841 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 5 22:18:06.939496 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 5 22:18:06.941204 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 5 22:18:06.943141 systemd[1]: Stopped target swap.target - Swaps.
Aug 5 22:18:06.945436 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 5 22:18:06.945693 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:18:06.950242 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:18:06.953644 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:18:06.955469 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:18:06.956799 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:18:06.958524 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:18:06.958650 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:18:06.961661 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:18:06.961923 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:18:06.963507 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:18:06.963658 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:18:06.972560 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:18:06.977464 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:18:06.980330 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:18:06.984222 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:18:06.988809 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:18:06.990517 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:18:07.012487 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:18:07.014765 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:18:07.017418 ignition[1407]: INFO : Ignition 2.18.0
Aug 5 22:18:07.017418 ignition[1407]: INFO : Stage: umount
Aug 5 22:18:07.022417 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:18:07.022417 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:18:07.022417 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:18:07.022417 ignition[1407]: INFO : PUT result: OK
Aug 5 22:18:07.031054 ignition[1407]: INFO : umount: umount passed
Aug 5 22:18:07.036491 ignition[1407]: INFO : Ignition finished successfully
Aug 5 22:18:07.041320 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:18:07.042697 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:18:07.045361 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:18:07.046335 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:18:07.049335 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:18:07.050406 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:18:07.052582 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 5 22:18:07.052651 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 5 22:18:07.053841 systemd[1]: Stopped target network.target - Network.
Aug 5 22:18:07.055244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:18:07.055320 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:18:07.064111 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:18:07.065122 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:18:07.065192 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:18:07.066378 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:18:07.067908 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:18:07.069087 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:18:07.069170 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:18:07.070312 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:18:07.070396 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:18:07.076192 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:18:07.079819 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:18:07.084707 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:18:07.084777 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:18:07.090091 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:18:07.111749 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:18:07.117634 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:18:07.118697 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:18:07.118897 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:18:07.119164 systemd-networkd[1160]: eth0: DHCPv6 lease lost
Aug 5 22:18:07.121626 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:18:07.121781 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:18:07.125568 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:18:07.125690 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:18:07.134501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:18:07.134578 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:18:07.137596 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:18:07.137672 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:18:07.146229 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:18:07.147133 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:18:07.147195 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:18:07.150369 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:18:07.150418 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:18:07.154817 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:18:07.154864 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:18:07.158790 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:18:07.158850 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:18:07.162897 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:18:07.181733 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:18:07.181888 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:18:07.185048 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:18:07.185199 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:18:07.188703 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:18:07.188742 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:18:07.191814 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:18:07.191871 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:18:07.194449 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:18:07.194503 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:18:07.196712 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:18:07.196760 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:18:07.209326 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:18:07.210775 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:18:07.210847 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:18:07.212009 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:18:07.212077 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:18:07.213415 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:18:07.213459 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:18:07.214661 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:18:07.214701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:18:07.217167 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:18:07.217259 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:18:07.227761 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:18:07.227910 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:18:07.233455 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:18:07.257576 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:18:07.296956 systemd[1]: Switching root.
Aug 5 22:18:07.351756 systemd-journald[178]: Journal stopped
Aug 5 22:18:10.009820 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:18:10.009942 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:18:10.009972 kernel: SELinux: policy capability open_perms=1
Aug 5 22:18:10.010002 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:18:10.010033 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:18:10.010056 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:18:10.010080 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:18:10.013194 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:18:10.013221 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:18:10.013242 kernel: audit: type=1403 audit(1722896288.381:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:18:10.013275 systemd[1]: Successfully loaded SELinux policy in 57.591ms.
Aug 5 22:18:10.013308 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.914ms.
Aug 5 22:18:10.013339 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:18:10.013368 systemd[1]: Detected virtualization amazon.
Aug 5 22:18:10.013390 systemd[1]: Detected architecture x86-64.
Aug 5 22:18:10.013413 systemd[1]: Detected first boot.
Aug 5 22:18:10.013435 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:18:10.013458 zram_generator::config[1449]: No configuration found.
Aug 5 22:18:10.013482 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:18:10.013504 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:18:10.013527 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:18:10.013552 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:18:10.013579 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:18:10.013602 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:18:10.013625 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:18:10.013646 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:18:10.013666 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:18:10.013685 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:18:10.013706 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:18:10.013731 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:18:10.013754 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:18:10.013775 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:18:10.013794 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:18:10.013813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:18:10.013834 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:18:10.013857 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:18:10.013879 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:18:10.013901 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:18:10.013923 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:18:10.014012 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:18:10.014047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:18:10.014068 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:18:10.014233 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:18:10.014267 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:18:10.014290 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:18:10.014313 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:18:10.014334 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:18:10.014362 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:18:10.014385 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:18:10.014407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:18:10.014429 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:18:10.014451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:18:10.014469 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:18:10.014490 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:18:10.014512 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:18:10.014534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:10.014559 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:18:10.014579 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:18:10.014598 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:18:10.014620 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:18:10.014640 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:18:10.014660 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:18:10.014682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:18:10.014703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:18:10.014726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:18:10.014746 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:18:10.014766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:18:10.014787 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:18:10.014808 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:18:10.014828 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:18:10.014849 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:18:10.014869 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:18:10.014893 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:18:10.014913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:18:10.014934 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:18:10.014957 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:18:10.014979 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:18:10.015002 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:18:10.015024 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:18:10.015043 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:18:10.015063 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:18:10.015085 systemd[1]: Stopped verity-setup.service.
Aug 5 22:18:10.032200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:10.032243 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:18:10.032262 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:18:10.032284 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:18:10.032308 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:18:10.032334 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:18:10.032359 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:18:10.032391 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:18:10.032415 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:18:10.032436 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:18:10.032458 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:18:10.032484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:18:10.032509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:18:10.032540 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:18:10.032564 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:18:10.032592 kernel: ACPI: bus type drm_connector registered
Aug 5 22:18:10.032617 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:18:10.032643 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:18:10.032669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:18:10.032698 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:18:10.032724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:18:10.032749 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:18:10.032773 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:18:10.032801 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:18:10.032826 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:18:10.032852 kernel: loop: module loaded
Aug 5 22:18:10.032875 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:18:10.032901 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:18:10.032923 kernel: fuse: init (API version 7.39)
Aug 5 22:18:10.032946 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:18:10.032972 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:18:10.033087 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:18:10.033150 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:18:10.033222 systemd-journald[1530]: Collecting audit messages is disabled.
Aug 5 22:18:10.033266 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:18:10.033296 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:18:10.033321 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:18:10.033348 systemd-journald[1530]: Journal started
Aug 5 22:18:10.033402 systemd-journald[1530]: Runtime Journal (/run/log/journal/ec2a249c65304f00445dffb7fb8c1876) is 4.8M, max 38.6M, 33.7M free.
Aug 5 22:18:10.045568 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:18:10.045631 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:18:09.300381 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:18:10.052300 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:18:09.371852 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 5 22:18:09.372448 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:18:10.065970 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:18:10.062779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:18:10.062975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:18:10.070158 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:18:10.147932 kernel: loop0: detected capacity change from 0 to 139904
Aug 5 22:18:10.143214 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:18:10.154127 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:18:10.156988 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:18:10.159762 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:18:10.160358 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:18:10.162794 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:18:10.167305 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:18:10.180370 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:18:10.183753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:18:10.185376 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:18:10.193744 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:18:10.202311 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:18:10.226764 systemd-journald[1530]: Time spent on flushing to /var/log/journal/ec2a249c65304f00445dffb7fb8c1876 is 100.078ms for 969 entries.
Aug 5 22:18:10.226764 systemd-journald[1530]: System Journal (/var/log/journal/ec2a249c65304f00445dffb7fb8c1876) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:18:10.350500 systemd-journald[1530]: Received client request to flush runtime journal.
Aug 5 22:18:10.350556 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:18:10.350591 kernel: loop1: detected capacity change from 0 to 209816
Aug 5 22:18:10.248819 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Aug 5 22:18:10.248842 systemd-tmpfiles[1544]: ACLs are not supported, ignoring.
Aug 5 22:18:10.268681 udevadm[1587]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:18:10.295895 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:18:10.307305 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:18:10.310001 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:18:10.311080 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:18:10.360164 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:18:10.407412 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:18:10.419279 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:18:10.424190 kernel: loop2: detected capacity change from 0 to 60984
Aug 5 22:18:10.454944 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Aug 5 22:18:10.455415 systemd-tmpfiles[1598]: ACLs are not supported, ignoring.
Aug 5 22:18:10.463850 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:18:10.527130 kernel: loop3: detected capacity change from 0 to 80568
Aug 5 22:18:10.633242 kernel: loop4: detected capacity change from 0 to 139904
Aug 5 22:18:10.673136 kernel: loop5: detected capacity change from 0 to 209816
Aug 5 22:18:10.699132 kernel: loop6: detected capacity change from 0 to 60984
Aug 5 22:18:10.729426 kernel: loop7: detected capacity change from 0 to 80568
Aug 5 22:18:10.754919 (sd-merge)[1605]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 5 22:18:10.755544 (sd-merge)[1605]: Merged extensions into '/usr'.
Aug 5 22:18:10.769262 systemd[1]: Reloading requested from client PID 1554 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:18:10.769290 systemd[1]: Reloading...
Aug 5 22:18:10.950364 zram_generator::config[1629]: No configuration found.
Aug 5 22:18:11.275878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:18:11.424022 systemd[1]: Reloading finished in 654 ms.
Aug 5 22:18:11.460251 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:18:11.469481 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:18:11.478929 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:18:11.502804 systemd[1]: Reloading requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:18:11.502823 systemd[1]: Reloading...
Aug 5 22:18:11.565770 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:18:11.570049 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:18:11.571707 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:18:11.573772 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Aug 5 22:18:11.573971 systemd-tmpfiles[1678]: ACLs are not supported, ignoring.
Aug 5 22:18:11.599350 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:18:11.602296 systemd-tmpfiles[1678]: Skipping /boot
Aug 5 22:18:11.625674 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:18:11.625690 systemd-tmpfiles[1678]: Skipping /boot
Aug 5 22:18:11.636138 zram_generator::config[1704]: No configuration found.
Aug 5 22:18:11.832194 ldconfig[1546]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:18:11.849818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:18:11.916744 systemd[1]: Reloading finished in 412 ms.
Aug 5 22:18:11.937399 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:18:11.939046 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:18:11.943640 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:18:11.963334 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:18:11.969339 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:18:11.974467 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:18:11.979363 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:18:11.992337 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:18:12.003354 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:18:12.020174 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:18:12.025074 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.026422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:18:12.036491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:18:12.042404 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:18:12.047194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:18:12.049156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:18:12.049463 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.051001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:18:12.053318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:18:12.064874 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.066502 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:18:12.077048 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:18:12.078750 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:18:12.079482 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.108863 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:18:12.111020 systemd-udevd[1768]: Using default interface naming scheme 'v255'.
Aug 5 22:18:12.117781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.118465 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:18:12.129883 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:18:12.131717 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:18:12.134265 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:18:12.142627 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:18:12.143825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:18:12.145177 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:18:12.146881 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:18:12.156760 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:18:12.159199 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:18:12.165756 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:18:12.174124 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:18:12.175465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:18:12.177613 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:18:12.177817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:18:12.179524 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:18:12.179712 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:18:12.182504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:18:12.187731 augenrules[1790]: No rules
Aug 5 22:18:12.191481 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:18:12.200188 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:18:12.209245 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:18:12.223327 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:18:12.251187 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:18:12.252868 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:18:12.255716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:18:12.414368 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:18:12.438265 systemd-networkd[1802]: lo: Link UP
Aug 5 22:18:12.438276 systemd-networkd[1802]: lo: Gained carrier
Aug 5 22:18:12.440574 systemd-networkd[1802]: Enumeration completed
Aug 5 22:18:12.440682 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:18:12.456866 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1821)
Aug 5 22:18:12.455399 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:18:12.457162 systemd-resolved[1764]: Positive Trust Anchors:
Aug 5 22:18:12.457181 systemd-resolved[1764]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:18:12.457237 systemd-resolved[1764]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:18:12.464625 systemd-resolved[1764]: Defaulting to hostname 'linux'.
Aug 5 22:18:12.473077 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:18:12.476462 systemd[1]: Reached target network.target - Network.
Aug 5 22:18:12.478281 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:18:12.488268 (udev-worker)[1804]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:18:12.551136 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Aug 5 22:18:12.561418 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 5 22:18:12.566598 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:18:12.566670 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Aug 5 22:18:12.567656 systemd-networkd[1802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:18:12.567671 systemd-networkd[1802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:18:12.569134 kernel: ACPI: button: Sleep Button [SLPF]
Aug 5 22:18:12.575214 systemd-networkd[1802]: eth0: Link UP
Aug 5 22:18:12.575723 systemd-networkd[1802]: eth0: Gained carrier
Aug 5 22:18:12.575758 systemd-networkd[1802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:18:12.585213 systemd-networkd[1802]: eth0: DHCPv4 address 172.31.26.236/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:18:12.597018 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Aug 5 22:18:12.632158 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1805)
Aug 5 22:18:12.700150 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:18:12.756334 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:18:12.837269 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:18:12.845416 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:18:12.846484 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:18:12.854369 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:18:12.883122 lvm[1923]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:18:12.881257 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:18:12.920416 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:18:12.924029 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:18:12.931522 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:18:13.050175 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:18:13.051944 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:18:13.053401 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:18:13.055179 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:18:13.056484 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:18:13.056806 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:18:13.058460 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:18:13.059878 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:18:13.061626 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:18:13.061674 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:18:13.062709 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:18:13.066697 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:18:13.069893 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:18:13.076712 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:18:13.079081 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:18:13.080459 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:18:13.081603 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:18:13.082591 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:18:13.082614 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:18:13.088574 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:18:13.093365 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 5 22:18:13.096358 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:18:13.102269 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:18:13.107266 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:18:13.108391 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:18:13.116173 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:18:13.125511 systemd[1]: Started ntpd.service - Network Time Service.
Aug 5 22:18:13.130233 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:18:13.134261 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 5 22:18:13.139311 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:18:13.151333 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:18:13.157353 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:18:13.159844 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:18:13.160517 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:18:13.170417 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:18:13.174263 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:18:13.176521 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:18:13.204138 jq[1936]: false
Aug 5 22:18:13.210218 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:18:13.211203 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:18:13.232790 (ntainerd)[1955]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:18:13.257777 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:18:13.258050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:18:13.283752 jq[1946]: true
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found loop4
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found loop5
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found loop6
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found loop7
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p1
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p2
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p3
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found usr
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p4
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p6
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p7
Aug 5 22:18:13.308252 extend-filesystems[1937]: Found nvme0n1p9
Aug 5 22:18:13.361340 extend-filesystems[1937]: Checking size of /dev/nvme0n1p9
Aug 5 22:18:13.372620 update_engine[1945]: I0805 22:18:13.343832 1945 main.cc:92] Flatcar Update Engine starting
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: ----------------------------------------------------
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: corporation. Support and training for ntp-4 are
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: available at https://www.nwtime.org/support
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: ----------------------------------------------------
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: proto: precision = 0.097 usec (-23)
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: basedate set to 2024-07-24
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listen normally on 3 eth0 172.31.26.236:123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listen normally on 4 lo [::1]:123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: bind(21) AF_INET6 fe80::485:d8ff:fe55:d371%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: unable to create socket on eth0 (5) for fe80::485:d8ff:fe55:d371%2#123
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: failed to init interface for address fe80::485:d8ff:fe55:d371%2
Aug 5 22:18:13.379828 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:18:13.326893 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:18:13.321740 dbus-daemon[1935]: [system] SELinux support is enabled
Aug 5 22:18:13.396222 extend-filesystems[1937]: Resized partition /dev/nvme0n1p9
Aug 5 22:18:13.400384 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:18:13.400384 ntpd[1939]: 5 Aug 22:18:13 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:18:13.357503 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:18:13.338256 ntpd[1939]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting
Aug 5 22:18:13.357544 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:18:13.338379 ntpd[1939]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:18:13.407400 jq[1965]: true
Aug 5 22:18:13.362872 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:18:13.338393 ntpd[1939]: ----------------------------------------------------
Aug 5 22:18:13.362902 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:18:13.338403 ntpd[1939]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:18:13.434473 extend-filesystems[1983]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:18:13.477187 update_engine[1945]: I0805 22:18:13.417896 1945 update_check_scheduler.cc:74] Next update check in 8m42s
Aug 5 22:18:13.393975 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 5 22:18:13.338413 ntpd[1939]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:18:13.426312 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 5 22:18:13.338423 ntpd[1939]: corporation. Support and training for ntp-4 are
Aug 5 22:18:13.494624 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 5 22:18:13.427582 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:18:13.338433 ntpd[1939]: available at https://www.nwtime.org/support
Aug 5 22:18:13.458309 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:18:13.338443 ntpd[1939]: ----------------------------------------------------
Aug 5 22:18:13.485244 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:18:13.340657 ntpd[1939]: proto: precision = 0.097 usec (-23)
Aug 5 22:18:13.485502 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:18:13.341557 dbus-daemon[1935]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1802 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 5 22:18:13.348483 ntpd[1939]: basedate set to 2024-07-24
Aug 5 22:18:13.348506 ntpd[1939]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:18:13.365482 ntpd[1939]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:18:13.365553 ntpd[1939]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:18:13.370291 ntpd[1939]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:18:13.370343 ntpd[1939]: Listen normally on 3 eth0 172.31.26.236:123
Aug 5 22:18:13.370385 ntpd[1939]: Listen normally on 4 lo [::1]:123
Aug 5 22:18:13.370436 ntpd[1939]: bind(21) AF_INET6 fe80::485:d8ff:fe55:d371%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:18:13.370458 ntpd[1939]: unable to create socket on eth0 (5) for fe80::485:d8ff:fe55:d371%2#123
Aug 5 22:18:13.370475 ntpd[1939]: failed to init interface for address fe80::485:d8ff:fe55:d371%2
Aug 5 22:18:13.370521 ntpd[1939]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:18:13.384558 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:18:13.384595 ntpd[1939]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:18:13.402576 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 5 22:18:13.514408 tar[1956]: linux-amd64/helm
Aug 5 22:18:13.543697 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 5 22:18:13.624827 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 5 22:18:13.688002 extend-filesystems[1983]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 5 22:18:13.688002 extend-filesystems[1983]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:18:13.688002 extend-filesystems[1983]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 5 22:18:13.705041 extend-filesystems[1937]: Resized filesystem in /dev/nvme0n1p9
Aug 5 22:18:13.714148 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1820)
Aug 5 22:18:13.688803 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:18:13.714276 coreos-metadata[1934]: Aug 05 22:18:13.701 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:18:13.714276 coreos-metadata[1934]: Aug 05 22:18:13.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 5 22:18:13.689000 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:18:13.735977 coreos-metadata[1934]: Aug 05 22:18:13.734 INFO Fetch successful
Aug 5 22:18:13.735977 coreos-metadata[1934]: Aug 05 22:18:13.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 5 22:18:13.735977 coreos-metadata[1934]: Aug 05 22:18:13.735 INFO Fetch successful
Aug 5 22:18:13.735977 coreos-metadata[1934]: Aug 05 22:18:13.735 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 5 22:18:13.738352 coreos-metadata[1934]: Aug 05 22:18:13.736 INFO Fetch successful
Aug 5 22:18:13.738352 coreos-metadata[1934]: Aug 05 22:18:13.736 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 5 22:18:13.738485 bash[2012]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:18:13.755878 coreos-metadata[1934]: Aug 05 22:18:13.740 INFO Fetch successful
Aug 5 22:18:13.755878 coreos-metadata[1934]: Aug 05 22:18:13.741 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 5 22:18:13.743658 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:18:13.755455 systemd[1]: Starting sshkeys.service...
Aug 5 22:18:13.768496 coreos-metadata[1934]: Aug 05 22:18:13.764 INFO Fetch failed with 404: resource not found
Aug 5 22:18:13.768496 coreos-metadata[1934]: Aug 05 22:18:13.764 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 5 22:18:13.771000 coreos-metadata[1934]: Aug 05 22:18:13.768 INFO Fetch successful
Aug 5 22:18:13.771000 coreos-metadata[1934]: Aug 05 22:18:13.768 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 5 22:18:13.772436 coreos-metadata[1934]: Aug 05 22:18:13.772 INFO Fetch successful
Aug 5 22:18:13.772687 coreos-metadata[1934]: Aug 05 22:18:13.772 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 5 22:18:13.778473 coreos-metadata[1934]: Aug 05 22:18:13.778 INFO Fetch successful
Aug 5 22:18:13.778473 coreos-metadata[1934]: Aug 05 22:18:13.778 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 5 22:18:13.787693 coreos-metadata[1934]: Aug 05 22:18:13.779 INFO Fetch successful
Aug 5 22:18:13.787693 coreos-metadata[1934]: Aug 05 22:18:13.779 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 5 22:18:13.797450 coreos-metadata[1934]: Aug 05 22:18:13.788 INFO Fetch successful
Aug 5 22:18:13.802319 systemd-networkd[1802]: eth0: Gained IPv6LL
Aug 5 22:18:13.832947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:18:13.836374 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:18:13.846377 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 22:18:13.856338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:13.873304 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:18:13.908868 systemd-logind[1944]: Watching system buttons on /dev/input/event1 (Power Button)
Aug 5 22:18:13.929601 systemd-logind[1944]: Watching system buttons on /dev/input/event2 (Sleep Button)
Aug 5 22:18:13.929646 systemd-logind[1944]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:18:13.930071 systemd-logind[1944]: New seat seat0.
Aug 5 22:18:13.933122 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:18:13.948864 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 5 22:18:13.970070 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 5 22:18:14.040678 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 22:18:14.043030 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:18:14.044860 sshd_keygen[1969]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 5 22:18:14.097043 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:18:14.123822 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 5 22:18:14.126874 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 5 22:18:14.135636 dbus-daemon[1935]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1984 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 5 22:18:14.152460 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 5 22:18:14.171177 amazon-ssm-agent[2034]: Initializing new seelog logger
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: New Seelog Logger Creation Complete
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 processing appconfig overrides
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 processing appconfig overrides
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.178880 amazon-ssm-agent[2034]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.182818 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 processing appconfig overrides
Aug 5 22:18:14.182818 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO Proxy environment variables:
Aug 5 22:18:14.197134 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.197134 amazon-ssm-agent[2034]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:18:14.197134 amazon-ssm-agent[2034]: 2024/08/05 22:18:14 processing appconfig overrides
Aug 5 22:18:14.219172 polkitd[2077]: Started polkitd version 121
Aug 5 22:18:14.233135 polkitd[2077]: Loading rules from directory /etc/polkit-1/rules.d
Aug 5 22:18:14.234237 polkitd[2077]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 5 22:18:14.236391 polkitd[2077]: Finished loading, compiling and executing 2 rules
Aug 5 22:18:14.237511 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 5 22:18:14.237862 systemd[1]: Started polkit.service - Authorization Manager.
Aug 5 22:18:14.245009 polkitd[2077]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 5 22:18:14.250144 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 5 22:18:14.262118 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 5 22:18:14.274789 systemd[1]: Started sshd@0-172.31.26.236:22-139.178.89.65:52190.service - OpenSSH per-connection server daemon (139.178.89.65:52190).
Aug 5 22:18:14.288388 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO https_proxy:
Aug 5 22:18:14.361465 systemd-hostnamed[1984]: Hostname set to (transient)
Aug 5 22:18:14.365966 systemd-resolved[1764]: System hostname changed to 'ip-172-31-26-236'.
Aug 5 22:18:14.379607 systemd[1]: issuegen.service: Deactivated successfully.
Aug 5 22:18:14.379848 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 5 22:18:14.390675 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 5 22:18:14.419180 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO http_proxy:
Aug 5 22:18:14.424183 coreos-metadata[2045]: Aug 05 22:18:14.423 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:18:14.426326 coreos-metadata[2045]: Aug 05 22:18:14.424 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 5 22:18:14.426326 coreos-metadata[2045]: Aug 05 22:18:14.426 INFO Fetch successful
Aug 5 22:18:14.426326 coreos-metadata[2045]: Aug 05 22:18:14.426 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 5 22:18:14.430011 coreos-metadata[2045]: Aug 05 22:18:14.427 INFO Fetch successful
Aug 5 22:18:14.432901 unknown[2045]: wrote ssh authorized keys file for user: core
Aug 5 22:18:14.450218 locksmithd[1987]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:18:14.520189 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO no_proxy:
Aug 5 22:18:14.570546 update-ssh-keys[2151]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:18:14.572488 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 5 22:18:14.587093 systemd[1]: Finished sshkeys.service.
Aug 5 22:18:14.596059 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 5 22:18:14.608957 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 5 22:18:14.617805 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Aug 5 22:18:14.621375 systemd[1]: Reached target getty.target - Login Prompts.
Aug 5 22:18:14.630812 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO Checking if agent identity type OnPrem can be assumed
Aug 5 22:18:14.725448 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO Checking if agent identity type EC2 can be assumed
Aug 5 22:18:14.764136 sshd[2108]: Accepted publickey for core from 139.178.89.65 port 52190 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:14.774632 sshd[2108]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:14.835210 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO Agent will take identity from EC2
Aug 5 22:18:14.877133 containerd[1955]: time="2024-08-05T22:18:14.876168028Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17
Aug 5 22:18:14.883380 systemd-logind[1944]: New session 1 of user core.
Aug 5 22:18:14.889632 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Aug 5 22:18:14.902460 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 5 22:18:14.925252 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] Starting Core Agent
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [Registrar] Starting registrar module
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [EC2Identity] EC2 registration was successful.
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [CredentialRefresher] credentialRefresher has started
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [CredentialRefresher] Starting credentials refresher loop
Aug 5 22:18:14.938266 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Aug 5 22:18:14.942683 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 5 22:18:14.960889 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 5 22:18:14.976939 (systemd)[2179]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.001681299Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.001757376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.004473674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.004517830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.004879918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.004910563Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.005028527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.005237415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.005264472Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.005362789Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.007271 containerd[1955]: time="2024-08-05T22:18:15.005651360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.005677961Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.005695345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.006230990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.006255077Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.006335987Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:18:15.009447 containerd[1955]: time="2024-08-05T22:18:15.006353151Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017807788Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017867310Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017887719Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017932889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017956611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017972581Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.017990939Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018661328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018689276Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018710997Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018731602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018755518Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018781983Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.019459 containerd[1955]: time="2024-08-05T22:18:15.018988061Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019008305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019029414Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019051394Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019070333Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019088183Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:18:15.020133 containerd[1955]: time="2024-08-05T22:18:15.019251696Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021241261Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021323709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021433794Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021504635Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021638227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021665491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021684765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021733952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021754694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021778085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021829248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021848259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.021898824Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Aug 5 22:18:15.026473 containerd[1955]: time="2024-08-05T22:18:15.022432334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027032 amazon-ssm-agent[2034]: 2024-08-05 22:18:14 INFO [CredentialRefresher] Next credential rotation will be in 31.0916587865 minutes
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.022859269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.022888593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.022944290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.022970155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.023023775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.023052231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027085 containerd[1955]: time="2024-08-05T22:18:15.023070720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Aug 5 22:18:15.027349 containerd[1955]: time="2024-08-05T22:18:15.026173969Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Aug 5 22:18:15.027349 containerd[1955]: time="2024-08-05T22:18:15.026317737Z" level=info msg="Connect containerd service"
Aug 5 22:18:15.027349 containerd[1955]: time="2024-08-05T22:18:15.026370215Z" level=info msg="using legacy CRI server"
Aug 5 22:18:15.027349 containerd[1955]: time="2024-08-05T22:18:15.026384410Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 5 22:18:15.027349 containerd[1955]: time="2024-08-05T22:18:15.026505330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Aug 5 22:18:15.036454 containerd[1955]: time="2024-08-05T22:18:15.033834776Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 5 22:18:15.036454 containerd[1955]: time="2024-08-05T22:18:15.033919690Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Aug 5 22:18:15.036454 containerd[1955]: time="2024-08-05T22:18:15.033957396Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Aug 5 22:18:15.036454 containerd[1955]: time="2024-08-05T22:18:15.033978456Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Aug 5 22:18:15.036454 containerd[1955]: time="2024-08-05T22:18:15.034002153Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.038019089Z" level=info msg="Start subscribing containerd event"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.039513559Z" level=info msg="Start recovering state"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.039620368Z" level=info msg="Start event monitor"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.039646238Z" level=info msg="Start snapshots syncer"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.039660269Z" level=info msg="Start cni network conf syncer for default"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.039671437Z" level=info msg="Start streaming server"
Aug 5 22:18:15.040868 containerd[1955]: time="2024-08-05T22:18:15.038701756Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 5 22:18:15.045020 containerd[1955]: time="2024-08-05T22:18:15.041784295Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 5 22:18:15.041985 systemd[1]: Started containerd.service - containerd container runtime.
Aug 5 22:18:15.045372 containerd[1955]: time="2024-08-05T22:18:15.045345634Z" level=info msg="containerd successfully booted in 0.171317s"
Aug 5 22:18:15.219081 systemd[2179]: Queued start job for default target default.target.
Aug 5 22:18:15.225512 systemd[2179]: Created slice app.slice - User Application Slice.
Aug 5 22:18:15.225550 systemd[2179]: Reached target paths.target - Paths.
Aug 5 22:18:15.225670 systemd[2179]: Reached target timers.target - Timers.
Aug 5 22:18:15.229951 systemd[2179]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 5 22:18:15.259973 systemd[2179]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 5 22:18:15.260178 systemd[2179]: Reached target sockets.target - Sockets.
Aug 5 22:18:15.260202 systemd[2179]: Reached target basic.target - Basic System.
Aug 5 22:18:15.260265 systemd[2179]: Reached target default.target - Main User Target.
Aug 5 22:18:15.260310 systemd[2179]: Startup finished in 265ms.
Aug 5 22:18:15.261024 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 5 22:18:15.274328 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 5 22:18:15.452073 systemd[1]: Started sshd@1-172.31.26.236:22-139.178.89.65:52200.service - OpenSSH per-connection server daemon (139.178.89.65:52200).
Aug 5 22:18:15.569226 tar[1956]: linux-amd64/LICENSE
Aug 5 22:18:15.569226 tar[1956]: linux-amd64/README.md
Aug 5 22:18:15.607555 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 5 22:18:15.630537 sshd[2192]: Accepted publickey for core from 139.178.89.65 port 52200 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:15.630837 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:15.647838 systemd-logind[1944]: New session 2 of user core.
Aug 5 22:18:15.655407 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 5 22:18:15.794455 sshd[2192]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:15.799200 systemd[1]: sshd@1-172.31.26.236:22-139.178.89.65:52200.service: Deactivated successfully.
Aug 5 22:18:15.802798 systemd[1]: session-2.scope: Deactivated successfully.
Aug 5 22:18:15.805207 systemd-logind[1944]: Session 2 logged out. Waiting for processes to exit.
Aug 5 22:18:15.807624 systemd-logind[1944]: Removed session 2.
Aug 5 22:18:15.839977 systemd[1]: Started sshd@2-172.31.26.236:22-139.178.89.65:52210.service - OpenSSH per-connection server daemon (139.178.89.65:52210).
Aug 5 22:18:15.965053 amazon-ssm-agent[2034]: 2024-08-05 22:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Aug 5 22:18:16.045647 sshd[2202]: Accepted publickey for core from 139.178.89.65 port 52210 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:16.058222 sshd[2202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:16.071377 amazon-ssm-agent[2034]: 2024-08-05 22:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2205) started
Aug 5 22:18:16.104591 systemd-logind[1944]: New session 3 of user core.
Aug 5 22:18:16.109680 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 5 22:18:16.172642 amazon-ssm-agent[2034]: 2024-08-05 22:18:15 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Aug 5 22:18:16.250332 sshd[2202]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:16.261278 systemd-logind[1944]: Session 3 logged out. Waiting for processes to exit.
Aug 5 22:18:16.262345 systemd[1]: sshd@2-172.31.26.236:22-139.178.89.65:52210.service: Deactivated successfully.
Aug 5 22:18:16.276590 systemd[1]: session-3.scope: Deactivated successfully.
Aug 5 22:18:16.302376 systemd-logind[1944]: Removed session 3.
Aug 5 22:18:16.309346 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:16.312515 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 5 22:18:16.314955 systemd[1]: Startup finished in 917ms (kernel) + 9.615s (initrd) + 7.989s (userspace) = 18.522s.
Aug 5 22:18:16.322606 (kubelet)[2223]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:18:16.349335 ntpd[1939]: Listen normally on 6 eth0 [fe80::485:d8ff:fe55:d371%2]:123
Aug 5 22:18:16.354191 ntpd[1939]: 5 Aug 22:18:16 ntpd[1939]: Listen normally on 6 eth0 [fe80::485:d8ff:fe55:d371%2]:123
Aug 5 22:18:17.085225 kubelet[2223]: E0805 22:18:17.085145 2223 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:18:17.088222 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:18:17.088432 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:18:17.088792 systemd[1]: kubelet.service: Consumed 1.068s CPU time.
Aug 5 22:18:21.098746 systemd-resolved[1764]: Clock change detected. Flushing caches.
Aug 5 22:18:27.040999 systemd[1]: Started sshd@3-172.31.26.236:22-139.178.89.65:36666.service - OpenSSH per-connection server daemon (139.178.89.65:36666).
Aug 5 22:18:27.243496 sshd[2237]: Accepted publickey for core from 139.178.89.65 port 36666 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:27.245464 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:27.262082 systemd-logind[1944]: New session 4 of user core.
Aug 5 22:18:27.266111 systemd[1]: Started session-4.scope - Session 4 of User core.
Aug 5 22:18:27.405488 sshd[2237]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:27.409575 systemd[1]: sshd@3-172.31.26.236:22-139.178.89.65:36666.service: Deactivated successfully.
Aug 5 22:18:27.413157 systemd[1]: session-4.scope: Deactivated successfully.
Aug 5 22:18:27.418192 systemd-logind[1944]: Session 4 logged out. Waiting for processes to exit.
Aug 5 22:18:27.425339 systemd-logind[1944]: Removed session 4.
Aug 5 22:18:27.450302 systemd[1]: Started sshd@4-172.31.26.236:22-139.178.89.65:36670.service - OpenSSH per-connection server daemon (139.178.89.65:36670).
Aug 5 22:18:27.608277 sshd[2244]: Accepted publickey for core from 139.178.89.65 port 36670 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:27.613472 sshd[2244]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:27.619395 systemd-logind[1944]: New session 5 of user core.
Aug 5 22:18:27.631117 systemd[1]: Started session-5.scope - Session 5 of User core.
Aug 5 22:18:27.744529 sshd[2244]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:27.750372 systemd[1]: sshd@4-172.31.26.236:22-139.178.89.65:36670.service: Deactivated successfully.
Aug 5 22:18:27.755507 systemd[1]: session-5.scope: Deactivated successfully.
Aug 5 22:18:27.758795 systemd-logind[1944]: Session 5 logged out. Waiting for processes to exit.
Aug 5 22:18:27.760393 systemd-logind[1944]: Removed session 5.
Aug 5 22:18:27.788296 systemd[1]: Started sshd@5-172.31.26.236:22-139.178.89.65:36680.service - OpenSSH per-connection server daemon (139.178.89.65:36680).
Aug 5 22:18:27.931261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:18:27.937165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:27.964745 sshd[2251]: Accepted publickey for core from 139.178.89.65 port 36680 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:27.966276 sshd[2251]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:27.975302 systemd-logind[1944]: New session 6 of user core.
Aug 5 22:18:27.980094 systemd[1]: Started session-6.scope - Session 6 of User core.
Aug 5 22:18:28.097340 sshd[2251]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:28.102456 systemd-logind[1944]: Session 6 logged out. Waiting for processes to exit.
Aug 5 22:18:28.103217 systemd[1]: sshd@5-172.31.26.236:22-139.178.89.65:36680.service: Deactivated successfully.
Aug 5 22:18:28.105443 systemd[1]: session-6.scope: Deactivated successfully.
Aug 5 22:18:28.106451 systemd-logind[1944]: Removed session 6.
Aug 5 22:18:28.133420 systemd[1]: Started sshd@6-172.31.26.236:22-139.178.89.65:36690.service - OpenSSH per-connection server daemon (139.178.89.65:36690).
Aug 5 22:18:28.294777 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 36690 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:28.297685 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:28.303979 systemd-logind[1944]: New session 7 of user core.
Aug 5 22:18:28.312112 systemd[1]: Started session-7.scope - Session 7 of User core.
Aug 5 22:18:28.346258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:28.352698 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:18:28.448698 kubelet[2269]: E0805 22:18:28.448640 2269 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:18:28.454235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:18:28.454421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:18:28.460157 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Aug 5 22:18:28.460517 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:18:28.470953 sudo[2276]: pam_unix(sudo:session): session closed for user root
Aug 5 22:18:28.495114 sshd[2261]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:28.503750 systemd[1]: sshd@6-172.31.26.236:22-139.178.89.65:36690.service: Deactivated successfully.
Aug 5 22:18:28.506122 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:18:28.507689 systemd-logind[1944]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:18:28.509054 systemd-logind[1944]: Removed session 7.
Aug 5 22:18:28.531305 systemd[1]: Started sshd@7-172.31.26.236:22-139.178.89.65:36704.service - OpenSSH per-connection server daemon (139.178.89.65:36704).
Aug 5 22:18:28.689161 sshd[2283]: Accepted publickey for core from 139.178.89.65 port 36704 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:28.689773 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:28.695244 systemd-logind[1944]: New session 8 of user core.
Aug 5 22:18:28.706128 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 22:18:28.821421 sudo[2287]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Aug 5 22:18:28.821950 sudo[2287]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:18:28.826246 sudo[2287]: pam_unix(sudo:session): session closed for user root
Aug 5 22:18:28.833952 sudo[2286]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Aug 5 22:18:28.835267 sudo[2286]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:18:28.861331 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Aug 5 22:18:28.863747 auditctl[2290]: No rules
Aug 5 22:18:28.865263 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 5 22:18:28.865535 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Aug 5 22:18:28.871440 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:18:28.910353 augenrules[2308]: No rules
Aug 5 22:18:28.911960 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:18:28.913324 sudo[2286]: pam_unix(sudo:session): session closed for user root
Aug 5 22:18:28.936082 sshd[2283]: pam_unix(sshd:session): session closed for user core
Aug 5 22:18:28.939966 systemd[1]: sshd@7-172.31.26.236:22-139.178.89.65:36704.service: Deactivated successfully.
Aug 5 22:18:28.949507 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 22:18:28.952902 systemd-logind[1944]: Session 8 logged out. Waiting for processes to exit.
Aug 5 22:18:28.981360 systemd[1]: Started sshd@8-172.31.26.236:22-139.178.89.65:36706.service - OpenSSH per-connection server daemon (139.178.89.65:36706).
Aug 5 22:18:28.982685 systemd-logind[1944]: Removed session 8.
Aug 5 22:18:29.138973 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 36706 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:18:29.140753 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:18:29.149932 systemd-logind[1944]: New session 9 of user core.
Aug 5 22:18:29.152067 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 5 22:18:29.258550 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Aug 5 22:18:29.258949 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Aug 5 22:18:29.522715 systemd[1]: Starting docker.service - Docker Application Container Engine...
Aug 5 22:18:29.543407 (dockerd)[2328]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Aug 5 22:18:30.071661 dockerd[2328]: time="2024-08-05T22:18:30.071601514Z" level=info msg="Starting up"
Aug 5 22:18:30.163660 dockerd[2328]: time="2024-08-05T22:18:30.163613347Z" level=info msg="Loading containers: start."
Aug 5 22:18:30.342081 kernel: Initializing XFRM netlink socket
Aug 5 22:18:30.404405 (udev-worker)[2339]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:18:30.546551 systemd-networkd[1802]: docker0: Link UP
Aug 5 22:18:30.567604 dockerd[2328]: time="2024-08-05T22:18:30.567551455Z" level=info msg="Loading containers: done."
Aug 5 22:18:30.731855 dockerd[2328]: time="2024-08-05T22:18:30.731777985Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Aug 5 22:18:30.732113 dockerd[2328]: time="2024-08-05T22:18:30.732084830Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9
Aug 5 22:18:30.732247 dockerd[2328]: time="2024-08-05T22:18:30.732224622Z" level=info msg="Daemon has completed initialization"
Aug 5 22:18:30.782967 dockerd[2328]: time="2024-08-05T22:18:30.782599264Z" level=info msg="API listen on /run/docker.sock"
Aug 5 22:18:30.782829 systemd[1]: Started docker.service - Docker Application Container Engine.
Aug 5 22:18:32.006191 containerd[1955]: time="2024-08-05T22:18:32.006143791Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\""
Aug 5 22:18:32.698320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584260961.mount: Deactivated successfully.
Aug 5 22:18:34.872252 containerd[1955]: time="2024-08-05T22:18:34.872198433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:34.874662 containerd[1955]: time="2024-08-05T22:18:34.874466638Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527317"
Aug 5 22:18:34.876085 containerd[1955]: time="2024-08-05T22:18:34.876045021Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:34.879909 containerd[1955]: time="2024-08-05T22:18:34.879596837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:34.881051 containerd[1955]: time="2024-08-05T22:18:34.880843528Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 2.87465426s"
Aug 5 22:18:34.881051 containerd[1955]: time="2024-08-05T22:18:34.880903083Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\""
Aug 5 22:18:34.909512 containerd[1955]: time="2024-08-05T22:18:34.909464142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\""
Aug 5 22:18:37.071020 containerd[1955]: time="2024-08-05T22:18:37.070963190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:37.072863 containerd[1955]: time="2024-08-05T22:18:37.072657937Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847067"
Aug 5 22:18:37.074903 containerd[1955]: time="2024-08-05T22:18:37.074712182Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:37.078410 containerd[1955]: time="2024-08-05T22:18:37.078347247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:37.079892 containerd[1955]: time="2024-08-05T22:18:37.079623347Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 2.170117069s"
Aug 5 22:18:37.079892 containerd[1955]: time="2024-08-05T22:18:37.079667323Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\""
Aug 5 22:18:37.106373 containerd[1955]: time="2024-08-05T22:18:37.106337823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\""
Aug 5 22:18:38.610324 containerd[1955]: time="2024-08-05T22:18:38.610265914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:38.611903 containerd[1955]: time="2024-08-05T22:18:38.611795943Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097295"
Aug 5 22:18:38.613562 containerd[1955]: time="2024-08-05T22:18:38.613502054Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:38.616789 containerd[1955]: time="2024-08-05T22:18:38.616726457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:38.618572 containerd[1955]: time="2024-08-05T22:18:38.617907294Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.511530255s"
Aug 5 22:18:38.618572 containerd[1955]: time="2024-08-05T22:18:38.617954639Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\""
Aug 5 22:18:38.642416 containerd[1955]: time="2024-08-05T22:18:38.642379764Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\""
Aug 5 22:18:38.677805 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:18:38.683291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:39.271119 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:39.284704 (kubelet)[2542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:18:39.413835 kubelet[2542]: E0805 22:18:39.413672 2542 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:18:39.427368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:18:39.428503 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:18:40.311277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756655270.mount: Deactivated successfully.
Aug 5 22:18:41.141943 containerd[1955]: time="2024-08-05T22:18:41.141870018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.161243 containerd[1955]: time="2024-08-05T22:18:41.161138670Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303769"
Aug 5 22:18:41.172507 containerd[1955]: time="2024-08-05T22:18:41.172425866Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.189763 containerd[1955]: time="2024-08-05T22:18:41.189713999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.192981 containerd[1955]: time="2024-08-05T22:18:41.190699854Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 2.548231656s"
Aug 5 22:18:41.192981 containerd[1955]: time="2024-08-05T22:18:41.190759073Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\""
Aug 5 22:18:41.224092 containerd[1955]: time="2024-08-05T22:18:41.224048805Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:18:41.752084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585588963.mount: Deactivated successfully.
Aug 5 22:18:41.761447 containerd[1955]: time="2024-08-05T22:18:41.761302981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.762904 containerd[1955]: time="2024-08-05T22:18:41.762826818Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:18:41.766104 containerd[1955]: time="2024-08-05T22:18:41.764942390Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.769601 containerd[1955]: time="2024-08-05T22:18:41.769538118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:41.770386 containerd[1955]: time="2024-08-05T22:18:41.770346890Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 546.250036ms"
Aug 5 22:18:41.770494 containerd[1955]: time="2024-08-05T22:18:41.770391774Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:18:41.799975 containerd[1955]: time="2024-08-05T22:18:41.799928590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Aug 5 22:18:42.362028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount636171218.mount: Deactivated successfully.
Aug 5 22:18:45.156787 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 5 22:18:45.264861 containerd[1955]: time="2024-08-05T22:18:45.264785324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:45.281508 containerd[1955]: time="2024-08-05T22:18:45.281244204Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Aug 5 22:18:45.299921 containerd[1955]: time="2024-08-05T22:18:45.299528358Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:45.314828 containerd[1955]: time="2024-08-05T22:18:45.314749067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:45.320292 containerd[1955]: time="2024-08-05T22:18:45.320073741Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.52010137s"
Aug 5 22:18:45.320292 containerd[1955]: time="2024-08-05T22:18:45.320130411Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Aug 5 22:18:45.389295 containerd[1955]: time="2024-08-05T22:18:45.389258948Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\""
Aug 5 22:18:45.938586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2547037966.mount: Deactivated successfully.
Aug 5 22:18:47.039626 containerd[1955]: time="2024-08-05T22:18:47.039570893Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:47.041104 containerd[1955]: time="2024-08-05T22:18:47.040976967Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749"
Aug 5 22:18:47.042992 containerd[1955]: time="2024-08-05T22:18:47.042954844Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:47.048028 containerd[1955]: time="2024-08-05T22:18:47.046830931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:18:47.048028 containerd[1955]: time="2024-08-05T22:18:47.047868449Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.658570108s"
Aug 5 22:18:47.048028 containerd[1955]: time="2024-08-05T22:18:47.047925356Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\""
Aug 5 22:18:49.428633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Aug 5 22:18:49.442097 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:50.008448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:50.016523 (kubelet)[2701]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:18:50.106939 kubelet[2701]: E0805 22:18:50.103306 2701 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:18:50.107355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:18:50.107553 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:18:50.939386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:50.953288 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:50.990269 systemd[1]: Reloading requested from client PID 2715 ('systemctl') (unit session-9.scope)...
Aug 5 22:18:50.990288 systemd[1]: Reloading...
Aug 5 22:18:51.172905 zram_generator::config[2753]: No configuration found.
Aug 5 22:18:51.313707 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:18:51.414969 systemd[1]: Reloading finished in 424 ms.
Aug 5 22:18:51.475221 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:18:51.475360 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:18:51.476047 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:51.481239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:18:52.009708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:18:52.025564 (kubelet)[2810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:18:52.114276 kubelet[2810]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:18:52.114276 kubelet[2810]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:18:52.114276 kubelet[2810]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:18:52.114742 kubelet[2810]: I0805 22:18:52.114328 2810 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:18:52.526031 kubelet[2810]: I0805 22:18:52.525994 2810 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:18:52.526031 kubelet[2810]: I0805 22:18:52.526025 2810 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:18:52.526335 kubelet[2810]: I0805 22:18:52.526313 2810 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:18:52.556917 kubelet[2810]: I0805 22:18:52.555852 2810 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:18:52.557146 kubelet[2810]: E0805 22:18:52.557120 2810 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.573116 kubelet[2810]: I0805 22:18:52.573078 2810 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:18:52.574421 kubelet[2810]: I0805 22:18:52.574394 2810 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:18:52.574702 kubelet[2810]: I0805 22:18:52.574676 2810 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:18:52.575555 kubelet[2810]: I0805 22:18:52.575517 2810 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:18:52.575555 kubelet[2810]: I0805 22:18:52.575553 2810 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:18:52.576482 kubelet[2810]: I0805 22:18:52.576457 2810 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:18:52.579215 kubelet[2810]: I0805 22:18:52.579191 2810 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:18:52.579313 kubelet[2810]: I0805 22:18:52.579223 2810 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:18:52.579313 kubelet[2810]: I0805 22:18:52.579294 2810 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:18:52.579386 kubelet[2810]: I0805 22:18:52.579315 2810 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:18:52.582909 kubelet[2810]: I0805 22:18:52.582135 2810 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:18:52.588902 kubelet[2810]: W0805 22:18:52.587749 2810 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:18:52.592381 kubelet[2810]: W0805 22:18:52.592307 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.26.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-236&limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.592381 kubelet[2810]: E0805 22:18:52.592382 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-236&limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.592565 kubelet[2810]: W0805 22:18:52.592471 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.26.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.592565 kubelet[2810]: E0805 22:18:52.592518 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.594279 kubelet[2810]: I0805 22:18:52.594253 2810 server.go:1232] "Started kubelet"
Aug 5 22:18:52.596293 kubelet[2810]: I0805 22:18:52.596272 2810 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:18:52.596768 kubelet[2810]: I0805 22:18:52.596751 2810 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:18:52.596941 kubelet[2810]: I0805 22:18:52.596930 2810 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:18:52.598264 kubelet[2810]: I0805 22:18:52.598246 2810 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:18:52.602140 kubelet[2810]: I0805 22:18:52.598268 2810 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:18:52.606001 kubelet[2810]: E0805 22:18:52.605970 2810 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:18:52.606179 kubelet[2810]: E0805 22:18:52.606166 2810 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:18:52.607525 kubelet[2810]: E0805 22:18:52.607397 2810 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-26-236.17e8f516111a0b3c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-26-236", UID:"ip-172-31-26-236", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-26-236"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 18, 52, 594219836, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 18, 52, 594219836, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-26-236"}': 'Post "https://172.31.26.236:6443/api/v1/namespaces/default/events": dial tcp 172.31.26.236:6443: connect: connection refused'(may retry after sleeping)
Aug 5 22:18:52.608304 kubelet[2810]: I0805 22:18:52.607821 2810 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:18:52.608304 kubelet[2810]: I0805 22:18:52.607946 2810 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:18:52.608304 kubelet[2810]: I0805 22:18:52.608023 2810 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:18:52.608467 kubelet[2810]: W0805 22:18:52.608423 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.26.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.608529 kubelet[2810]: E0805 22:18:52.608484 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.610288 kubelet[2810]: E0805 22:18:52.610266 2810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": dial tcp 172.31.26.236:6443: connect: connection refused" interval="200ms"
Aug 5 22:18:52.648963 kubelet[2810]: I0805 22:18:52.645987 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:18:52.648963 kubelet[2810]: I0805 22:18:52.647495 2810 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:18:52.648963 kubelet[2810]: I0805 22:18:52.647518 2810 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:18:52.648963 kubelet[2810]: I0805 22:18:52.647540 2810 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:18:52.648963 kubelet[2810]: E0805 22:18:52.647599 2810 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:18:52.658573 kubelet[2810]: W0805 22:18:52.658510 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.26.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.658698 kubelet[2810]: E0805 22:18:52.658582 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:52.662236 kubelet[2810]: I0805 22:18:52.662217 2810 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:18:52.662430 kubelet[2810]: I0805 22:18:52.662350 2810 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:18:52.662499 kubelet[2810]: I0805 22:18:52.662494 2810 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:18:52.682351 kubelet[2810]: I0805 22:18:52.682316 2810 policy_none.go:49] "None policy: Start"
Aug 5 22:18:52.683568 kubelet[2810]: I0805 22:18:52.683272 2810 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:18:52.683568 kubelet[2810]: I0805 22:18:52.683298 2810 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:18:52.708548 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:18:52.710989 kubelet[2810]: I0805 22:18:52.710964 2810 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236"
Aug 5 22:18:52.711420 kubelet[2810]: E0805 22:18:52.711402 2810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.236:6443/api/v1/nodes\": dial tcp 172.31.26.236:6443: connect: connection refused" node="ip-172-31-26-236"
Aug 5 22:18:52.722812 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:18:52.745110 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:18:52.749173 kubelet[2810]: I0805 22:18:52.747947 2810 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:18:52.749517 kubelet[2810]: I0805 22:18:52.749485 2810 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:18:52.753275 kubelet[2810]: E0805 22:18:52.753250 2810 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-26-236\" not found"
Aug 5 22:18:52.756436 kubelet[2810]: I0805 22:18:52.756063 2810 topology_manager.go:215] "Topology Admit Handler" podUID="a34e21104e78c02a105905f8dfdc43ee" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-236"
Aug 5 22:18:52.763808 kubelet[2810]: I0805 22:18:52.763776 2810 topology_manager.go:215] "Topology Admit Handler" podUID="ca8512a50d7bc8c8626a926667b07f5a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.768854 kubelet[2810]: I0805 22:18:52.768816 2810 topology_manager.go:215] "Topology Admit Handler" podUID="d4964b75e14f2aa99b370f247b330d8a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-236"
Aug 5 22:18:52.779203 systemd[1]: Created slice kubepods-burstable-poda34e21104e78c02a105905f8dfdc43ee.slice - libcontainer container kubepods-burstable-poda34e21104e78c02a105905f8dfdc43ee.slice.
Aug 5 22:18:52.807171 systemd[1]: Created slice kubepods-burstable-podca8512a50d7bc8c8626a926667b07f5a.slice - libcontainer container kubepods-burstable-podca8512a50d7bc8c8626a926667b07f5a.slice.
Aug 5 22:18:52.811623 kubelet[2810]: E0805 22:18:52.811591 2810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": dial tcp 172.31.26.236:6443: connect: connection refused" interval="400ms"
Aug 5 22:18:52.816931 systemd[1]: Created slice kubepods-burstable-podd4964b75e14f2aa99b370f247b330d8a.slice - libcontainer container kubepods-burstable-podd4964b75e14f2aa99b370f247b330d8a.slice.
Aug 5 22:18:52.909243 kubelet[2810]: I0805 22:18:52.909162 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4964b75e14f2aa99b370f247b330d8a-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-236\" (UID: \"d4964b75e14f2aa99b370f247b330d8a\") " pod="kube-system/kube-scheduler-ip-172-31-26-236"
Aug 5 22:18:52.909243 kubelet[2810]: I0805 22:18:52.909254 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.909527 kubelet[2810]: I0805 22:18:52.909305 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.909527 kubelet[2810]: I0805 22:18:52.909344 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.909527 kubelet[2810]: I0805 22:18:52.909370 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-ca-certs\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:18:52.909527 kubelet[2810]: I0805 22:18:52.909417 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:18:52.909527 kubelet[2810]: I0805 22:18:52.909446 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:18:52.909791 kubelet[2810]: I0805 22:18:52.909477 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.909791 kubelet[2810]: I0805 22:18:52.909512 2810 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:18:52.914169 kubelet[2810]: I0805 22:18:52.914139 2810 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236"
Aug 5 22:18:52.914530 kubelet[2810]: E0805 22:18:52.914501 2810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.236:6443/api/v1/nodes\": dial tcp 172.31.26.236:6443: connect: connection refused" node="ip-172-31-26-236"
Aug 5 22:18:53.103091 containerd[1955]: time="2024-08-05T22:18:53.102979579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-236,Uid:a34e21104e78c02a105905f8dfdc43ee,Namespace:kube-system,Attempt:0,}"
Aug 5 22:18:53.131164 containerd[1955]: time="2024-08-05T22:18:53.131107836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-236,Uid:d4964b75e14f2aa99b370f247b330d8a,Namespace:kube-system,Attempt:0,}"
Aug 5 22:18:53.131437 containerd[1955]: time="2024-08-05T22:18:53.131113760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-236,Uid:ca8512a50d7bc8c8626a926667b07f5a,Namespace:kube-system,Attempt:0,}"
Aug 5 22:18:53.213174 kubelet[2810]: E0805 22:18:53.212948 2810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": dial tcp 172.31.26.236:6443:
connect: connection refused" interval="800ms" Aug 5 22:18:53.316463 kubelet[2810]: I0805 22:18:53.316431 2810 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236" Aug 5 22:18:53.316935 kubelet[2810]: E0805 22:18:53.316909 2810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.236:6443/api/v1/nodes\": dial tcp 172.31.26.236:6443: connect: connection refused" node="ip-172-31-26-236" Aug 5 22:18:53.622577 kubelet[2810]: W0805 22:18:53.622437 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.26.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused Aug 5 22:18:53.622577 kubelet[2810]: E0805 22:18:53.622568 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.26.236:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused Aug 5 22:18:53.671662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1333492659.mount: Deactivated successfully. 
Aug 5 22:18:53.686391 containerd[1955]: time="2024-08-05T22:18:53.686338558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:18:53.687962 containerd[1955]: time="2024-08-05T22:18:53.687908941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Aug 5 22:18:53.689496 containerd[1955]: time="2024-08-05T22:18:53.689454820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:18:53.690844 containerd[1955]: time="2024-08-05T22:18:53.690805950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:18:53.692487 containerd[1955]: time="2024-08-05T22:18:53.692430261Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:18:53.694433 containerd[1955]: time="2024-08-05T22:18:53.694392037Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:18:53.696804 containerd[1955]: time="2024-08-05T22:18:53.696021445Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Aug 5 22:18:53.696928 kubelet[2810]: W0805 22:18:53.696694 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.26.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:53.696928 kubelet[2810]: E0805 22:18:53.696776 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.26.236:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:53.699936 containerd[1955]: time="2024-08-05T22:18:53.699865629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 5 22:18:53.702395 containerd[1955]: time="2024-08-05T22:18:53.700986749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 591.106486ms"
Aug 5 22:18:53.707799 containerd[1955]: time="2024-08-05T22:18:53.707682726Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.44161ms"
Aug 5 22:18:53.708549 containerd[1955]: time="2024-08-05T22:18:53.708238420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.744217ms"
Aug 5 22:18:53.935036 kubelet[2810]: W0805 22:18:53.934480 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.26.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-236&limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:53.935036 kubelet[2810]: E0805 22:18:53.934646 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.26.236:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-236&limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:54.014611 kubelet[2810]: E0805 22:18:54.014571 2810 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": dial tcp 172.31.26.236:6443: connect: connection refused" interval="1.6s"
Aug 5 22:18:54.040690 kubelet[2810]: W0805 22:18:54.038673 2810 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.26.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:54.040690 kubelet[2810]: E0805 22:18:54.040230 2810 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.26.236:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:54.104788 containerd[1955]: time="2024-08-05T22:18:54.104576736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:18:54.105984 containerd[1955]: time="2024-08-05T22:18:54.105132939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.105984 containerd[1955]: time="2024-08-05T22:18:54.105184307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:18:54.106596 containerd[1955]: time="2024-08-05T22:18:54.105214864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.120412 kubelet[2810]: I0805 22:18:54.120385 2810 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236"
Aug 5 22:18:54.122456 kubelet[2810]: E0805 22:18:54.122371 2810 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.26.236:6443/api/v1/nodes\": dial tcp 172.31.26.236:6443: connect: connection refused" node="ip-172-31-26-236"
Aug 5 22:18:54.161663 containerd[1955]: time="2024-08-05T22:18:54.161541151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:18:54.162277 containerd[1955]: time="2024-08-05T22:18:54.161888222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.165762 containerd[1955]: time="2024-08-05T22:18:54.165559661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:18:54.168562 containerd[1955]: time="2024-08-05T22:18:54.167435183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:18:54.168828 containerd[1955]: time="2024-08-05T22:18:54.167512461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.168828 containerd[1955]: time="2024-08-05T22:18:54.168147403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:18:54.168828 containerd[1955]: time="2024-08-05T22:18:54.168207637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.171332 containerd[1955]: time="2024-08-05T22:18:54.171198125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:18:54.191906 systemd[1]: Started cri-containerd-0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1.scope - libcontainer container 0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1.
Aug 5 22:18:54.242114 systemd[1]: Started cri-containerd-560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f.scope - libcontainer container 560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f.
Aug 5 22:18:54.251340 systemd[1]: Started cri-containerd-5c78524488668f8c3ef81fbd296f3b7372ae161575396f066a852cfb45f4b728.scope - libcontainer container 5c78524488668f8c3ef81fbd296f3b7372ae161575396f066a852cfb45f4b728.
Aug 5 22:18:54.368650 containerd[1955]: time="2024-08-05T22:18:54.368378269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-26-236,Uid:d4964b75e14f2aa99b370f247b330d8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f\""
Aug 5 22:18:54.391906 containerd[1955]: time="2024-08-05T22:18:54.391091891Z" level=info msg="CreateContainer within sandbox \"560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 5 22:18:54.403005 containerd[1955]: time="2024-08-05T22:18:54.402865329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-26-236,Uid:ca8512a50d7bc8c8626a926667b07f5a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1\""
Aug 5 22:18:54.411219 containerd[1955]: time="2024-08-05T22:18:54.411045694Z" level=info msg="CreateContainer within sandbox \"0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 5 22:18:54.424482 containerd[1955]: time="2024-08-05T22:18:54.423677759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-26-236,Uid:a34e21104e78c02a105905f8dfdc43ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c78524488668f8c3ef81fbd296f3b7372ae161575396f066a852cfb45f4b728\""
Aug 5 22:18:54.430734 containerd[1955]: time="2024-08-05T22:18:54.430693910Z" level=info msg="CreateContainer within sandbox \"5c78524488668f8c3ef81fbd296f3b7372ae161575396f066a852cfb45f4b728\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 5 22:18:54.459279 containerd[1955]: time="2024-08-05T22:18:54.459167977Z" level=info msg="CreateContainer within sandbox \"560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c\""
Aug 5 22:18:54.461207 containerd[1955]: time="2024-08-05T22:18:54.461162828Z" level=info msg="StartContainer for \"1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c\""
Aug 5 22:18:54.463643 containerd[1955]: time="2024-08-05T22:18:54.463097808Z" level=info msg="CreateContainer within sandbox \"0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e\""
Aug 5 22:18:54.465037 containerd[1955]: time="2024-08-05T22:18:54.465006243Z" level=info msg="StartContainer for \"8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e\""
Aug 5 22:18:54.479329 containerd[1955]: time="2024-08-05T22:18:54.479143039Z" level=info msg="CreateContainer within sandbox \"5c78524488668f8c3ef81fbd296f3b7372ae161575396f066a852cfb45f4b728\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7093b9c701d0d3d053828214ad5308b43b7fc4adfa56bd1d545e390567ecb0d5\""
Aug 5 22:18:54.480428 containerd[1955]: time="2024-08-05T22:18:54.480363350Z" level=info msg="StartContainer for \"7093b9c701d0d3d053828214ad5308b43b7fc4adfa56bd1d545e390567ecb0d5\""
Aug 5 22:18:54.524514 systemd[1]: Started cri-containerd-1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c.scope - libcontainer container 1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c.
Aug 5 22:18:54.549637 systemd[1]: Started cri-containerd-8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e.scope - libcontainer container 8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e.
Aug 5 22:18:54.585143 systemd[1]: Started cri-containerd-7093b9c701d0d3d053828214ad5308b43b7fc4adfa56bd1d545e390567ecb0d5.scope - libcontainer container 7093b9c701d0d3d053828214ad5308b43b7fc4adfa56bd1d545e390567ecb0d5.
Aug 5 22:18:54.702573 kubelet[2810]: E0805 22:18:54.702539 2810 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.26.236:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.26.236:6443: connect: connection refused
Aug 5 22:18:54.712351 containerd[1955]: time="2024-08-05T22:18:54.711921926Z" level=info msg="StartContainer for \"8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e\" returns successfully"
Aug 5 22:18:54.712351 containerd[1955]: time="2024-08-05T22:18:54.712017167Z" level=info msg="StartContainer for \"7093b9c701d0d3d053828214ad5308b43b7fc4adfa56bd1d545e390567ecb0d5\" returns successfully"
Aug 5 22:18:54.723207 containerd[1955]: time="2024-08-05T22:18:54.723161377Z" level=info msg="StartContainer for \"1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c\" returns successfully"
Aug 5 22:18:55.731108 kubelet[2810]: I0805 22:18:55.731082 2810 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236"
Aug 5 22:18:58.125196 kubelet[2810]: E0805 22:18:58.125141 2810 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-26-236\" not found" node="ip-172-31-26-236"
Aug 5 22:18:58.155923 kubelet[2810]: I0805 22:18:58.154952 2810 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-26-236"
Aug 5 22:18:58.583433 kubelet[2810]: I0805 22:18:58.583381 2810 apiserver.go:52] "Watching apiserver"
Aug 5 22:18:58.608279 kubelet[2810]: I0805 22:18:58.608235 2810 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:18:58.754996 kubelet[2810]: E0805 22:18:58.754938 2810 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-236\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:18:59.558036 update_engine[1945]: I0805 22:18:59.557983 1945 update_attempter.cc:509] Updating boot flags...
Aug 5 22:18:59.708100 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3097)
Aug 5 22:19:00.222944 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3098)
Aug 5 22:19:00.504952 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3098)
Aug 5 22:19:01.785250 systemd[1]: Reloading requested from client PID 3352 ('systemctl') (unit session-9.scope)...
Aug 5 22:19:01.785274 systemd[1]: Reloading...
Aug 5 22:19:02.075905 zram_generator::config[3390]: No configuration found.
Aug 5 22:19:02.395517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:19:02.607340 systemd[1]: Reloading finished in 821 ms.
Aug 5 22:19:02.675235 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:19:02.675980 kubelet[2810]: I0805 22:19:02.675659 2810 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:19:02.696431 systemd[1]: kubelet.service: Deactivated successfully.
Aug 5 22:19:02.697001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:19:02.701801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:19:03.097280 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:19:03.106702 (kubelet)[3447]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:19:03.234754 kubelet[3447]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:19:03.234754 kubelet[3447]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:19:03.234754 kubelet[3447]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:19:03.234754 kubelet[3447]: I0805 22:19:03.232585 3447 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:19:03.245381 kubelet[3447]: I0805 22:19:03.245331 3447 server.go:467] "Kubelet version" kubeletVersion="v1.28.7"
Aug 5 22:19:03.245744 kubelet[3447]: I0805 22:19:03.245729 3447 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:19:03.246142 kubelet[3447]: I0805 22:19:03.246122 3447 server.go:895] "Client rotation is on, will bootstrap in background"
Aug 5 22:19:03.249037 kubelet[3447]: I0805 22:19:03.249007 3447 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 5 22:19:03.251854 kubelet[3447]: I0805 22:19:03.251825 3447 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:19:03.262218 kubelet[3447]: I0805 22:19:03.262184 3447 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:19:03.262509 kubelet[3447]: I0805 22:19:03.262487 3447 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:19:03.262713 kubelet[3447]: I0805 22:19:03.262694 3447 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:19:03.262837 kubelet[3447]: I0805 22:19:03.262723 3447 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:19:03.262837 kubelet[3447]: I0805 22:19:03.262738 3447 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:19:03.262837 kubelet[3447]: I0805 22:19:03.262795 3447 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:19:03.264553 kubelet[3447]: I0805 22:19:03.263067 3447 kubelet.go:393] "Attempting to sync node with API server"
Aug 5 22:19:03.264553 kubelet[3447]: I0805 22:19:03.263090 3447 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:19:03.264553 kubelet[3447]: I0805 22:19:03.263122 3447 kubelet.go:309] "Adding apiserver pod source"
Aug 5 22:19:03.264553 kubelet[3447]: I0805 22:19:03.263142 3447 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:19:03.277823 kubelet[3447]: I0805 22:19:03.276745 3447 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1"
Aug 5 22:19:03.277823 kubelet[3447]: I0805 22:19:03.277498 3447 server.go:1232] "Started kubelet"
Aug 5 22:19:03.281271 kubelet[3447]: I0805 22:19:03.279656 3447 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:19:03.282496 kubelet[3447]: I0805 22:19:03.282408 3447 server.go:462] "Adding debug handlers to kubelet server"
Aug 5 22:19:03.287012 kubelet[3447]: I0805 22:19:03.286327 3447 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:19:03.289204 kubelet[3447]: I0805 22:19:03.288318 3447 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Aug 5 22:19:03.289204 kubelet[3447]: I0805 22:19:03.288602 3447 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:19:03.301803 kubelet[3447]: I0805 22:19:03.301012 3447 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:19:03.301803 kubelet[3447]: I0805 22:19:03.301466 3447 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Aug 5 22:19:03.301803 kubelet[3447]: I0805 22:19:03.301632 3447 reconciler_new.go:29] "Reconciler: start to sync state"
Aug 5 22:19:03.331492 kubelet[3447]: I0805 22:19:03.331415 3447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:19:03.338395 kubelet[3447]: I0805 22:19:03.338366 3447 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:19:03.341020 kubelet[3447]: I0805 22:19:03.340999 3447 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:19:03.341193 kubelet[3447]: I0805 22:19:03.341183 3447 kubelet.go:2303] "Starting kubelet main sync loop"
Aug 5 22:19:03.341336 kubelet[3447]: E0805 22:19:03.341324 3447 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:19:03.342054 kubelet[3447]: E0805 22:19:03.339388 3447 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Aug 5 22:19:03.342195 kubelet[3447]: E0805 22:19:03.342183 3447 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:19:03.410812 kubelet[3447]: I0805 22:19:03.410608 3447 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-26-236"
Aug 5 22:19:03.442052 kubelet[3447]: I0805 22:19:03.439300 3447 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-26-236"
Aug 5 22:19:03.442052 kubelet[3447]: I0805 22:19:03.440758 3447 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-26-236"
Aug 5 22:19:03.442734 kubelet[3447]: E0805 22:19:03.442381 3447 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.519579 3447 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.519607 3447 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.519651 3447 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.519977 3447 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.520006 3447 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 5 22:19:03.520190 kubelet[3447]: I0805 22:19:03.520015 3447 policy_none.go:49] "None policy: Start"
Aug 5 22:19:03.521173 kubelet[3447]: I0805 22:19:03.521139 3447 memory_manager.go:169] "Starting memorymanager" policy="None"
Aug 5 22:19:03.521367 kubelet[3447]: I0805 22:19:03.521346 3447 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:19:03.522517 kubelet[3447]: I0805 22:19:03.521797 3447 state_mem.go:75] "Updated machine memory state"
Aug 5 22:19:03.529814 kubelet[3447]: I0805 22:19:03.529056 3447 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:19:03.532460 kubelet[3447]: I0805 22:19:03.529870 3447 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:19:03.643201 kubelet[3447]: I0805 22:19:03.643153 3447 topology_manager.go:215] "Topology Admit Handler" podUID="a34e21104e78c02a105905f8dfdc43ee" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-26-236"
Aug 5 22:19:03.643337 kubelet[3447]: I0805 22:19:03.643278 3447 topology_manager.go:215] "Topology Admit Handler" podUID="ca8512a50d7bc8c8626a926667b07f5a" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.643337 kubelet[3447]: I0805 22:19:03.643330 3447 topology_manager.go:215] "Topology Admit Handler" podUID="d4964b75e14f2aa99b370f247b330d8a" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-26-236"
Aug 5 22:19:03.803705 kubelet[3447]: I0805 22:19:03.803639 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-k8s-certs\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.803870 kubelet[3447]: I0805 22:19:03.803730 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.803870 kubelet[3447]: I0805 22:19:03.803761 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-ca-certs\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:19:03.803870 kubelet[3447]: I0805 22:19:03.803790 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-k8s-certs\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:19:03.803870 kubelet[3447]: I0805 22:19:03.803822 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a34e21104e78c02a105905f8dfdc43ee-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-26-236\" (UID: \"a34e21104e78c02a105905f8dfdc43ee\") " pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:19:03.803870 kubelet[3447]: I0805 22:19:03.803850 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-ca-certs\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.804448 kubelet[3447]: I0805 22:19:03.804423 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.804710 kubelet[3447]: I0805 22:19:03.804677 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ca8512a50d7bc8c8626a926667b07f5a-kubeconfig\") pod \"kube-controller-manager-ip-172-31-26-236\" (UID: \"ca8512a50d7bc8c8626a926667b07f5a\") " pod="kube-system/kube-controller-manager-ip-172-31-26-236"
Aug 5 22:19:03.804776 kubelet[3447]: I0805 22:19:03.804738 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4964b75e14f2aa99b370f247b330d8a-kubeconfig\") pod \"kube-scheduler-ip-172-31-26-236\" (UID: \"d4964b75e14f2aa99b370f247b330d8a\") " pod="kube-system/kube-scheduler-ip-172-31-26-236"
Aug 5 22:19:04.275781 kubelet[3447]: I0805 22:19:04.275560 3447 apiserver.go:52] "Watching apiserver"
Aug 5 22:19:04.302565 kubelet[3447]: I0805 22:19:04.302483 3447 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Aug 5 22:19:04.496629 kubelet[3447]: E0805 22:19:04.494671 3447 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-26-236\" already exists" pod="kube-system/kube-apiserver-ip-172-31-26-236"
Aug 5 22:19:04.546984 kubelet[3447]: I0805 22:19:04.546857 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-26-236" podStartSLOduration=1.546802546 podCreationTimestamp="2024-08-05 22:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:04.510541637 +0000 UTC m=+1.394569512" watchObservedRunningTime="2024-08-05 22:19:04.546802546 +0000 UTC m=+1.430830412"
Aug 5 22:19:04.563394 kubelet[3447]: I0805 22:19:04.563149 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-26-236" podStartSLOduration=1.563099626 podCreationTimestamp="2024-08-05 22:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:04.547989842
+0000 UTC m=+1.432017707" watchObservedRunningTime="2024-08-05 22:19:04.563099626 +0000 UTC m=+1.447127488" Aug 5 22:19:06.154673 kubelet[3447]: I0805 22:19:06.154616 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-26-236" podStartSLOduration=3.154524219 podCreationTimestamp="2024-08-05 22:19:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:04.565036919 +0000 UTC m=+1.449064784" watchObservedRunningTime="2024-08-05 22:19:06.154524219 +0000 UTC m=+3.038552084" Aug 5 22:19:09.198165 sudo[2319]: pam_unix(sudo:session): session closed for user root Aug 5 22:19:09.221904 sshd[2316]: pam_unix(sshd:session): session closed for user core Aug 5 22:19:09.226630 systemd[1]: sshd@8-172.31.26.236:22-139.178.89.65:36706.service: Deactivated successfully. Aug 5 22:19:09.232606 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:19:09.233104 systemd[1]: session-9.scope: Consumed 5.431s CPU time, 133.6M memory peak, 0B memory swap peak. Aug 5 22:19:09.235231 systemd-logind[1944]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:19:09.236915 systemd-logind[1944]: Removed session 9. Aug 5 22:19:15.888513 kubelet[3447]: I0805 22:19:15.888466 3447 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:19:15.889785 containerd[1955]: time="2024-08-05T22:19:15.889418251Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 22:19:15.891482 kubelet[3447]: I0805 22:19:15.890347 3447 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:19:16.331391 kubelet[3447]: I0805 22:19:16.331347 3447 topology_manager.go:215] "Topology Admit Handler" podUID="db243418-a393-477a-a683-1450fad6e53a" podNamespace="kube-system" podName="kube-proxy-wstbh"
Aug 5 22:19:16.356634 systemd[1]: Created slice kubepods-besteffort-poddb243418_a393_477a_a683_1450fad6e53a.slice - libcontainer container kubepods-besteffort-poddb243418_a393_477a_a683_1450fad6e53a.slice.
Aug 5 22:19:16.506939 kubelet[3447]: I0805 22:19:16.506280 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db243418-a393-477a-a683-1450fad6e53a-lib-modules\") pod \"kube-proxy-wstbh\" (UID: \"db243418-a393-477a-a683-1450fad6e53a\") " pod="kube-system/kube-proxy-wstbh"
Aug 5 22:19:16.507210 kubelet[3447]: I0805 22:19:16.507029 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db243418-a393-477a-a683-1450fad6e53a-kube-proxy\") pod \"kube-proxy-wstbh\" (UID: \"db243418-a393-477a-a683-1450fad6e53a\") " pod="kube-system/kube-proxy-wstbh"
Aug 5 22:19:16.507210 kubelet[3447]: I0805 22:19:16.507062 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db243418-a393-477a-a683-1450fad6e53a-xtables-lock\") pod \"kube-proxy-wstbh\" (UID: \"db243418-a393-477a-a683-1450fad6e53a\") " pod="kube-system/kube-proxy-wstbh"
Aug 5 22:19:16.507210 kubelet[3447]: I0805 22:19:16.507198 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fq5zt\" (UniqueName: \"kubernetes.io/projected/db243418-a393-477a-a683-1450fad6e53a-kube-api-access-fq5zt\") pod \"kube-proxy-wstbh\" (UID: \"db243418-a393-477a-a683-1450fad6e53a\") " pod="kube-system/kube-proxy-wstbh"
Aug 5 22:19:16.675856 containerd[1955]: time="2024-08-05T22:19:16.675629341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wstbh,Uid:db243418-a393-477a-a683-1450fad6e53a,Namespace:kube-system,Attempt:0,}"
Aug 5 22:19:16.722607 containerd[1955]: time="2024-08-05T22:19:16.722109578Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:19:16.722607 containerd[1955]: time="2024-08-05T22:19:16.722369630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:16.722607 containerd[1955]: time="2024-08-05T22:19:16.722407138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:19:16.722607 containerd[1955]: time="2024-08-05T22:19:16.722432156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:16.770416 systemd[1]: Started cri-containerd-a04855fbd866fc4e374703f137ae63ddecf498177825fb93c7c455a5a59fc862.scope - libcontainer container a04855fbd866fc4e374703f137ae63ddecf498177825fb93c7c455a5a59fc862.
Aug 5 22:19:16.847347 containerd[1955]: time="2024-08-05T22:19:16.847231358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wstbh,Uid:db243418-a393-477a-a683-1450fad6e53a,Namespace:kube-system,Attempt:0,} returns sandbox id \"a04855fbd866fc4e374703f137ae63ddecf498177825fb93c7c455a5a59fc862\""
Aug 5 22:19:16.855087 containerd[1955]: time="2024-08-05T22:19:16.854776551Z" level=info msg="CreateContainer within sandbox \"a04855fbd866fc4e374703f137ae63ddecf498177825fb93c7c455a5a59fc862\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:19:16.901766 containerd[1955]: time="2024-08-05T22:19:16.901413012Z" level=info msg="CreateContainer within sandbox \"a04855fbd866fc4e374703f137ae63ddecf498177825fb93c7c455a5a59fc862\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac69dca384934cb9d619df599e699f1d512f5b6fe00921ed944201844bea42ec\""
Aug 5 22:19:16.907176 containerd[1955]: time="2024-08-05T22:19:16.904014952Z" level=info msg="StartContainer for \"ac69dca384934cb9d619df599e699f1d512f5b6fe00921ed944201844bea42ec\""
Aug 5 22:19:16.972956 kubelet[3447]: I0805 22:19:16.972506 3447 topology_manager.go:215] "Topology Admit Handler" podUID="f2279aa9-bc7b-4d7e-b5e8-46ada5274754" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-rqs52"
Aug 5 22:19:16.979099 systemd[1]: Started cri-containerd-ac69dca384934cb9d619df599e699f1d512f5b6fe00921ed944201844bea42ec.scope - libcontainer container ac69dca384934cb9d619df599e699f1d512f5b6fe00921ed944201844bea42ec.
Aug 5 22:19:16.995797 systemd[1]: Created slice kubepods-besteffort-podf2279aa9_bc7b_4d7e_b5e8_46ada5274754.slice - libcontainer container kubepods-besteffort-podf2279aa9_bc7b_4d7e_b5e8_46ada5274754.slice.
Aug 5 22:19:17.045715 containerd[1955]: time="2024-08-05T22:19:17.045614547Z" level=info msg="StartContainer for \"ac69dca384934cb9d619df599e699f1d512f5b6fe00921ed944201844bea42ec\" returns successfully"
Aug 5 22:19:17.114082 kubelet[3447]: I0805 22:19:17.113837 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f2279aa9-bc7b-4d7e-b5e8-46ada5274754-var-lib-calico\") pod \"tigera-operator-76c4974c85-rqs52\" (UID: \"f2279aa9-bc7b-4d7e-b5e8-46ada5274754\") " pod="tigera-operator/tigera-operator-76c4974c85-rqs52"
Aug 5 22:19:17.114082 kubelet[3447]: I0805 22:19:17.113924 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcx7\" (UniqueName: \"kubernetes.io/projected/f2279aa9-bc7b-4d7e-b5e8-46ada5274754-kube-api-access-kgcx7\") pod \"tigera-operator-76c4974c85-rqs52\" (UID: \"f2279aa9-bc7b-4d7e-b5e8-46ada5274754\") " pod="tigera-operator/tigera-operator-76c4974c85-rqs52"
Aug 5 22:19:17.300536 containerd[1955]: time="2024-08-05T22:19:17.300408706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rqs52,Uid:f2279aa9-bc7b-4d7e-b5e8-46ada5274754,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:19:17.336953 containerd[1955]: time="2024-08-05T22:19:17.336605816Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:19:17.336953 containerd[1955]: time="2024-08-05T22:19:17.336682171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:17.336953 containerd[1955]: time="2024-08-05T22:19:17.336714653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:19:17.337323 containerd[1955]: time="2024-08-05T22:19:17.336734707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:17.385365 systemd[1]: Started cri-containerd-0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469.scope - libcontainer container 0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469.
Aug 5 22:19:17.467609 kubelet[3447]: I0805 22:19:17.467401 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wstbh" podStartSLOduration=1.467350836 podCreationTimestamp="2024-08-05 22:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:17.466544374 +0000 UTC m=+14.350572235" watchObservedRunningTime="2024-08-05 22:19:17.467350836 +0000 UTC m=+14.351378701"
Aug 5 22:19:17.506042 containerd[1955]: time="2024-08-05T22:19:17.505989677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-rqs52,Uid:f2279aa9-bc7b-4d7e-b5e8-46ada5274754,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469\""
Aug 5 22:19:17.517005 containerd[1955]: time="2024-08-05T22:19:17.516492076Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:19:17.678491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797310123.mount: Deactivated successfully.
Aug 5 22:19:18.898539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329330353.mount: Deactivated successfully.
Aug 5 22:19:19.661061 containerd[1955]: time="2024-08-05T22:19:19.661014526Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:19.662770 containerd[1955]: time="2024-08-05T22:19:19.662537901Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068"
Aug 5 22:19:19.664035 containerd[1955]: time="2024-08-05T22:19:19.663991055Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:19.667665 containerd[1955]: time="2024-08-05T22:19:19.667252698Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:19.668550 containerd[1955]: time="2024-08-05T22:19:19.668510044Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.151477752s"
Aug 5 22:19:19.668642 containerd[1955]: time="2024-08-05T22:19:19.668556157Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:19:19.676464 containerd[1955]: time="2024-08-05T22:19:19.676285632Z" level=info msg="CreateContainer within sandbox \"0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:19:19.703293 containerd[1955]: time="2024-08-05T22:19:19.703251319Z" level=info msg="CreateContainer within sandbox \"0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031\""
Aug 5 22:19:19.706897 containerd[1955]: time="2024-08-05T22:19:19.705182949Z" level=info msg="StartContainer for \"55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031\""
Aug 5 22:19:19.754073 systemd[1]: Started cri-containerd-55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031.scope - libcontainer container 55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031.
Aug 5 22:19:19.789225 containerd[1955]: time="2024-08-05T22:19:19.789176477Z" level=info msg="StartContainer for \"55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031\" returns successfully"
Aug 5 22:19:20.469263 kubelet[3447]: I0805 22:19:20.469220 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-rqs52" podStartSLOduration=2.306947932 podCreationTimestamp="2024-08-05 22:19:16 +0000 UTC" firstStartedPulling="2024-08-05 22:19:17.507780035 +0000 UTC m=+14.391807891" lastFinishedPulling="2024-08-05 22:19:19.670005325 +0000 UTC m=+16.554033168" observedRunningTime="2024-08-05 22:19:20.469021869 +0000 UTC m=+17.353049732" watchObservedRunningTime="2024-08-05 22:19:20.469173209 +0000 UTC m=+17.353201074"
Aug 5 22:19:23.289853 kubelet[3447]: I0805 22:19:23.289461 3447 topology_manager.go:215] "Topology Admit Handler" podUID="8f230886-646f-4361-9c08-7b86fba186bc" podNamespace="calico-system" podName="calico-typha-7fb9c6487f-kjxnn"
Aug 5 22:19:23.306506 systemd[1]: Created slice kubepods-besteffort-pod8f230886_646f_4361_9c08_7b86fba186bc.slice - libcontainer container kubepods-besteffort-pod8f230886_646f_4361_9c08_7b86fba186bc.slice.
Aug 5 22:19:23.438721 kubelet[3447]: I0805 22:19:23.438683 3447 topology_manager.go:215] "Topology Admit Handler" podUID="eed88071-6299-4f1c-a87b-e1fe64b455e5" podNamespace="calico-system" podName="calico-node-xr9dd"
Aug 5 22:19:23.448901 systemd[1]: Created slice kubepods-besteffort-podeed88071_6299_4f1c_a87b_e1fe64b455e5.slice - libcontainer container kubepods-besteffort-podeed88071_6299_4f1c_a87b_e1fe64b455e5.slice.
Aug 5 22:19:23.471952 kubelet[3447]: I0805 22:19:23.471732 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nww9w\" (UniqueName: \"kubernetes.io/projected/8f230886-646f-4361-9c08-7b86fba186bc-kube-api-access-nww9w\") pod \"calico-typha-7fb9c6487f-kjxnn\" (UID: \"8f230886-646f-4361-9c08-7b86fba186bc\") " pod="calico-system/calico-typha-7fb9c6487f-kjxnn"
Aug 5 22:19:23.471952 kubelet[3447]: I0805 22:19:23.471793 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f230886-646f-4361-9c08-7b86fba186bc-tigera-ca-bundle\") pod \"calico-typha-7fb9c6487f-kjxnn\" (UID: \"8f230886-646f-4361-9c08-7b86fba186bc\") " pod="calico-system/calico-typha-7fb9c6487f-kjxnn"
Aug 5 22:19:23.471952 kubelet[3447]: I0805 22:19:23.471828 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f230886-646f-4361-9c08-7b86fba186bc-typha-certs\") pod \"calico-typha-7fb9c6487f-kjxnn\" (UID: \"8f230886-646f-4361-9c08-7b86fba186bc\") " pod="calico-system/calico-typha-7fb9c6487f-kjxnn"
Aug 5 22:19:23.528912 kubelet[3447]: I0805 22:19:23.528617 3447 topology_manager.go:215] "Topology Admit Handler" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18" podNamespace="calico-system" podName="csi-node-driver-wgdwc"
Aug 5 22:19:23.532346 kubelet[3447]: E0805 22:19:23.531651 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:23.573975 kubelet[3447]: I0805 22:19:23.572727 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eed88071-6299-4f1c-a87b-e1fe64b455e5-tigera-ca-bundle\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575275 kubelet[3447]: I0805 22:19:23.575245 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-var-run-calico\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575384 kubelet[3447]: I0805 22:19:23.575363 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4q2mj\" (UniqueName: \"kubernetes.io/projected/eed88071-6299-4f1c-a87b-e1fe64b455e5-kube-api-access-4q2mj\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575476 kubelet[3447]: I0805 22:19:23.575412 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6fc55fe-9251-4794-b8d5-cba9ade83b18-socket-dir\") pod \"csi-node-driver-wgdwc\" (UID: \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\") " pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:23.575476 kubelet[3447]: I0805 22:19:23.575464 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsr9l\" (UniqueName: \"kubernetes.io/projected/f6fc55fe-9251-4794-b8d5-cba9ade83b18-kube-api-access-xsr9l\") pod \"csi-node-driver-wgdwc\" (UID: \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\") " pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:23.575581 kubelet[3447]: I0805 22:19:23.575499 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-xtables-lock\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575581 kubelet[3447]: I0805 22:19:23.575553 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-policysync\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575671 kubelet[3447]: I0805 22:19:23.575589 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-lib-modules\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575671 kubelet[3447]: I0805 22:19:23.575635 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-cni-net-dir\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575671 kubelet[3447]: I0805 22:19:23.575671 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eed88071-6299-4f1c-a87b-e1fe64b455e5-node-certs\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575799 kubelet[3447]: I0805 22:19:23.575724 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-flexvol-driver-host\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575799 kubelet[3447]: I0805 22:19:23.575796 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f6fc55fe-9251-4794-b8d5-cba9ade83b18-varrun\") pod \"csi-node-driver-wgdwc\" (UID: \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\") " pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:23.575895 kubelet[3447]: I0805 22:19:23.575830 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-cni-log-dir\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.575946 kubelet[3447]: I0805 22:19:23.575901 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6fc55fe-9251-4794-b8d5-cba9ade83b18-kubelet-dir\") pod \"csi-node-driver-wgdwc\" (UID: \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\") " pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:23.575946 kubelet[3447]: I0805 22:19:23.575933 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6fc55fe-9251-4794-b8d5-cba9ade83b18-registration-dir\") pod \"csi-node-driver-wgdwc\" (UID: \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\") " pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:23.578997 kubelet[3447]: I0805 22:19:23.578970 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-var-lib-calico\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.579103 kubelet[3447]: I0805 22:19:23.579017 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eed88071-6299-4f1c-a87b-e1fe64b455e5-cni-bin-dir\") pod \"calico-node-xr9dd\" (UID: \"eed88071-6299-4f1c-a87b-e1fe64b455e5\") " pod="calico-system/calico-node-xr9dd"
Aug 5 22:19:23.695222 kubelet[3447]: E0805 22:19:23.695192 3447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:19:23.695222 kubelet[3447]: W0805 22:19:23.695218 3447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:19:23.695409 kubelet[3447]: E0805 22:19:23.695269 3447 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:19:23.695906 kubelet[3447]: E0805 22:19:23.695557 3447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:19:23.695906 kubelet[3447]: W0805 22:19:23.695570 3447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:19:23.695906 kubelet[3447]: E0805 22:19:23.695588 3447 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:19:23.711140 kubelet[3447]: E0805 22:19:23.709160 3447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:19:23.711140 kubelet[3447]: W0805 22:19:23.709181 3447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:19:23.711140 kubelet[3447]: E0805 22:19:23.709223 3447 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:19:23.722150 kubelet[3447]: E0805 22:19:23.722121 3447 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:19:23.722150 kubelet[3447]: W0805 22:19:23.722149 3447 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:19:23.722321 kubelet[3447]: E0805 22:19:23.722179 3447 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:19:23.775991 containerd[1955]: time="2024-08-05T22:19:23.775942115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr9dd,Uid:eed88071-6299-4f1c-a87b-e1fe64b455e5,Namespace:calico-system,Attempt:0,}"
Aug 5 22:19:23.890300 containerd[1955]: time="2024-08-05T22:19:23.888749917Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:19:23.890300 containerd[1955]: time="2024-08-05T22:19:23.888848736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:23.890300 containerd[1955]: time="2024-08-05T22:19:23.888867950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:19:23.890300 containerd[1955]: time="2024-08-05T22:19:23.888899461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:23.926664 systemd[1]: Started cri-containerd-829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23.scope - libcontainer container 829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23.
Aug 5 22:19:23.934924 containerd[1955]: time="2024-08-05T22:19:23.934220907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb9c6487f-kjxnn,Uid:8f230886-646f-4361-9c08-7b86fba186bc,Namespace:calico-system,Attempt:0,}"
Aug 5 22:19:23.999592 containerd[1955]: time="2024-08-05T22:19:23.999543688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xr9dd,Uid:eed88071-6299-4f1c-a87b-e1fe64b455e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\""
Aug 5 22:19:24.005017 containerd[1955]: time="2024-08-05T22:19:24.004592107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\""
Aug 5 22:19:24.035045 containerd[1955]: time="2024-08-05T22:19:24.032884709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:19:24.035045 containerd[1955]: time="2024-08-05T22:19:24.034668809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:24.035045 containerd[1955]: time="2024-08-05T22:19:24.034695272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:19:24.035045 containerd[1955]: time="2024-08-05T22:19:24.034712037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:24.075140 systemd[1]: Started cri-containerd-4e73ee818c02aaea96ee3bce80f6f6cfb5165f585edf184fa40f06eaef56f10f.scope - libcontainer container 4e73ee818c02aaea96ee3bce80f6f6cfb5165f585edf184fa40f06eaef56f10f.
Aug 5 22:19:24.201344 containerd[1955]: time="2024-08-05T22:19:24.201292761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7fb9c6487f-kjxnn,Uid:8f230886-646f-4361-9c08-7b86fba186bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e73ee818c02aaea96ee3bce80f6f6cfb5165f585edf184fa40f06eaef56f10f\""
Aug 5 22:19:25.343933 kubelet[3447]: E0805 22:19:25.342489 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:25.398391 containerd[1955]: time="2024-08-05T22:19:25.398346194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:25.399419 containerd[1955]: time="2024-08-05T22:19:25.399368736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568"
Aug 5 22:19:25.403763 containerd[1955]: time="2024-08-05T22:19:25.401349120Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:25.404601 containerd[1955]: time="2024-08-05T22:19:25.404542724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:25.408905 containerd[1955]: time="2024-08-05T22:19:25.408305242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.403657063s"
Aug 5 22:19:25.408905 containerd[1955]: time="2024-08-05T22:19:25.408357244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\""
Aug 5 22:19:25.413303 containerd[1955]: time="2024-08-05T22:19:25.413266925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\""
Aug 5 22:19:25.415861 containerd[1955]: time="2024-08-05T22:19:25.415813292Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 5 22:19:25.445904 containerd[1955]: time="2024-08-05T22:19:25.444600052Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c\""
Aug 5 22:19:25.447641 containerd[1955]: time="2024-08-05T22:19:25.447601083Z" level=info msg="StartContainer for \"33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c\""
Aug 5 22:19:25.514235 systemd[1]: Started cri-containerd-33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c.scope - libcontainer container 33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c.
Aug 5 22:19:25.592846 containerd[1955]: time="2024-08-05T22:19:25.592802409Z" level=info msg="StartContainer for \"33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c\" returns successfully"
Aug 5 22:19:25.595325 systemd[1]: cri-containerd-33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c.scope: Deactivated successfully.
Aug 5 22:19:25.635485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c-rootfs.mount: Deactivated successfully.
Aug 5 22:19:25.718725 containerd[1955]: time="2024-08-05T22:19:25.718648960Z" level=info msg="shim disconnected" id=33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c namespace=k8s.io
Aug 5 22:19:25.719004 containerd[1955]: time="2024-08-05T22:19:25.718797353Z" level=warning msg="cleaning up after shim disconnected" id=33b0afaa73d30531c78f593873bdf8b97651339fedcc6778cf209af30a40861c namespace=k8s.io
Aug 5 22:19:25.719004 containerd[1955]: time="2024-08-05T22:19:25.718815553Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:19:27.342697 kubelet[3447]: E0805 22:19:27.342374 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:28.620301 containerd[1955]: time="2024-08-05T22:19:28.620253014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:28.621847 containerd[1955]: time="2024-08-05T22:19:28.621791384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030"
Aug 5 22:19:28.623104 containerd[1955]: time="2024-08-05T22:19:28.623074172Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:28.626641 containerd[1955]: time="2024-08-05T22:19:28.626584364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:28.627948 containerd[1955]: time="2024-08-05T22:19:28.627356544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.213901648s"
Aug 5 22:19:28.627948 containerd[1955]: time="2024-08-05T22:19:28.627395135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\""
Aug 5 22:19:28.628696 containerd[1955]: time="2024-08-05T22:19:28.628668007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\""
Aug 5 22:19:28.668337 containerd[1955]: time="2024-08-05T22:19:28.668266008Z" level=info msg="CreateContainer within sandbox \"4e73ee818c02aaea96ee3bce80f6f6cfb5165f585edf184fa40f06eaef56f10f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 5 22:19:28.698388 containerd[1955]: time="2024-08-05T22:19:28.698340283Z" level=info msg="CreateContainer within sandbox \"4e73ee818c02aaea96ee3bce80f6f6cfb5165f585edf184fa40f06eaef56f10f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7acbde735479b043e0dc081a63b5e34c211997002b9626f3dd1bdecb9a198a42\""
Aug 5 22:19:28.698915 containerd[1955]: time="2024-08-05T22:19:28.698869511Z" level=info msg="StartContainer for \"7acbde735479b043e0dc081a63b5e34c211997002b9626f3dd1bdecb9a198a42\""
Aug 5 22:19:28.807255 systemd[1]: Started cri-containerd-7acbde735479b043e0dc081a63b5e34c211997002b9626f3dd1bdecb9a198a42.scope - libcontainer container 7acbde735479b043e0dc081a63b5e34c211997002b9626f3dd1bdecb9a198a42.
Aug 5 22:19:28.942817 containerd[1955]: time="2024-08-05T22:19:28.942520261Z" level=info msg="StartContainer for \"7acbde735479b043e0dc081a63b5e34c211997002b9626f3dd1bdecb9a198a42\" returns successfully"
Aug 5 22:19:29.342072 kubelet[3447]: E0805 22:19:29.341849 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:30.515580 kubelet[3447]: I0805 22:19:30.515536 3447 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 5 22:19:30.663355 kubelet[3447]: I0805 22:19:30.662574 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7fb9c6487f-kjxnn" podStartSLOduration=3.238681342 podCreationTimestamp="2024-08-05 22:19:23 +0000 UTC" firstStartedPulling="2024-08-05 22:19:24.203925411 +0000 UTC m=+21.087953267" lastFinishedPulling="2024-08-05 22:19:28.627753706 +0000 UTC m=+25.511781562" observedRunningTime="2024-08-05 22:19:29.53132087 +0000 UTC m=+26.415348754" watchObservedRunningTime="2024-08-05 22:19:30.662509637 +0000 UTC m=+27.546537555"
Aug 5 22:19:31.342020 kubelet[3447]: E0805 22:19:31.341790 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:33.342782 kubelet[3447]: E0805 22:19:33.342379 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:34.787315 containerd[1955]: time="2024-08-05T22:19:34.787269035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:34.790232 containerd[1955]: time="2024-08-05T22:19:34.790159697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850"
Aug 5 22:19:34.793034 containerd[1955]: time="2024-08-05T22:19:34.792981804Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:34.797410 containerd[1955]: time="2024-08-05T22:19:34.797360009Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:34.799159 containerd[1955]: time="2024-08-05T22:19:34.798327874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.169616242s"
Aug 5 22:19:34.799159 containerd[1955]: time="2024-08-05T22:19:34.798371820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\""
Aug 5 22:19:34.804517 containerd[1955]: time="2024-08-05T22:19:34.804201104Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 5 22:19:34.875385 containerd[1955]: time="2024-08-05T22:19:34.875329713Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc\""
Aug 5 22:19:34.878139 containerd[1955]: time="2024-08-05T22:19:34.876349613Z" level=info msg="StartContainer for \"2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc\""
Aug 5 22:19:34.976266 systemd[1]: run-containerd-runc-k8s.io-2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc-runc.fYvwq2.mount: Deactivated successfully.
Aug 5 22:19:34.986238 systemd[1]: Started cri-containerd-2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc.scope - libcontainer container 2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc.
Aug 5 22:19:35.050194 containerd[1955]: time="2024-08-05T22:19:35.047162197Z" level=info msg="StartContainer for \"2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc\" returns successfully"
Aug 5 22:19:35.345826 kubelet[3447]: E0805 22:19:35.345035 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:37.251483 systemd[1]: cri-containerd-2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc.scope: Deactivated successfully.
Aug 5 22:19:37.313438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc-rootfs.mount: Deactivated successfully.
Aug 5 22:19:37.329792 containerd[1955]: time="2024-08-05T22:19:37.329268703Z" level=info msg="shim disconnected" id=2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc namespace=k8s.io
Aug 5 22:19:37.329792 containerd[1955]: time="2024-08-05T22:19:37.329340795Z" level=warning msg="cleaning up after shim disconnected" id=2d0bfd9b181c4c5ed84e8459460dcd427602764ddbe0bc0ca83b3f7d02dff4bc namespace=k8s.io
Aug 5 22:19:37.329792 containerd[1955]: time="2024-08-05T22:19:37.329412616Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:19:37.343761 kubelet[3447]: E0805 22:19:37.343025 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:37.353572 kubelet[3447]: I0805 22:19:37.352241 3447 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Aug 5 22:19:37.375010 containerd[1955]: time="2024-08-05T22:19:37.374826680Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:19:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 5 22:19:37.408735 kubelet[3447]: I0805 22:19:37.408676 3447 topology_manager.go:215] "Topology Admit Handler" podUID="b0a89043-6691-488b-a80a-a04aac1af3dc" podNamespace="kube-system" podName="coredns-5dd5756b68-7g7mq"
Aug 5 22:19:37.419695 kubelet[3447]: I0805 22:19:37.418413 3447 topology_manager.go:215] "Topology Admit Handler" podUID="2c94ec30-391b-47f9-8ba5-0553eac916fc" podNamespace="calico-system" podName="calico-kube-controllers-79c7b85bf-dzc2m"
Aug 5 22:19:37.423400 systemd[1]: Created slice kubepods-burstable-podb0a89043_6691_488b_a80a_a04aac1af3dc.slice - libcontainer container kubepods-burstable-podb0a89043_6691_488b_a80a_a04aac1af3dc.slice.
Aug 5 22:19:37.436651 kubelet[3447]: I0805 22:19:37.434019 3447 topology_manager.go:215] "Topology Admit Handler" podUID="f5698d75-b00e-4848-8925-21116137974b" podNamespace="kube-system" podName="coredns-5dd5756b68-4czbl"
Aug 5 22:19:37.454860 systemd[1]: Created slice kubepods-burstable-podf5698d75_b00e_4848_8925_21116137974b.slice - libcontainer container kubepods-burstable-podf5698d75_b00e_4848_8925_21116137974b.slice.
Aug 5 22:19:37.465151 systemd[1]: Created slice kubepods-besteffort-pod2c94ec30_391b_47f9_8ba5_0553eac916fc.slice - libcontainer container kubepods-besteffort-pod2c94ec30_391b_47f9_8ba5_0553eac916fc.slice.
Aug 5 22:19:37.496397 kubelet[3447]: I0805 22:19:37.496270 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg96n\" (UniqueName: \"kubernetes.io/projected/b0a89043-6691-488b-a80a-a04aac1af3dc-kube-api-access-xg96n\") pod \"coredns-5dd5756b68-7g7mq\" (UID: \"b0a89043-6691-488b-a80a-a04aac1af3dc\") " pod="kube-system/coredns-5dd5756b68-7g7mq"
Aug 5 22:19:37.500293 kubelet[3447]: I0805 22:19:37.500257 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5698d75-b00e-4848-8925-21116137974b-config-volume\") pod \"coredns-5dd5756b68-4czbl\" (UID: \"f5698d75-b00e-4848-8925-21116137974b\") " pod="kube-system/coredns-5dd5756b68-4czbl"
Aug 5 22:19:37.500642 kubelet[3447]: I0805 22:19:37.500309 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn9z6\" (UniqueName: \"kubernetes.io/projected/f5698d75-b00e-4848-8925-21116137974b-kube-api-access-vn9z6\") pod \"coredns-5dd5756b68-4czbl\" (UID: \"f5698d75-b00e-4848-8925-21116137974b\") " pod="kube-system/coredns-5dd5756b68-4czbl"
Aug 5 22:19:37.500642 kubelet[3447]: I0805 22:19:37.500382 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b0a89043-6691-488b-a80a-a04aac1af3dc-config-volume\") pod \"coredns-5dd5756b68-7g7mq\" (UID: \"b0a89043-6691-488b-a80a-a04aac1af3dc\") " pod="kube-system/coredns-5dd5756b68-7g7mq"
Aug 5 22:19:37.500642 kubelet[3447]: I0805 22:19:37.500418 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9c9r\" (UniqueName: \"kubernetes.io/projected/2c94ec30-391b-47f9-8ba5-0553eac916fc-kube-api-access-g9c9r\") pod \"calico-kube-controllers-79c7b85bf-dzc2m\" (UID: \"2c94ec30-391b-47f9-8ba5-0553eac916fc\") " pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m"
Aug 5 22:19:37.500642 kubelet[3447]: I0805 22:19:37.500451 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c94ec30-391b-47f9-8ba5-0553eac916fc-tigera-ca-bundle\") pod \"calico-kube-controllers-79c7b85bf-dzc2m\" (UID: \"2c94ec30-391b-47f9-8ba5-0553eac916fc\") " pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m"
Aug 5 22:19:37.544431 containerd[1955]: time="2024-08-05T22:19:37.544251954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\""
Aug 5 22:19:37.747121 containerd[1955]: time="2024-08-05T22:19:37.747079047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g7mq,Uid:b0a89043-6691-488b-a80a-a04aac1af3dc,Namespace:kube-system,Attempt:0,}"
Aug 5 22:19:37.761691 containerd[1955]: time="2024-08-05T22:19:37.761644674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4czbl,Uid:f5698d75-b00e-4848-8925-21116137974b,Namespace:kube-system,Attempt:0,}"
Aug 5 22:19:37.806990 containerd[1955]: time="2024-08-05T22:19:37.806652962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c7b85bf-dzc2m,Uid:2c94ec30-391b-47f9-8ba5-0553eac916fc,Namespace:calico-system,Attempt:0,}"
Aug 5 22:19:38.113601 containerd[1955]: time="2024-08-05T22:19:38.112963482Z" level=error msg="Failed to destroy network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.117003 containerd[1955]: time="2024-08-05T22:19:38.116034657Z" level=error msg="Failed to destroy network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.125569 containerd[1955]: time="2024-08-05T22:19:38.123959184Z" level=error msg="encountered an error cleaning up failed sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.125569 containerd[1955]: time="2024-08-05T22:19:38.124094360Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c7b85bf-dzc2m,Uid:2c94ec30-391b-47f9-8ba5-0553eac916fc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.125569 containerd[1955]: time="2024-08-05T22:19:38.124292401Z" level=error msg="encountered an error cleaning up failed sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.125569 containerd[1955]: time="2024-08-05T22:19:38.124479161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g7mq,Uid:b0a89043-6691-488b-a80a-a04aac1af3dc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.126545 kubelet[3447]: E0805 22:19:38.124824 3447 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.126545 kubelet[3447]: E0805 22:19:38.124901 3447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7g7mq"
Aug 5 22:19:38.126545 kubelet[3447]: E0805 22:19:38.124931 3447 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-7g7mq"
Aug 5 22:19:38.126863 kubelet[3447]: E0805 22:19:38.125039 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-7g7mq_kube-system(b0a89043-6691-488b-a80a-a04aac1af3dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-7g7mq_kube-system(b0a89043-6691-488b-a80a-a04aac1af3dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7g7mq" podUID="b0a89043-6691-488b-a80a-a04aac1af3dc"
Aug 5 22:19:38.126863 kubelet[3447]: E0805 22:19:38.125268 3447 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.126863 kubelet[3447]: E0805 22:19:38.125309 3447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m"
Aug 5 22:19:38.127106 kubelet[3447]: E0805 22:19:38.125335 3447 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m"
Aug 5 22:19:38.127106 kubelet[3447]: E0805 22:19:38.125384 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-79c7b85bf-dzc2m_calico-system(2c94ec30-391b-47f9-8ba5-0553eac916fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-79c7b85bf-dzc2m_calico-system(2c94ec30-391b-47f9-8ba5-0553eac916fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m" podUID="2c94ec30-391b-47f9-8ba5-0553eac916fc"
Aug 5 22:19:38.137687 containerd[1955]: time="2024-08-05T22:19:38.137633724Z" level=error msg="Failed to destroy network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.138058 containerd[1955]: time="2024-08-05T22:19:38.138016193Z" level=error msg="encountered an error cleaning up failed sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.138150 containerd[1955]: time="2024-08-05T22:19:38.138076697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4czbl,Uid:f5698d75-b00e-4848-8925-21116137974b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.138416 kubelet[3447]: E0805 22:19:38.138385 3447 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.138502 kubelet[3447]: E0805 22:19:38.138459 3447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4czbl"
Aug 5 22:19:38.138502 kubelet[3447]: E0805 22:19:38.138488 3447 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-4czbl"
Aug 5 22:19:38.139495 kubelet[3447]: E0805 22:19:38.138565 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-4czbl_kube-system(f5698d75-b00e-4848-8925-21116137974b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-4czbl_kube-system(f5698d75-b00e-4848-8925-21116137974b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4czbl" podUID="f5698d75-b00e-4848-8925-21116137974b"
Aug 5 22:19:38.304178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71-shm.mount: Deactivated successfully.
Aug 5 22:19:38.546469 kubelet[3447]: I0805 22:19:38.546435 3447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:19:38.551815 kubelet[3447]: I0805 22:19:38.550005 3447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565"
Aug 5 22:19:38.551986 containerd[1955]: time="2024-08-05T22:19:38.551948981Z" level=info msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\""
Aug 5 22:19:38.552342 containerd[1955]: time="2024-08-05T22:19:38.552251818Z" level=info msg="Ensure that sandbox 694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565 in task-service has been cleanup successfully"
Aug 5 22:19:38.557794 containerd[1955]: time="2024-08-05T22:19:38.557253196Z" level=info msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\""
Aug 5 22:19:38.558220 containerd[1955]: time="2024-08-05T22:19:38.558008409Z" level=info msg="Ensure that sandbox 81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71 in task-service has been cleanup successfully"
Aug 5 22:19:38.567310 kubelet[3447]: I0805 22:19:38.567033 3447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:38.614002 containerd[1955]: time="2024-08-05T22:19:38.613934835Z" level=info msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\""
Aug 5 22:19:38.615746 containerd[1955]: time="2024-08-05T22:19:38.615348352Z" level=info msg="Ensure that sandbox 0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0 in task-service has been cleanup successfully"
Aug 5 22:19:38.716315 containerd[1955]: time="2024-08-05T22:19:38.716055497Z" level=error msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" failed" error="failed to destroy network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.717206 kubelet[3447]: E0805 22:19:38.717104 3447 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565"
Aug 5 22:19:38.717206 kubelet[3447]: E0805 22:19:38.717189 3447 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565"}
Aug 5 22:19:38.717543 kubelet[3447]: E0805 22:19:38.717234 3447 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2c94ec30-391b-47f9-8ba5-0553eac916fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:19:38.717543 kubelet[3447]: E0805 22:19:38.717278 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2c94ec30-391b-47f9-8ba5-0553eac916fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m" podUID="2c94ec30-391b-47f9-8ba5-0553eac916fc"
Aug 5 22:19:38.736209 containerd[1955]: time="2024-08-05T22:19:38.736126766Z" level=error msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" failed" error="failed to destroy network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.737559 kubelet[3447]: E0805 22:19:38.736622 3447 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:19:38.737559 kubelet[3447]: E0805 22:19:38.736700 3447 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"}
Aug 5 22:19:38.737559 kubelet[3447]: E0805 22:19:38.736831 3447 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0a89043-6691-488b-a80a-a04aac1af3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:19:38.737559 kubelet[3447]: E0805 22:19:38.736869 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0a89043-6691-488b-a80a-a04aac1af3dc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-7g7mq" podUID="b0a89043-6691-488b-a80a-a04aac1af3dc"
Aug 5 22:19:38.739922 containerd[1955]: time="2024-08-05T22:19:38.739830064Z" level=error msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" failed" error="failed to destroy network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:38.740306 kubelet[3447]: E0805 22:19:38.740283 3447 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:38.740418 kubelet[3447]: E0805 22:19:38.740329 3447 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"}
Aug 5 22:19:38.740471 kubelet[3447]: E0805 22:19:38.740416 3447 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f5698d75-b00e-4848-8925-21116137974b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:19:38.740471 kubelet[3447]: E0805 22:19:38.740461 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f5698d75-b00e-4848-8925-21116137974b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-4czbl" podUID="f5698d75-b00e-4848-8925-21116137974b"
Aug 5 22:19:39.354017 systemd[1]: Created slice kubepods-besteffort-podf6fc55fe_9251_4794_b8d5_cba9ade83b18.slice - libcontainer container kubepods-besteffort-podf6fc55fe_9251_4794_b8d5_cba9ade83b18.slice.
Aug 5 22:19:39.359572 containerd[1955]: time="2024-08-05T22:19:39.359097000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgdwc,Uid:f6fc55fe-9251-4794-b8d5-cba9ade83b18,Namespace:calico-system,Attempt:0,}"
Aug 5 22:19:39.502425 containerd[1955]: time="2024-08-05T22:19:39.502370159Z" level=error msg="Failed to destroy network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:39.505774 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a-shm.mount: Deactivated successfully.
Aug 5 22:19:39.509960 containerd[1955]: time="2024-08-05T22:19:39.508975906Z" level=error msg="encountered an error cleaning up failed sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:39.509960 containerd[1955]: time="2024-08-05T22:19:39.509064252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgdwc,Uid:f6fc55fe-9251-4794-b8d5-cba9ade83b18,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:39.510333 kubelet[3447]: E0805 22:19:39.509689 3447 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:39.510333 kubelet[3447]: E0805 22:19:39.509754 3447 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:39.510333 kubelet[3447]: E0805 22:19:39.509787 3447 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wgdwc"
Aug 5 22:19:39.512895 kubelet[3447]: E0805 22:19:39.512834 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wgdwc_calico-system(f6fc55fe-9251-4794-b8d5-cba9ade83b18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wgdwc_calico-system(f6fc55fe-9251-4794-b8d5-cba9ade83b18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:39.604896 kubelet[3447]: I0805 22:19:39.604389 3447 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:19:39.607885 containerd[1955]: time="2024-08-05T22:19:39.607713326Z" level=info msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\""
Aug 5 22:19:39.608650 containerd[1955]: time="2024-08-05T22:19:39.608277581Z" level=info msg="Ensure that sandbox 15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a in task-service has been cleanup successfully"
Aug 5 22:19:39.695261 containerd[1955]: time="2024-08-05T22:19:39.695191596Z" level=error msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" failed" error="failed to destroy network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 5 22:19:39.695530 kubelet[3447]: E0805 22:19:39.695502 3447 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:19:39.695674 kubelet[3447]: E0805 22:19:39.695556 3447 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"}
Aug 5 22:19:39.695674 kubelet[3447]: E0805 22:19:39.695602 3447 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Aug 5 22:19:39.695674 kubelet[3447]: E0805 22:19:39.695642 3447 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6fc55fe-9251-4794-b8d5-cba9ade83b18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wgdwc" podUID="f6fc55fe-9251-4794-b8d5-cba9ade83b18"
Aug 5 22:19:45.977542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663890460.mount: Deactivated successfully.
Aug 5 22:19:46.176927 containerd[1955]: time="2024-08-05T22:19:46.174802427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750"
Aug 5 22:19:46.232017 containerd[1955]: time="2024-08-05T22:19:46.228592890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.678168673s"
Aug 5 22:19:46.232017 containerd[1955]: time="2024-08-05T22:19:46.228683141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\""
Aug 5 22:19:46.235494 containerd[1955]: time="2024-08-05T22:19:46.213075589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:46.238497 containerd[1955]: time="2024-08-05T22:19:46.237421611Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:46.238631 containerd[1955]: time="2024-08-05T22:19:46.238585181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:19:46.296608 containerd[1955]: time="2024-08-05T22:19:46.296365232Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Aug 5 22:19:46.437993 containerd[1955]: time="2024-08-05T22:19:46.437943817Z" level=info msg="CreateContainer within sandbox \"829c40024daf36f3087d4f60b2b037901a621a86d67aaaf0926b0fca17bb0f23\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94\""
Aug 5 22:19:46.451247 containerd[1955]: time="2024-08-05T22:19:46.451200118Z" level=info msg="StartContainer for \"571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94\""
Aug 5 22:19:46.719240 systemd[1]: Started cri-containerd-571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94.scope - libcontainer container 571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94.
Aug 5 22:19:46.932295 containerd[1955]: time="2024-08-05T22:19:46.932241129Z" level=info msg="StartContainer for \"571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94\" returns successfully"
Aug 5 22:19:47.117194 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Aug 5 22:19:47.118266 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved.
Aug 5 22:19:48.332266 systemd[1]: Started sshd@9-172.31.26.236:22-139.178.89.65:36138.service - OpenSSH per-connection server daemon (139.178.89.65:36138).
Aug 5 22:19:48.543279 sshd[4405]: Accepted publickey for core from 139.178.89.65 port 36138 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:19:48.545200 sshd[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:19:48.558927 systemd-logind[1944]: New session 10 of user core.
Aug 5 22:19:48.565164 systemd[1]: Started session-10.scope - Session 10 of User core.
Aug 5 22:19:48.864282 systemd[1]: run-containerd-runc-k8s.io-571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94-runc.oVTxfh.mount: Deactivated successfully.
Aug 5 22:19:48.894795 sshd[4405]: pam_unix(sshd:session): session closed for user core
Aug 5 22:19:48.902685 systemd[1]: sshd@9-172.31.26.236:22-139.178.89.65:36138.service: Deactivated successfully.
Aug 5 22:19:48.907691 systemd[1]: session-10.scope: Deactivated successfully.
Aug 5 22:19:48.916086 systemd-logind[1944]: Session 10 logged out. Waiting for processes to exit.
Aug 5 22:19:48.919461 systemd-logind[1944]: Removed session 10.
Aug 5 22:19:49.681606 (udev-worker)[4359]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:19:49.688141 systemd-networkd[1802]: vxlan.calico: Link UP
Aug 5 22:19:49.688148 systemd-networkd[1802]: vxlan.calico: Gained carrier
Aug 5 22:19:49.736081 (udev-worker)[4355]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:19:50.342996 containerd[1955]: time="2024-08-05T22:19:50.342717220Z" level=info msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\""
Aug 5 22:19:50.499654 kubelet[3447]: I0805 22:19:50.499601 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xr9dd" podStartSLOduration=5.202379617 podCreationTimestamp="2024-08-05 22:19:23 +0000 UTC" firstStartedPulling="2024-08-05 22:19:24.004295126 +0000 UTC m=+20.888322978" lastFinishedPulling="2024-08-05 22:19:46.236036024 +0000 UTC m=+43.120063879" observedRunningTime="2024-08-05 22:19:47.807834169 +0000 UTC m=+44.691862034" watchObservedRunningTime="2024-08-05 22:19:50.434120518 +0000 UTC m=+47.318148384"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.426 [INFO][4637] k8s.go 608: Cleaning up netns ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.427 [INFO][4637] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" iface="eth0" netns="/var/run/netns/cni-850ca227-496d-ff29-ace7-5f3092a404cb"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.427 [INFO][4637] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" iface="eth0" netns="/var/run/netns/cni-850ca227-496d-ff29-ace7-5f3092a404cb"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.433 [INFO][4637] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" iface="eth0" netns="/var/run/netns/cni-850ca227-496d-ff29-ace7-5f3092a404cb"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.433 [INFO][4637] k8s.go 615: Releasing IP address(es) ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.433 [INFO][4637] utils.go 188: Calico CNI releasing IP address ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.658 [INFO][4643] ipam_plugin.go 411: Releasing address using handleID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.660 [INFO][4643] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.661 [INFO][4643] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.685 [WARNING][4643] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.685 [INFO][4643] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.693 [INFO][4643] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:19:50.699037 containerd[1955]: 2024-08-05 22:19:50.694 [INFO][4637] k8s.go 621: Teardown processing complete. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:19:50.699037 containerd[1955]: time="2024-08-05T22:19:50.698653786Z" level=info msg="TearDown network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" successfully"
Aug 5 22:19:50.699037 containerd[1955]: time="2024-08-05T22:19:50.698701886Z" level=info msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" returns successfully"
Aug 5 22:19:50.703529 containerd[1955]: time="2024-08-05T22:19:50.703373855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4czbl,Uid:f5698d75-b00e-4848-8925-21116137974b,Namespace:kube-system,Attempt:1,}"
Aug 5 22:19:50.708203 systemd[1]: run-netns-cni\x2d850ca227\x2d496d\x2dff29\x2dace7\x2d5f3092a404cb.mount: Deactivated successfully.
Aug 5 22:19:51.124860 systemd-networkd[1802]: cali46a2ba1069b: Link UP
Aug 5 22:19:51.126101 systemd-networkd[1802]: cali46a2ba1069b: Gained carrier
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:50.999 [INFO][4655] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0 coredns-5dd5756b68- kube-system f5698d75-b00e-4848-8925-21116137974b 747 0 2024-08-05 22:19:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-236 coredns-5dd5756b68-4czbl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali46a2ba1069b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.000 [INFO][4655] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.059 [INFO][4662] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" HandleID="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.072 [INFO][4662] ipam_plugin.go 264: Auto assigning IP ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" HandleID="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265a20), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-236", "pod":"coredns-5dd5756b68-4czbl", "timestamp":"2024-08-05 22:19:51.05990635 +0000 UTC"}, Hostname:"ip-172-31-26-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.072 [INFO][4662] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.072 [INFO][4662] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.072 [INFO][4662] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-236'
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.074 [INFO][4662] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.084 [INFO][4662] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.091 [INFO][4662] ipam.go 489: Trying affinity for 192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.093 [INFO][4662] ipam.go 155: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.098 [INFO][4662] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.098 [INFO][4662] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.100 [INFO][4662] ipam.go 1685: Creating new handle: k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.105 [INFO][4662] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.112 [INFO][4662] ipam.go 1216: Successfully claimed IPs: [192.168.16.193/26] block=192.168.16.192/26 handle="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.112 [INFO][4662] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.193/26] handle="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" host="ip-172-31-26-236"
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.113 [INFO][4662] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:19:51.167031 containerd[1955]: 2024-08-05 22:19:51.113 [INFO][4662] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.16.193/26] IPv6=[] ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" HandleID="k8s-pod-network.c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.118 [INFO][4655] k8s.go 386: Populated endpoint ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5698d75-b00e-4848-8925-21116137974b", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"", Pod:"coredns-5dd5756b68-4czbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46a2ba1069b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.118 [INFO][4655] k8s.go 387: Calico CNI using IPs: [192.168.16.193/32] ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.118 [INFO][4655] dataplane_linux.go 68: Setting the host side veth name to cali46a2ba1069b ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.141 [INFO][4655] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.142 [INFO][4655] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5698d75-b00e-4848-8925-21116137974b", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e", Pod:"coredns-5dd5756b68-4czbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46a2ba1069b", MAC:"66:71:8c:3e:91:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:19:51.170687 containerd[1955]: 2024-08-05 22:19:51.158 [INFO][4655] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e" Namespace="kube-system" Pod="coredns-5dd5756b68-4czbl" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:19:51.221407 containerd[1955]: time="2024-08-05T22:19:51.221099321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:19:51.222210 containerd[1955]: time="2024-08-05T22:19:51.221254542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:51.222210 containerd[1955]: time="2024-08-05T22:19:51.221284586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:19:51.222210 containerd[1955]: time="2024-08-05T22:19:51.221312798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:19:51.273115 systemd[1]: Started cri-containerd-c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e.scope - libcontainer container c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e.
Aug 5 22:19:51.335081 containerd[1955]: time="2024-08-05T22:19:51.335015953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-4czbl,Uid:f5698d75-b00e-4848-8925-21116137974b,Namespace:kube-system,Attempt:1,} returns sandbox id \"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e\"" Aug 5 22:19:51.346363 containerd[1955]: time="2024-08-05T22:19:51.345965888Z" level=info msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\"" Aug 5 22:19:51.349652 containerd[1955]: time="2024-08-05T22:19:51.349612310Z" level=info msg="CreateContainer within sandbox \"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:19:51.420263 containerd[1955]: time="2024-08-05T22:19:51.420111469Z" level=info msg="CreateContainer within sandbox \"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f43df8bd6a5f04055af689ef84497d424df52f630c128d59cc977d96c5118ec9\"" Aug 5 22:19:51.423009 containerd[1955]: time="2024-08-05T22:19:51.422964580Z" level=info msg="StartContainer for \"f43df8bd6a5f04055af689ef84497d424df52f630c128d59cc977d96c5118ec9\"" Aug 5 22:19:51.487148 systemd[1]: Started cri-containerd-f43df8bd6a5f04055af689ef84497d424df52f630c128d59cc977d96c5118ec9.scope - libcontainer container f43df8bd6a5f04055af689ef84497d424df52f630c128d59cc977d96c5118ec9. Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.417 [INFO][4734] k8s.go 608: Cleaning up netns ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.420 [INFO][4734] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" iface="eth0" netns="/var/run/netns/cni-84a6927a-c0ff-a3f7-671d-574f2a0fb698" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.425 [INFO][4734] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" iface="eth0" netns="/var/run/netns/cni-84a6927a-c0ff-a3f7-671d-574f2a0fb698" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.426 [INFO][4734] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" iface="eth0" netns="/var/run/netns/cni-84a6927a-c0ff-a3f7-671d-574f2a0fb698" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.426 [INFO][4734] k8s.go 615: Releasing IP address(es) ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.426 [INFO][4734] utils.go 188: Calico CNI releasing IP address ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.472 [INFO][4740] ipam_plugin.go 411: Releasing address using handleID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.473 [INFO][4740] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.474 [INFO][4740] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.487 [WARNING][4740] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.488 [INFO][4740] ipam_plugin.go 439: Releasing address using workloadID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.490 [INFO][4740] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:19:51.505728 containerd[1955]: 2024-08-05 22:19:51.493 [INFO][4734] k8s.go 621: Teardown processing complete. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:19:51.507135 containerd[1955]: time="2024-08-05T22:19:51.505932382Z" level=info msg="TearDown network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" successfully" Aug 5 22:19:51.507135 containerd[1955]: time="2024-08-05T22:19:51.505986451Z" level=info msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" returns successfully" Aug 5 22:19:51.507135 containerd[1955]: time="2024-08-05T22:19:51.506951463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g7mq,Uid:b0a89043-6691-488b-a80a-a04aac1af3dc,Namespace:kube-system,Attempt:1,}" Aug 5 22:19:51.546596 containerd[1955]: time="2024-08-05T22:19:51.546551177Z" level=info msg="StartContainer for \"f43df8bd6a5f04055af689ef84497d424df52f630c128d59cc977d96c5118ec9\" returns successfully" Aug 5 22:19:51.652622 systemd-networkd[1802]: vxlan.calico: Gained IPv6LL Aug 5 22:19:51.714062 systemd[1]: run-netns-cni\x2d84a6927a\x2dc0ff\x2da3f7\x2d671d\x2d574f2a0fb698.mount: Deactivated 
successfully. Aug 5 22:19:51.828812 systemd-networkd[1802]: calif272013e0bb: Link UP Aug 5 22:19:51.833479 systemd-networkd[1802]: calif272013e0bb: Gained carrier Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.616 [INFO][4782] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0 coredns-5dd5756b68- kube-system b0a89043-6691-488b-a80a-a04aac1af3dc 762 0 2024-08-05 22:19:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-26-236 coredns-5dd5756b68-7g7mq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif272013e0bb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.616 [INFO][4782] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.728 [INFO][4796] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" HandleID="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.751 [INFO][4796] ipam_plugin.go 264: Auto assigning IP ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" 
HandleID="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000376d80), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-26-236", "pod":"coredns-5dd5756b68-7g7mq", "timestamp":"2024-08-05 22:19:51.728758844 +0000 UTC"}, Hostname:"ip-172-31-26-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.752 [INFO][4796] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.752 [INFO][4796] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.752 [INFO][4796] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-236' Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.755 [INFO][4796] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.764 [INFO][4796] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.770 [INFO][4796] ipam.go 489: Trying affinity for 192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.774 [INFO][4796] ipam.go 155: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.778 [INFO][4796] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 
22:19:51.778 [INFO][4796] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.782 [INFO][4796] ipam.go 1685: Creating new handle: k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511 Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.794 [INFO][4796] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.808 [INFO][4796] ipam.go 1216: Successfully claimed IPs: [192.168.16.194/26] block=192.168.16.192/26 handle="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.808 [INFO][4796] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.194/26] handle="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" host="ip-172-31-26-236" Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.808 [INFO][4796] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:19:51.888409 containerd[1955]: 2024-08-05 22:19:51.808 [INFO][4796] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.16.194/26] IPv6=[] ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" HandleID="k8s-pod-network.7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.813 [INFO][4782] k8s.go 386: Populated endpoint ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b0a89043-6691-488b-a80a-a04aac1af3dc", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"", Pod:"coredns-5dd5756b68-7g7mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif272013e0bb", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.813 [INFO][4782] k8s.go 387: Calico CNI using IPs: [192.168.16.194/32] ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.814 [INFO][4782] dataplane_linux.go 68: Setting the host side veth name to calif272013e0bb ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.835 [INFO][4782] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.839 [INFO][4782] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b0a89043-6691-488b-a80a-a04aac1af3dc", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511", Pod:"coredns-5dd5756b68-7g7mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif272013e0bb", MAC:"8e:2c:f0:f5:02:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:51.890713 containerd[1955]: 2024-08-05 22:19:51.880 [INFO][4782] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511" Namespace="kube-system" 
Pod="coredns-5dd5756b68-7g7mq" WorkloadEndpoint="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:19:51.988658 containerd[1955]: time="2024-08-05T22:19:51.988477674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:19:51.988829 containerd[1955]: time="2024-08-05T22:19:51.988554747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:51.988829 containerd[1955]: time="2024-08-05T22:19:51.988592130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:19:51.988829 containerd[1955]: time="2024-08-05T22:19:51.988611569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:52.077179 systemd[1]: Started cri-containerd-7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511.scope - libcontainer container 7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511. 
Aug 5 22:19:52.208058 containerd[1955]: time="2024-08-05T22:19:52.208005067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-7g7mq,Uid:b0a89043-6691-488b-a80a-a04aac1af3dc,Namespace:kube-system,Attempt:1,} returns sandbox id \"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511\"" Aug 5 22:19:52.218341 containerd[1955]: time="2024-08-05T22:19:52.218279353Z" level=info msg="CreateContainer within sandbox \"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:19:52.256981 containerd[1955]: time="2024-08-05T22:19:52.252134281Z" level=info msg="CreateContainer within sandbox \"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a41a441832d51572f038b9fd80eb7536e0a2f83f50bef45b2bfc8718d4196f97\"" Aug 5 22:19:52.257419 containerd[1955]: time="2024-08-05T22:19:52.257375095Z" level=info msg="StartContainer for \"a41a441832d51572f038b9fd80eb7536e0a2f83f50bef45b2bfc8718d4196f97\"" Aug 5 22:19:52.313134 systemd[1]: Started cri-containerd-a41a441832d51572f038b9fd80eb7536e0a2f83f50bef45b2bfc8718d4196f97.scope - libcontainer container a41a441832d51572f038b9fd80eb7536e0a2f83f50bef45b2bfc8718d4196f97. 
Aug 5 22:19:52.343913 containerd[1955]: time="2024-08-05T22:19:52.343525367Z" level=info msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\"" Aug 5 22:19:52.347816 systemd-networkd[1802]: cali46a2ba1069b: Gained IPv6LL Aug 5 22:19:52.415901 containerd[1955]: time="2024-08-05T22:19:52.413168611Z" level=info msg="StartContainer for \"a41a441832d51572f038b9fd80eb7536e0a2f83f50bef45b2bfc8718d4196f97\" returns successfully" Aug 5 22:19:52.508947 kubelet[3447]: I0805 22:19:52.508617 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-4czbl" podStartSLOduration=36.508566617 podCreationTimestamp="2024-08-05 22:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:51.948038058 +0000 UTC m=+48.832065923" watchObservedRunningTime="2024-08-05 22:19:52.508566617 +0000 UTC m=+49.392594481" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.510 [INFO][4909] k8s.go 608: Cleaning up netns ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.511 [INFO][4909] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" iface="eth0" netns="/var/run/netns/cni-5c56d1df-d0b7-483a-f866-2c72a7af5e44" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.511 [INFO][4909] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" iface="eth0" netns="/var/run/netns/cni-5c56d1df-d0b7-483a-f866-2c72a7af5e44" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.511 [INFO][4909] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" iface="eth0" netns="/var/run/netns/cni-5c56d1df-d0b7-483a-f866-2c72a7af5e44" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.511 [INFO][4909] k8s.go 615: Releasing IP address(es) ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.512 [INFO][4909] utils.go 188: Calico CNI releasing IP address ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.573 [INFO][4920] ipam_plugin.go 411: Releasing address using handleID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.574 [INFO][4920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.574 [INFO][4920] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.585 [WARNING][4920] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.585 [INFO][4920] ipam_plugin.go 439: Releasing address using workloadID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.592 [INFO][4920] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:19:52.597436 containerd[1955]: 2024-08-05 22:19:52.594 [INFO][4909] k8s.go 621: Teardown processing complete. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Aug 5 22:19:52.598418 containerd[1955]: time="2024-08-05T22:19:52.597787093Z" level=info msg="TearDown network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" successfully" Aug 5 22:19:52.598418 containerd[1955]: time="2024-08-05T22:19:52.597821686Z" level=info msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" returns successfully" Aug 5 22:19:52.600297 containerd[1955]: time="2024-08-05T22:19:52.599559744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgdwc,Uid:f6fc55fe-9251-4794-b8d5-cba9ade83b18,Namespace:calico-system,Attempt:1,}" Aug 5 22:19:52.727276 systemd[1]: run-netns-cni\x2d5c56d1df\x2dd0b7\x2d483a\x2df866\x2d2c72a7af5e44.mount: Deactivated successfully. 
Aug 5 22:19:52.851563 kubelet[3447]: I0805 22:19:52.851446 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-7g7mq" podStartSLOduration=36.849796421 podCreationTimestamp="2024-08-05 22:19:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:19:52.84921521 +0000 UTC m=+49.733243075" watchObservedRunningTime="2024-08-05 22:19:52.849796421 +0000 UTC m=+49.733824286" Aug 5 22:19:53.076316 systemd-networkd[1802]: cali817fd4de119: Link UP Aug 5 22:19:53.078244 systemd-networkd[1802]: cali817fd4de119: Gained carrier Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.711 [INFO][4926] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0 csi-node-driver- calico-system f6fc55fe-9251-4794-b8d5-cba9ade83b18 782 0 2024-08-05 22:19:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-26-236 csi-node-driver-wgdwc eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali817fd4de119 [] []}} ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.711 [INFO][4926] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.124689 containerd[1955]: 
2024-08-05 22:19:52.872 [INFO][4937] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" HandleID="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.909 [INFO][4937] ipam_plugin.go 264: Auto assigning IP ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" HandleID="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-236", "pod":"csi-node-driver-wgdwc", "timestamp":"2024-08-05 22:19:52.870831087 +0000 UTC"}, Hostname:"ip-172-31-26-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.909 [INFO][4937] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.909 [INFO][4937] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.909 [INFO][4937] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-236' Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.935 [INFO][4937] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.977 [INFO][4937] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.990 [INFO][4937] ipam.go 489: Trying affinity for 192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:52.995 [INFO][4937] ipam.go 155: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.003 [INFO][4937] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.004 [INFO][4937] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.014 [INFO][4937] ipam.go 1685: Creating new handle: k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.026 [INFO][4937] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.064 [INFO][4937] ipam.go 1216: Successfully claimed IPs: [192.168.16.195/26] block=192.168.16.192/26 
handle="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.064 [INFO][4937] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.195/26] handle="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" host="ip-172-31-26-236" Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.064 [INFO][4937] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:19:53.124689 containerd[1955]: 2024-08-05 22:19:53.064 [INFO][4937] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.16.195/26] IPv6=[] ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" HandleID="k8s-pod-network.9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.069 [INFO][4926] k8s.go 386: Populated endpoint ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6fc55fe-9251-4794-b8d5-cba9ade83b18", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"", Pod:"csi-node-driver-wgdwc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali817fd4de119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.069 [INFO][4926] k8s.go 387: Calico CNI using IPs: [192.168.16.195/32] ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.069 [INFO][4926] dataplane_linux.go 68: Setting the host side veth name to cali817fd4de119 ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.082 [INFO][4926] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.084 [INFO][4926] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" 
WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6fc55fe-9251-4794-b8d5-cba9ade83b18", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b", Pod:"csi-node-driver-wgdwc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali817fd4de119", MAC:"5e:a2:0f:2a:fd:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:53.125707 containerd[1955]: 2024-08-05 22:19:53.116 [INFO][4926] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b" Namespace="calico-system" Pod="csi-node-driver-wgdwc" WorkloadEndpoint="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0" Aug 5 22:19:53.181484 containerd[1955]: 
time="2024-08-05T22:19:53.181329820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:19:53.181484 containerd[1955]: time="2024-08-05T22:19:53.181418061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:53.181484 containerd[1955]: time="2024-08-05T22:19:53.181450935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:19:53.182013 containerd[1955]: time="2024-08-05T22:19:53.181476145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:53.225125 systemd[1]: Started cri-containerd-9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b.scope - libcontainer container 9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b. Aug 5 22:19:53.278351 containerd[1955]: time="2024-08-05T22:19:53.278303995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wgdwc,Uid:f6fc55fe-9251-4794-b8d5-cba9ade83b18,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b\"" Aug 5 22:19:53.281335 containerd[1955]: time="2024-08-05T22:19:53.281294504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:19:53.355034 containerd[1955]: time="2024-08-05T22:19:53.354994261Z" level=info msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\"" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.445 [INFO][5016] k8s.go 608: Cleaning up netns ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.445 [INFO][5016] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" iface="eth0" netns="/var/run/netns/cni-a16ce968-38d6-0b23-f7bd-4fc258c81d3a" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.446 [INFO][5016] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" iface="eth0" netns="/var/run/netns/cni-a16ce968-38d6-0b23-f7bd-4fc258c81d3a" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.446 [INFO][5016] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" iface="eth0" netns="/var/run/netns/cni-a16ce968-38d6-0b23-f7bd-4fc258c81d3a" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.446 [INFO][5016] k8s.go 615: Releasing IP address(es) ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.446 [INFO][5016] utils.go 188: Calico CNI releasing IP address ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.480 [INFO][5023] ipam_plugin.go 411: Releasing address using handleID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.481 [INFO][5023] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.481 [INFO][5023] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.488 [WARNING][5023] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.488 [INFO][5023] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.490 [INFO][5023] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:19:53.494310 containerd[1955]: 2024-08-05 22:19:53.492 [INFO][5016] k8s.go 621: Teardown processing complete. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:19:53.501075 containerd[1955]: time="2024-08-05T22:19:53.494985043Z" level=info msg="TearDown network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" successfully" Aug 5 22:19:53.501075 containerd[1955]: time="2024-08-05T22:19:53.495039063Z" level=info msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" returns successfully" Aug 5 22:19:53.501075 containerd[1955]: time="2024-08-05T22:19:53.498201251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c7b85bf-dzc2m,Uid:2c94ec30-391b-47f9-8ba5-0553eac916fc,Namespace:calico-system,Attempt:1,}" Aug 5 22:19:53.501733 systemd[1]: run-netns-cni\x2da16ce968\x2d38d6\x2d0b23\x2df7bd\x2d4fc258c81d3a.mount: Deactivated successfully. 
Aug 5 22:19:53.638524 systemd-networkd[1802]: calif272013e0bb: Gained IPv6LL Aug 5 22:19:53.868218 systemd-networkd[1802]: cali8c19e9e3c99: Link UP Aug 5 22:19:53.881158 systemd-networkd[1802]: cali8c19e9e3c99: Gained carrier Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.582 [INFO][5034] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0 calico-kube-controllers-79c7b85bf- calico-system 2c94ec30-391b-47f9-8ba5-0553eac916fc 803 0 2024-08-05 22:19:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:79c7b85bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-26-236 calico-kube-controllers-79c7b85bf-dzc2m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8c19e9e3c99 [] []}} ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.582 [INFO][5034] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.726 [INFO][5042] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" HandleID="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" 
Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.750 [INFO][5042] ipam_plugin.go 264: Auto assigning IP ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" HandleID="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a830), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-26-236", "pod":"calico-kube-controllers-79c7b85bf-dzc2m", "timestamp":"2024-08-05 22:19:53.726339789 +0000 UTC"}, Hostname:"ip-172-31-26-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.750 [INFO][5042] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.751 [INFO][5042] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.751 [INFO][5042] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-236' Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.754 [INFO][5042] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.762 [INFO][5042] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.784 [INFO][5042] ipam.go 489: Trying affinity for 192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.793 [INFO][5042] ipam.go 155: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.802 [INFO][5042] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.803 [INFO][5042] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.808 [INFO][5042] ipam.go 1685: Creating new handle: k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711 Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.824 [INFO][5042] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.851 [INFO][5042] ipam.go 1216: Successfully claimed IPs: [192.168.16.196/26] block=192.168.16.192/26 
handle="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.851 [INFO][5042] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.196/26] handle="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" host="ip-172-31-26-236" Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.852 [INFO][5042] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:19:53.922683 containerd[1955]: 2024-08-05 22:19:53.852 [INFO][5042] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.16.196/26] IPv6=[] ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" HandleID="k8s-pod-network.2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.859 [INFO][5034] k8s.go 386: Populated endpoint ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0", GenerateName:"calico-kube-controllers-79c7b85bf-", Namespace:"calico-system", SelfLink:"", UID:"2c94ec30-391b-47f9-8ba5-0553eac916fc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c7b85bf", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"", Pod:"calico-kube-controllers-79c7b85bf-dzc2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c19e9e3c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.859 [INFO][5034] k8s.go 387: Calico CNI using IPs: [192.168.16.196/32] ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.860 [INFO][5034] dataplane_linux.go 68: Setting the host side veth name to cali8c19e9e3c99 ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.887 [INFO][5034] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.894 [INFO][5034] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0", GenerateName:"calico-kube-controllers-79c7b85bf-", Namespace:"calico-system", SelfLink:"", UID:"2c94ec30-391b-47f9-8ba5-0553eac916fc", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c7b85bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711", Pod:"calico-kube-controllers-79c7b85bf-dzc2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c19e9e3c99", MAC:"0e:24:be:07:4f:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:19:53.923922 containerd[1955]: 2024-08-05 22:19:53.914 [INFO][5034] k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711" Namespace="calico-system" Pod="calico-kube-controllers-79c7b85bf-dzc2m" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:19:53.938997 systemd[1]: Started sshd@10-172.31.26.236:22-139.178.89.65:33074.service - OpenSSH per-connection server daemon (139.178.89.65:33074). Aug 5 22:19:54.027560 containerd[1955]: time="2024-08-05T22:19:54.026442633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:19:54.027560 containerd[1955]: time="2024-08-05T22:19:54.026520739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:54.027560 containerd[1955]: time="2024-08-05T22:19:54.026559862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:19:54.027560 containerd[1955]: time="2024-08-05T22:19:54.026587370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:19:54.111307 systemd[1]: Started cri-containerd-2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711.scope - libcontainer container 2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711. Aug 5 22:19:54.189447 sshd[5057]: Accepted publickey for core from 139.178.89.65 port 33074 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:19:54.192782 sshd[5057]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:19:54.203714 systemd-logind[1944]: New session 11 of user core. Aug 5 22:19:54.208309 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 5 22:19:54.282390 containerd[1955]: time="2024-08-05T22:19:54.282217423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-79c7b85bf-dzc2m,Uid:2c94ec30-391b-47f9-8ba5-0553eac916fc,Namespace:calico-system,Attempt:1,} returns sandbox id \"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711\"" Aug 5 22:19:54.392455 systemd-networkd[1802]: cali817fd4de119: Gained IPv6LL Aug 5 22:19:54.846277 sshd[5057]: pam_unix(sshd:session): session closed for user core Aug 5 22:19:54.861350 systemd[1]: sshd@10-172.31.26.236:22-139.178.89.65:33074.service: Deactivated successfully. Aug 5 22:19:54.864517 systemd-logind[1944]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:19:54.870692 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:19:54.878209 systemd-logind[1944]: Removed session 11. Aug 5 22:19:55.239767 containerd[1955]: time="2024-08-05T22:19:55.239450948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:55.242134 containerd[1955]: time="2024-08-05T22:19:55.242026971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:19:55.248935 containerd[1955]: time="2024-08-05T22:19:55.248100016Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:55.253007 containerd[1955]: time="2024-08-05T22:19:55.252939085Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:55.263064 containerd[1955]: time="2024-08-05T22:19:55.262920959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id 
\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.980880058s" Aug 5 22:19:55.263594 containerd[1955]: time="2024-08-05T22:19:55.263438731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:19:55.265472 containerd[1955]: time="2024-08-05T22:19:55.265438233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:19:55.270401 containerd[1955]: time="2024-08-05T22:19:55.270248499Z" level=info msg="CreateContainer within sandbox \"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:19:55.341023 containerd[1955]: time="2024-08-05T22:19:55.340887707Z" level=info msg="CreateContainer within sandbox \"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1ffbf0080fc3a39c8b49ea26f67b6852daebc9854633641ad93a35e6b4dbfa23\"" Aug 5 22:19:55.346022 containerd[1955]: time="2024-08-05T22:19:55.343077155Z" level=info msg="StartContainer for \"1ffbf0080fc3a39c8b49ea26f67b6852daebc9854633641ad93a35e6b4dbfa23\"" Aug 5 22:19:55.350337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323550298.mount: Deactivated successfully. Aug 5 22:19:55.352148 systemd-networkd[1802]: cali8c19e9e3c99: Gained IPv6LL Aug 5 22:19:55.465238 systemd[1]: Started cri-containerd-1ffbf0080fc3a39c8b49ea26f67b6852daebc9854633641ad93a35e6b4dbfa23.scope - libcontainer container 1ffbf0080fc3a39c8b49ea26f67b6852daebc9854633641ad93a35e6b4dbfa23. 
Aug 5 22:19:55.601454 containerd[1955]: time="2024-08-05T22:19:55.600794585Z" level=info msg="StartContainer for \"1ffbf0080fc3a39c8b49ea26f67b6852daebc9854633641ad93a35e6b4dbfa23\" returns successfully" Aug 5 22:19:58.098895 ntpd[1939]: Listen normally on 7 vxlan.calico 192.168.16.192:123 Aug 5 22:19:58.101774 ntpd[1939]: Listen normally on 8 vxlan.calico [fe80::6469:7aff:fec8:71f1%4]:123 Aug 5 22:19:58.101886 ntpd[1939]: Listen normally on 9 cali46a2ba1069b [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 22:19:58.101946 ntpd[1939]: Listen normally on 10 calif272013e0bb [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:19:58.102042 ntpd[1939]: Listen normally on 11 cali817fd4de119 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:19:58.102083 ntpd[1939]: Listen normally on 12 cali8c19e9e3c99 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:19:58.595523 containerd[1955]: time="2024-08-05T22:19:58.595324297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:58.597606 containerd[1955]: time="2024-08-05T22:19:58.596517616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active
requests=0, bytes read=33505793" Aug 5 22:19:58.603307 containerd[1955]: time="2024-08-05T22:19:58.603251322Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:58.611243 containerd[1955]: time="2024-08-05T22:19:58.611097082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:19:58.612731 containerd[1955]: time="2024-08-05T22:19:58.612230561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.346095291s" Aug 5 22:19:58.613536 containerd[1955]: time="2024-08-05T22:19:58.612886041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:19:58.615245 containerd[1955]: time="2024-08-05T22:19:58.615213875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:19:58.680460 containerd[1955]: time="2024-08-05T22:19:58.680417282Z" level=info msg="CreateContainer within sandbox \"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:19:58.719764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692000428.mount: Deactivated successfully. 
Aug 5 22:19:58.737872 containerd[1955]: time="2024-08-05T22:19:58.737823727Z" level=info msg="CreateContainer within sandbox \"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa\""
Aug 5 22:19:58.739027 containerd[1955]: time="2024-08-05T22:19:58.738995516Z" level=info msg="StartContainer for \"5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa\""
Aug 5 22:19:58.821316 systemd[1]: Started cri-containerd-5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa.scope - libcontainer container 5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa.
Aug 5 22:19:58.957029 containerd[1955]: time="2024-08-05T22:19:58.956492886Z" level=info msg="StartContainer for \"5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa\" returns successfully"
Aug 5 22:19:59.893285 systemd[1]: Started sshd@11-172.31.26.236:22-139.178.89.65:33082.service - OpenSSH per-connection server daemon (139.178.89.65:33082).
Aug 5 22:20:00.480684 sshd[5212]: Accepted publickey for core from 139.178.89.65 port 33082 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:20:00.485026 sshd[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:00.515004 systemd-logind[1944]: New session 12 of user core.
Aug 5 22:20:00.521147 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 5 22:20:00.594068 kubelet[3447]: I0805 22:20:00.592716 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-79c7b85bf-dzc2m" podStartSLOduration=33.264728213 podCreationTimestamp="2024-08-05 22:19:23 +0000 UTC" firstStartedPulling="2024-08-05 22:19:54.285417686 +0000 UTC m=+51.169553524" lastFinishedPulling="2024-08-05 22:19:58.613455177 +0000 UTC m=+55.497483035" observedRunningTime="2024-08-05 22:20:00.254418762 +0000 UTC m=+57.138446639" watchObservedRunningTime="2024-08-05 22:20:00.592657724 +0000 UTC m=+57.476685622"
Aug 5 22:20:00.932832 containerd[1955]: time="2024-08-05T22:20:00.932779670Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:00.934737 containerd[1955]: time="2024-08-05T22:20:00.934595707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655"
Aug 5 22:20:00.937157 containerd[1955]: time="2024-08-05T22:20:00.937111904Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:00.958941 containerd[1955]: time="2024-08-05T22:20:00.957956098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:00.962534 containerd[1955]: time="2024-08-05T22:20:00.959736392Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.344472491s"
Aug 5 22:20:00.962534 containerd[1955]: time="2024-08-05T22:20:00.962446902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\""
Aug 5 22:20:00.969007 containerd[1955]: time="2024-08-05T22:20:00.968946591Z" level=info msg="CreateContainer within sandbox \"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 5 22:20:01.017569 containerd[1955]: time="2024-08-05T22:20:01.017417038Z" level=info msg="CreateContainer within sandbox \"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8dca621a748d417d3ebb3ea9922b4812eba87bfebb641595221486197da5db48\""
Aug 5 22:20:01.023031 containerd[1955]: time="2024-08-05T22:20:01.019050879Z" level=info msg="StartContainer for \"8dca621a748d417d3ebb3ea9922b4812eba87bfebb641595221486197da5db48\""
Aug 5 22:20:01.211725 systemd[1]: Started cri-containerd-8dca621a748d417d3ebb3ea9922b4812eba87bfebb641595221486197da5db48.scope - libcontainer container 8dca621a748d417d3ebb3ea9922b4812eba87bfebb641595221486197da5db48.
Aug 5 22:20:01.262222 sshd[5212]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:01.286613 systemd[1]: sshd@11-172.31.26.236:22-139.178.89.65:33082.service: Deactivated successfully.
Aug 5 22:20:01.300765 containerd[1955]: time="2024-08-05T22:20:01.297143506Z" level=info msg="StartContainer for \"8dca621a748d417d3ebb3ea9922b4812eba87bfebb641595221486197da5db48\" returns successfully"
Aug 5 22:20:01.309616 systemd[1]: session-12.scope: Deactivated successfully.
Aug 5 22:20:01.316553 systemd-logind[1944]: Session 12 logged out. Waiting for processes to exit.
Aug 5 22:20:01.321606 systemd-logind[1944]: Removed session 12.
Aug 5 22:20:01.940411 kubelet[3447]: I0805 22:20:01.940276 3447 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Aug 5 22:20:01.941545 kubelet[3447]: I0805 22:20:01.940497 3447 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Aug 5 22:20:03.393310 containerd[1955]: time="2024-08-05T22:20:03.393193904Z" level=info msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\""
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.451 [WARNING][5308] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6fc55fe-9251-4794-b8d5-cba9ade83b18", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b", Pod:"csi-node-driver-wgdwc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali817fd4de119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.451 [INFO][5308] k8s.go 608: Cleaning up netns ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.451 [INFO][5308] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" iface="eth0" netns=""
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.451 [INFO][5308] k8s.go 615: Releasing IP address(es) ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.451 [INFO][5308] utils.go 188: Calico CNI releasing IP address ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.532 [INFO][5314] ipam_plugin.go 411: Releasing address using handleID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.532 [INFO][5314] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.533 [INFO][5314] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.547 [WARNING][5314] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.547 [INFO][5314] ipam_plugin.go 439: Releasing address using workloadID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.551 [INFO][5314] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:03.560990 containerd[1955]: 2024-08-05 22:20:03.555 [INFO][5308] k8s.go 621: Teardown processing complete. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.560990 containerd[1955]: time="2024-08-05T22:20:03.559941540Z" level=info msg="TearDown network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" successfully"
Aug 5 22:20:03.560990 containerd[1955]: time="2024-08-05T22:20:03.559986938Z" level=info msg="StopPodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" returns successfully"
Aug 5 22:20:03.565686 containerd[1955]: time="2024-08-05T22:20:03.561363337Z" level=info msg="RemovePodSandbox for \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\""
Aug 5 22:20:03.565686 containerd[1955]: time="2024-08-05T22:20:03.561402867Z" level=info msg="Forcibly stopping sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\""
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.634 [WARNING][5332] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6fc55fe-9251-4794-b8d5-cba9ade83b18", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"9ad67740bf8bf99b850fb718835c13d96c6ad95ed89f43ee22221b2f1037b13b", Pod:"csi-node-driver-wgdwc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali817fd4de119", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.634 [INFO][5332] k8s.go 608: Cleaning up netns ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.634 [INFO][5332] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" iface="eth0" netns=""
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.635 [INFO][5332] k8s.go 615: Releasing IP address(es) ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.635 [INFO][5332] utils.go 188: Calico CNI releasing IP address ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.673 [INFO][5338] ipam_plugin.go 411: Releasing address using handleID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.673 [INFO][5338] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.673 [INFO][5338] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.681 [WARNING][5338] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.681 [INFO][5338] ipam_plugin.go 439: Releasing address using workloadID ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" HandleID="k8s-pod-network.15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a" Workload="ip--172--31--26--236-k8s-csi--node--driver--wgdwc-eth0"
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.684 [INFO][5338] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:03.695300 containerd[1955]: 2024-08-05 22:20:03.687 [INFO][5332] k8s.go 621: Teardown processing complete. ContainerID="15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a"
Aug 5 22:20:03.696191 containerd[1955]: time="2024-08-05T22:20:03.695349491Z" level=info msg="TearDown network for sandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" successfully"
Aug 5 22:20:03.737057 containerd[1955]: time="2024-08-05T22:20:03.736943056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:20:03.737224 containerd[1955]: time="2024-08-05T22:20:03.737107146Z" level=info msg="RemovePodSandbox \"15de146320eee112970ef3ab8055b1394698031547c6b8e24857ada5113fb41a\" returns successfully"
Aug 5 22:20:03.739311 containerd[1955]: time="2024-08-05T22:20:03.739264908Z" level=info msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\""
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.817 [WARNING][5356] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5698d75-b00e-4848-8925-21116137974b", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e", Pod:"coredns-5dd5756b68-4czbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46a2ba1069b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.818 [INFO][5356] k8s.go 608: Cleaning up netns ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.818 [INFO][5356] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" iface="eth0" netns=""
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.818 [INFO][5356] k8s.go 615: Releasing IP address(es) ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.818 [INFO][5356] utils.go 188: Calico CNI releasing IP address ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.861 [INFO][5362] ipam_plugin.go 411: Releasing address using handleID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.862 [INFO][5362] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.862 [INFO][5362] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.874 [WARNING][5362] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.874 [INFO][5362] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.877 [INFO][5362] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:03.889314 containerd[1955]: 2024-08-05 22:20:03.881 [INFO][5356] k8s.go 621: Teardown processing complete. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:03.890630 containerd[1955]: time="2024-08-05T22:20:03.890465249Z" level=info msg="TearDown network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" successfully"
Aug 5 22:20:03.890630 containerd[1955]: time="2024-08-05T22:20:03.890502075Z" level=info msg="StopPodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" returns successfully"
Aug 5 22:20:03.891865 containerd[1955]: time="2024-08-05T22:20:03.891657731Z" level=info msg="RemovePodSandbox for \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\""
Aug 5 22:20:03.891865 containerd[1955]: time="2024-08-05T22:20:03.891817747Z" level=info msg="Forcibly stopping sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\""
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.957 [WARNING][5380] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"f5698d75-b00e-4848-8925-21116137974b", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"c729709daf579907fdf0667380786c7975fc800099ba195f0042667c0eed8e2e", Pod:"coredns-5dd5756b68-4czbl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali46a2ba1069b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.957 [INFO][5380] k8s.go 608: Cleaning up netns ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.957 [INFO][5380] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" iface="eth0" netns=""
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.957 [INFO][5380] k8s.go 615: Releasing IP address(es) ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.957 [INFO][5380] utils.go 188: Calico CNI releasing IP address ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.987 [INFO][5387] ipam_plugin.go 411: Releasing address using handleID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.987 [INFO][5387] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:03.987 [INFO][5387] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:04.005 [WARNING][5387] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:04.005 [INFO][5387] ipam_plugin.go 439: Releasing address using workloadID ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" HandleID="k8s-pod-network.0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--4czbl-eth0"
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:04.012 [INFO][5387] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:04.020035 containerd[1955]: 2024-08-05 22:20:04.016 [INFO][5380] k8s.go 621: Teardown processing complete. ContainerID="0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0"
Aug 5 22:20:04.020035 containerd[1955]: time="2024-08-05T22:20:04.019742944Z" level=info msg="TearDown network for sandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" successfully"
Aug 5 22:20:04.038147 containerd[1955]: time="2024-08-05T22:20:04.037703852Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:20:04.038147 containerd[1955]: time="2024-08-05T22:20:04.037803482Z" level=info msg="RemovePodSandbox \"0759a17b2e3645294b010f7d74c2ffe22c2a8dd52ab2078ed5ea5e66b94f4ec0\" returns successfully"
Aug 5 22:20:04.041314 containerd[1955]: time="2024-08-05T22:20:04.041276946Z" level=info msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\""
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.105 [WARNING][5405] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b0a89043-6691-488b-a80a-a04aac1af3dc", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511", Pod:"coredns-5dd5756b68-7g7mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif272013e0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.105 [INFO][5405] k8s.go 608: Cleaning up netns ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.105 [INFO][5405] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" iface="eth0" netns=""
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.105 [INFO][5405] k8s.go 615: Releasing IP address(es) ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.105 [INFO][5405] utils.go 188: Calico CNI releasing IP address ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.168 [INFO][5413] ipam_plugin.go 411: Releasing address using handleID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.168 [INFO][5413] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.168 [INFO][5413] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.179 [WARNING][5413] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.179 [INFO][5413] ipam_plugin.go 439: Releasing address using workloadID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0"
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.181 [INFO][5413] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:04.185107 containerd[1955]: 2024-08-05 22:20:04.183 [INFO][5405] k8s.go 621: Teardown processing complete. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71"
Aug 5 22:20:04.185747 containerd[1955]: time="2024-08-05T22:20:04.185148582Z" level=info msg="TearDown network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" successfully"
Aug 5 22:20:04.185747 containerd[1955]: time="2024-08-05T22:20:04.185182409Z" level=info msg="StopPodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" returns successfully"
Aug 5 22:20:04.185747 containerd[1955]: time="2024-08-05T22:20:04.185699245Z" level=info msg="RemovePodSandbox for \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\""
Aug 5 22:20:04.185747 containerd[1955]: time="2024-08-05T22:20:04.185735997Z" level=info msg="Forcibly stopping sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\""
Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.282 [WARNING][5432] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"b0a89043-6691-488b-a80a-a04aac1af3dc", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"7ec135b6601fb41aa2ddee3ab4ee7f911d32868e0217b39e9be91d29e91af511", Pod:"coredns-5dd5756b68-7g7mq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif272013e0bb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.282 [INFO][5432] k8s.go 608: Cleaning up
netns ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.282 [INFO][5432] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" iface="eth0" netns="" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.283 [INFO][5432] k8s.go 615: Releasing IP address(es) ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.283 [INFO][5432] utils.go 188: Calico CNI releasing IP address ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.316 [INFO][5438] ipam_plugin.go 411: Releasing address using handleID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.317 [INFO][5438] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.317 [INFO][5438] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.323 [WARNING][5438] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.323 [INFO][5438] ipam_plugin.go 439: Releasing address using workloadID ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" HandleID="k8s-pod-network.81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Workload="ip--172--31--26--236-k8s-coredns--5dd5756b68--7g7mq-eth0" Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.326 [INFO][5438] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:20:04.346217 containerd[1955]: 2024-08-05 22:20:04.335 [INFO][5432] k8s.go 621: Teardown processing complete. ContainerID="81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71" Aug 5 22:20:04.346217 containerd[1955]: time="2024-08-05T22:20:04.346171842Z" level=info msg="TearDown network for sandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" successfully" Aug 5 22:20:04.365717 containerd[1955]: time="2024-08-05T22:20:04.365650082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:20:04.365906 containerd[1955]: time="2024-08-05T22:20:04.365735908Z" level=info msg="RemovePodSandbox \"81ed8d46f0c010dd3dfa688452df0e78a11a9dea3173c3cb3bdfe641a1685f71\" returns successfully" Aug 5 22:20:04.366625 containerd[1955]: time="2024-08-05T22:20:04.366252001Z" level=info msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\"" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.418 [WARNING][5456] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0", GenerateName:"calico-kube-controllers-79c7b85bf-", Namespace:"calico-system", SelfLink:"", UID:"2c94ec30-391b-47f9-8ba5-0553eac916fc", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c7b85bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711", Pod:"calico-kube-controllers-79c7b85bf-dzc2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c19e9e3c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.419 [INFO][5456] k8s.go 608: Cleaning up netns ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.419 [INFO][5456] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" iface="eth0" netns="" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.419 [INFO][5456] k8s.go 615: Releasing IP address(es) ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.419 [INFO][5456] utils.go 188: Calico CNI releasing IP address ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.447 [INFO][5462] ipam_plugin.go 411: Releasing address using handleID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.447 [INFO][5462] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.447 [INFO][5462] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.455 [WARNING][5462] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.455 [INFO][5462] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.457 [INFO][5462] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:20:04.461055 containerd[1955]: 2024-08-05 22:20:04.459 [INFO][5456] k8s.go 621: Teardown processing complete. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.462310 containerd[1955]: time="2024-08-05T22:20:04.462168542Z" level=info msg="TearDown network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" successfully" Aug 5 22:20:04.462310 containerd[1955]: time="2024-08-05T22:20:04.462206393Z" level=info msg="StopPodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" returns successfully" Aug 5 22:20:04.462756 containerd[1955]: time="2024-08-05T22:20:04.462720100Z" level=info msg="RemovePodSandbox for \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\"" Aug 5 22:20:04.462849 containerd[1955]: time="2024-08-05T22:20:04.462755621Z" level=info msg="Forcibly stopping sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\"" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.521 [WARNING][5480] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0", GenerateName:"calico-kube-controllers-79c7b85bf-", Namespace:"calico-system", SelfLink:"", UID:"2c94ec30-391b-47f9-8ba5-0553eac916fc", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 19, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"79c7b85bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"2e83000679a07abced31714714420a6c25a67d311fe7595a8549f80c591e4711", Pod:"calico-kube-controllers-79c7b85bf-dzc2m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8c19e9e3c99", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.522 [INFO][5480] k8s.go 608: Cleaning up netns ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.522 [INFO][5480] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" iface="eth0" netns="" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.522 [INFO][5480] k8s.go 615: Releasing IP address(es) ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.522 [INFO][5480] utils.go 188: Calico CNI releasing IP address ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.591 [INFO][5486] ipam_plugin.go 411: Releasing address using handleID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.591 [INFO][5486] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.591 [INFO][5486] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.598 [WARNING][5486] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.598 [INFO][5486] ipam_plugin.go 439: Releasing address using workloadID ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" HandleID="k8s-pod-network.694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Workload="ip--172--31--26--236-k8s-calico--kube--controllers--79c7b85bf--dzc2m-eth0" Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.601 [INFO][5486] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:20:04.604646 containerd[1955]: 2024-08-05 22:20:04.602 [INFO][5480] k8s.go 621: Teardown processing complete. ContainerID="694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565" Aug 5 22:20:04.604646 containerd[1955]: time="2024-08-05T22:20:04.604615991Z" level=info msg="TearDown network for sandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" successfully" Aug 5 22:20:04.611026 containerd[1955]: time="2024-08-05T22:20:04.610970042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:20:04.611199 containerd[1955]: time="2024-08-05T22:20:04.611052298Z" level=info msg="RemovePodSandbox \"694b6707d1afd652f8b6e9d718bcf0daae4617390d6cb0053909980f244b2565\" returns successfully" Aug 5 22:20:06.308688 systemd[1]: Started sshd@12-172.31.26.236:22-139.178.89.65:40884.service - OpenSSH per-connection server daemon (139.178.89.65:40884). 
Aug 5 22:20:06.501905 sshd[5493]: Accepted publickey for core from 139.178.89.65 port 40884 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:06.503946 sshd[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:06.512299 systemd-logind[1944]: New session 13 of user core. Aug 5 22:20:06.516309 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:20:06.781625 sshd[5493]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:06.788289 systemd[1]: sshd@12-172.31.26.236:22-139.178.89.65:40884.service: Deactivated successfully. Aug 5 22:20:06.791199 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:20:06.793531 systemd-logind[1944]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:20:06.796733 systemd-logind[1944]: Removed session 13. Aug 5 22:20:06.819733 systemd[1]: Started sshd@13-172.31.26.236:22-139.178.89.65:40890.service - OpenSSH per-connection server daemon (139.178.89.65:40890). Aug 5 22:20:07.015934 sshd[5517]: Accepted publickey for core from 139.178.89.65 port 40890 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:07.016907 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:07.024142 systemd-logind[1944]: New session 14 of user core. Aug 5 22:20:07.031199 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:20:07.597613 sshd[5517]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:07.609340 systemd[1]: sshd@13-172.31.26.236:22-139.178.89.65:40890.service: Deactivated successfully. Aug 5 22:20:07.616097 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:20:07.617868 systemd-logind[1944]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:20:07.643429 systemd[1]: Started sshd@14-172.31.26.236:22-139.178.89.65:40900.service - OpenSSH per-connection server daemon (139.178.89.65:40900). 
Aug 5 22:20:07.646568 systemd-logind[1944]: Removed session 14. Aug 5 22:20:07.829997 sshd[5528]: Accepted publickey for core from 139.178.89.65 port 40900 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:07.833247 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:07.847938 systemd-logind[1944]: New session 15 of user core. Aug 5 22:20:07.854670 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:20:08.096058 sshd[5528]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:08.101371 systemd-logind[1944]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:20:08.102054 systemd[1]: sshd@14-172.31.26.236:22-139.178.89.65:40900.service: Deactivated successfully. Aug 5 22:20:08.105347 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:20:08.107500 systemd-logind[1944]: Removed session 15. Aug 5 22:20:12.048895 kubelet[3447]: I0805 22:20:12.048645 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wgdwc" podStartSLOduration=41.366711521 podCreationTimestamp="2024-08-05 22:19:23 +0000 UTC" firstStartedPulling="2024-08-05 22:19:53.280851804 +0000 UTC m=+50.164879661" lastFinishedPulling="2024-08-05 22:20:00.962735161 +0000 UTC m=+57.846763017" observedRunningTime="2024-08-05 22:20:02.337231311 +0000 UTC m=+59.221259189" watchObservedRunningTime="2024-08-05 22:20:12.048594877 +0000 UTC m=+68.932622743" Aug 5 22:20:13.136417 systemd[1]: Started sshd@15-172.31.26.236:22-139.178.89.65:58754.service - OpenSSH per-connection server daemon (139.178.89.65:58754). Aug 5 22:20:13.314340 sshd[5614]: Accepted publickey for core from 139.178.89.65 port 58754 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:13.316440 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:13.323146 systemd-logind[1944]: New session 16 of user core. 
Aug 5 22:20:13.338277 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:20:13.613348 sshd[5614]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:13.620852 systemd[1]: sshd@15-172.31.26.236:22-139.178.89.65:58754.service: Deactivated successfully. Aug 5 22:20:13.624201 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:20:13.626845 systemd-logind[1944]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:20:13.628102 systemd-logind[1944]: Removed session 16. Aug 5 22:20:18.664227 systemd[1]: Started sshd@16-172.31.26.236:22-139.178.89.65:58770.service - OpenSSH per-connection server daemon (139.178.89.65:58770). Aug 5 22:20:18.898414 sshd[5636]: Accepted publickey for core from 139.178.89.65 port 58770 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:18.903260 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:18.928546 systemd-logind[1944]: New session 17 of user core. Aug 5 22:20:18.930652 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:20:19.208639 sshd[5636]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:19.219545 systemd-logind[1944]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:20:19.220844 systemd[1]: sshd@16-172.31.26.236:22-139.178.89.65:58770.service: Deactivated successfully. Aug 5 22:20:19.224756 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:20:19.226118 systemd-logind[1944]: Removed session 17. Aug 5 22:20:24.273977 systemd[1]: Started sshd@17-172.31.26.236:22-139.178.89.65:58156.service - OpenSSH per-connection server daemon (139.178.89.65:58156). 
Aug 5 22:20:24.446809 sshd[5656]: Accepted publickey for core from 139.178.89.65 port 58156 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:24.448500 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:24.457216 systemd-logind[1944]: New session 18 of user core. Aug 5 22:20:24.461280 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:20:24.799571 sshd[5656]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:24.804617 systemd[1]: sshd@17-172.31.26.236:22-139.178.89.65:58156.service: Deactivated successfully. Aug 5 22:20:24.807549 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:20:24.809235 systemd-logind[1944]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:20:24.810753 systemd-logind[1944]: Removed session 18. Aug 5 22:20:29.841686 systemd[1]: Started sshd@18-172.31.26.236:22-139.178.89.65:58160.service - OpenSSH per-connection server daemon (139.178.89.65:58160). Aug 5 22:20:30.061252 sshd[5669]: Accepted publickey for core from 139.178.89.65 port 58160 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:30.063652 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:30.073543 systemd-logind[1944]: New session 19 of user core. Aug 5 22:20:30.082316 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:20:30.406761 sshd[5669]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:30.414299 systemd[1]: sshd@18-172.31.26.236:22-139.178.89.65:58160.service: Deactivated successfully. Aug 5 22:20:30.418071 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:20:30.421469 systemd-logind[1944]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:20:30.426700 systemd-logind[1944]: Removed session 19. 
Aug 5 22:20:35.449461 systemd[1]: Started sshd@19-172.31.26.236:22-139.178.89.65:54246.service - OpenSSH per-connection server daemon (139.178.89.65:54246). Aug 5 22:20:35.650783 sshd[5694]: Accepted publickey for core from 139.178.89.65 port 54246 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:35.655137 sshd[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:35.666080 systemd-logind[1944]: New session 20 of user core. Aug 5 22:20:35.673127 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:20:36.018344 sshd[5694]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:36.027265 systemd[1]: sshd@19-172.31.26.236:22-139.178.89.65:54246.service: Deactivated successfully. Aug 5 22:20:36.036536 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:20:36.046642 systemd-logind[1944]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:20:36.073392 systemd[1]: Started sshd@20-172.31.26.236:22-139.178.89.65:54262.service - OpenSSH per-connection server daemon (139.178.89.65:54262). Aug 5 22:20:36.075354 systemd-logind[1944]: Removed session 20. Aug 5 22:20:36.270357 sshd[5707]: Accepted publickey for core from 139.178.89.65 port 54262 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:36.272704 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:36.298287 systemd-logind[1944]: New session 21 of user core. Aug 5 22:20:36.312423 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:20:37.084219 sshd[5707]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:37.091436 systemd[1]: sshd@20-172.31.26.236:22-139.178.89.65:54262.service: Deactivated successfully. Aug 5 22:20:37.097255 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:20:37.100491 systemd-logind[1944]: Session 21 logged out. Waiting for processes to exit. 
Aug 5 22:20:37.103278 systemd-logind[1944]: Removed session 21. Aug 5 22:20:37.122236 systemd[1]: Started sshd@21-172.31.26.236:22-139.178.89.65:54268.service - OpenSSH per-connection server daemon (139.178.89.65:54268). Aug 5 22:20:37.309024 sshd[5718]: Accepted publickey for core from 139.178.89.65 port 54268 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:37.311245 sshd[5718]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:37.317527 systemd-logind[1944]: New session 22 of user core. Aug 5 22:20:37.326115 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:20:38.606978 sshd[5718]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:38.639087 systemd[1]: sshd@21-172.31.26.236:22-139.178.89.65:54268.service: Deactivated successfully. Aug 5 22:20:38.647530 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:20:38.650464 systemd-logind[1944]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:20:38.668254 systemd[1]: Started sshd@22-172.31.26.236:22-139.178.89.65:54284.service - OpenSSH per-connection server daemon (139.178.89.65:54284). Aug 5 22:20:38.671991 systemd-logind[1944]: Removed session 22. Aug 5 22:20:38.869198 sshd[5755]: Accepted publickey for core from 139.178.89.65 port 54284 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:38.870917 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:38.876756 systemd-logind[1944]: New session 23 of user core. Aug 5 22:20:38.882109 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:20:39.797771 sshd[5755]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:39.804299 systemd[1]: sshd@22-172.31.26.236:22-139.178.89.65:54284.service: Deactivated successfully. Aug 5 22:20:39.809196 systemd[1]: session-23.scope: Deactivated successfully. 
Aug 5 22:20:39.811302 systemd-logind[1944]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:20:39.812627 systemd-logind[1944]: Removed session 23. Aug 5 22:20:39.830719 systemd[1]: Started sshd@23-172.31.26.236:22-139.178.89.65:54294.service - OpenSSH per-connection server daemon (139.178.89.65:54294). Aug 5 22:20:40.025327 sshd[5766]: Accepted publickey for core from 139.178.89.65 port 54294 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:40.033225 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:40.054200 systemd-logind[1944]: New session 24 of user core. Aug 5 22:20:40.063159 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 22:20:40.425296 sshd[5766]: pam_unix(sshd:session): session closed for user core Aug 5 22:20:40.433197 systemd[1]: sshd@23-172.31.26.236:22-139.178.89.65:54294.service: Deactivated successfully. Aug 5 22:20:40.436963 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:20:40.438368 systemd-logind[1944]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:20:40.439569 systemd-logind[1944]: Removed session 24. Aug 5 22:20:41.985892 systemd[1]: run-containerd-runc-k8s.io-571a5f738025de84478f1d46e51dea47dec17a769b7545668993dd79bb85bf94-runc.UYeJpx.mount: Deactivated successfully. Aug 5 22:20:45.453248 systemd[1]: Started sshd@24-172.31.26.236:22-139.178.89.65:53886.service - OpenSSH per-connection server daemon (139.178.89.65:53886). Aug 5 22:20:45.636924 sshd[5807]: Accepted publickey for core from 139.178.89.65 port 53886 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:20:45.639327 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:20:45.654631 systemd-logind[1944]: New session 25 of user core. Aug 5 22:20:45.662208 systemd[1]: Started session-25.scope - Session 25 of User core. 
Aug 5 22:20:45.862443 sshd[5807]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:45.868184 systemd-logind[1944]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:20:45.869469 systemd[1]: sshd@24-172.31.26.236:22-139.178.89.65:53886.service: Deactivated successfully.
Aug 5 22:20:45.874417 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:20:45.876101 systemd-logind[1944]: Removed session 25.
Aug 5 22:20:48.084547 kubelet[3447]: I0805 22:20:48.083345 3447 topology_manager.go:215] "Topology Admit Handler" podUID="2badf744-89c5-43c1-9ab9-b1af5419bf3b" podNamespace="calico-apiserver" podName="calico-apiserver-65f8c79d54-kcmg2"
Aug 5 22:20:48.157322 systemd[1]: Created slice kubepods-besteffort-pod2badf744_89c5_43c1_9ab9_b1af5419bf3b.slice - libcontainer container kubepods-besteffort-pod2badf744_89c5_43c1_9ab9_b1af5419bf3b.slice.
Aug 5 22:20:48.196140 kubelet[3447]: I0805 22:20:48.195207 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2badf744-89c5-43c1-9ab9-b1af5419bf3b-calico-apiserver-certs\") pod \"calico-apiserver-65f8c79d54-kcmg2\" (UID: \"2badf744-89c5-43c1-9ab9-b1af5419bf3b\") " pod="calico-apiserver/calico-apiserver-65f8c79d54-kcmg2"
Aug 5 22:20:48.196140 kubelet[3447]: I0805 22:20:48.195437 3447 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqbdq\" (UniqueName: \"kubernetes.io/projected/2badf744-89c5-43c1-9ab9-b1af5419bf3b-kube-api-access-qqbdq\") pod \"calico-apiserver-65f8c79d54-kcmg2\" (UID: \"2badf744-89c5-43c1-9ab9-b1af5419bf3b\") " pod="calico-apiserver/calico-apiserver-65f8c79d54-kcmg2"
Aug 5 22:20:48.325902 kubelet[3447]: E0805 22:20:48.313852 3447 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Aug 5 22:20:48.359213 kubelet[3447]: E0805 22:20:48.358956 3447 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2badf744-89c5-43c1-9ab9-b1af5419bf3b-calico-apiserver-certs podName:2badf744-89c5-43c1-9ab9-b1af5419bf3b nodeName:}" failed. No retries permitted until 2024-08-05 22:20:48.835062579 +0000 UTC m=+105.719090441 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/2badf744-89c5-43c1-9ab9-b1af5419bf3b-calico-apiserver-certs") pod "calico-apiserver-65f8c79d54-kcmg2" (UID: "2badf744-89c5-43c1-9ab9-b1af5419bf3b") : secret "calico-apiserver-certs" not found
Aug 5 22:20:49.163101 containerd[1955]: time="2024-08-05T22:20:49.162072815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8c79d54-kcmg2,Uid:2badf744-89c5-43c1-9ab9-b1af5419bf3b,Namespace:calico-apiserver,Attempt:0,}"
Aug 5 22:20:49.491406 systemd-networkd[1802]: calia41387f9cd3: Link UP
Aug 5 22:20:49.491633 systemd-networkd[1802]: calia41387f9cd3: Gained carrier
Aug 5 22:20:49.500744 (udev-worker)[5849]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.381 [INFO][5830] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0 calico-apiserver-65f8c79d54- calico-apiserver 2badf744-89c5-43c1-9ab9-b1af5419bf3b 1122 0 2024-08-05 22:20:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65f8c79d54 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-26-236 calico-apiserver-65f8c79d54-kcmg2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia41387f9cd3 [] []}} ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.381 [INFO][5830] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.435 [INFO][5842] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" HandleID="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Workload="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.445 [INFO][5842] ipam_plugin.go 264: Auto assigning IP ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" HandleID="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Workload="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291930), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-26-236", "pod":"calico-apiserver-65f8c79d54-kcmg2", "timestamp":"2024-08-05 22:20:49.435099425 +0000 UTC"}, Hostname:"ip-172-31-26-236", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.445 [INFO][5842] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.445 [INFO][5842] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.445 [INFO][5842] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-26-236'
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.448 [INFO][5842] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.452 [INFO][5842] ipam.go 372: Looking up existing affinities for host host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.462 [INFO][5842] ipam.go 489: Trying affinity for 192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.466 [INFO][5842] ipam.go 155: Attempting to load block cidr=192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.469 [INFO][5842] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.469 [INFO][5842] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.471 [INFO][5842] ipam.go 1685: Creating new handle: k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.475 [INFO][5842] ipam.go 1203: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.483 [INFO][5842] ipam.go 1216: Successfully claimed IPs: [192.168.16.197/26] block=192.168.16.192/26 handle="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.483 [INFO][5842] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.16.197/26] handle="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" host="ip-172-31-26-236"
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.483 [INFO][5842] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:20:49.506894 containerd[1955]: 2024-08-05 22:20:49.483 [INFO][5842] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.16.197/26] IPv6=[] ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" HandleID="k8s-pod-network.d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Workload="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.486 [INFO][5830] k8s.go 386: Populated endpoint ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0", GenerateName:"calico-apiserver-65f8c79d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2badf744-89c5-43c1-9ab9-b1af5419bf3b", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8c79d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"", Pod:"calico-apiserver-65f8c79d54-kcmg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41387f9cd3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.487 [INFO][5830] k8s.go 387: Calico CNI using IPs: [192.168.16.197/32] ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.487 [INFO][5830] dataplane_linux.go 68: Setting the host side veth name to calia41387f9cd3 ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.489 [INFO][5830] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.489 [INFO][5830] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0", GenerateName:"calico-apiserver-65f8c79d54-", Namespace:"calico-apiserver", SelfLink:"", UID:"2badf744-89c5-43c1-9ab9-b1af5419bf3b", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 20, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65f8c79d54", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-26-236", ContainerID:"d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4", Pod:"calico-apiserver-65f8c79d54-kcmg2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia41387f9cd3", MAC:"7a:5a:49:ec:60:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:20:49.507866 containerd[1955]: 2024-08-05 22:20:49.501 [INFO][5830] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4" Namespace="calico-apiserver" Pod="calico-apiserver-65f8c79d54-kcmg2" WorkloadEndpoint="ip--172--31--26--236-k8s-calico--apiserver--65f8c79d54--kcmg2-eth0"
Aug 5 22:20:49.582710 containerd[1955]: time="2024-08-05T22:20:49.581616509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:20:49.582710 containerd[1955]: time="2024-08-05T22:20:49.581893651Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:20:49.582710 containerd[1955]: time="2024-08-05T22:20:49.581973823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:20:49.583113 containerd[1955]: time="2024-08-05T22:20:49.582465685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:20:49.628565 systemd[1]: Started cri-containerd-d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4.scope - libcontainer container d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4.
Aug 5 22:20:49.700336 containerd[1955]: time="2024-08-05T22:20:49.700280507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65f8c79d54-kcmg2,Uid:2badf744-89c5-43c1-9ab9-b1af5419bf3b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4\""
Aug 5 22:20:49.704744 containerd[1955]: time="2024-08-05T22:20:49.704708738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:20:50.914761 systemd[1]: Started sshd@25-172.31.26.236:22-139.178.89.65:33776.service - OpenSSH per-connection server daemon (139.178.89.65:33776).
Aug 5 22:20:51.153361 sshd[5906]: Accepted publickey for core from 139.178.89.65 port 33776 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:20:51.156782 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:51.179269 systemd-logind[1944]: New session 26 of user core.
Aug 5 22:20:51.181714 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:20:51.352240 systemd-networkd[1802]: calia41387f9cd3: Gained IPv6LL
Aug 5 22:20:51.808857 sshd[5906]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:51.816788 systemd[1]: sshd@25-172.31.26.236:22-139.178.89.65:33776.service: Deactivated successfully.
Aug 5 22:20:51.821186 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:20:51.823079 systemd-logind[1944]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:20:51.826014 systemd-logind[1944]: Removed session 26.
Aug 5 22:20:53.108896 containerd[1955]: time="2024-08-05T22:20:53.108414218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:20:53.119059 containerd[1955]: time="2024-08-05T22:20:53.117234381Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.412486431s"
Aug 5 22:20:53.119059 containerd[1955]: time="2024-08-05T22:20:53.117287835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:20:53.125134 containerd[1955]: time="2024-08-05T22:20:53.124756480Z" level=info msg="CreateContainer within sandbox \"d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:20:53.137681 containerd[1955]: time="2024-08-05T22:20:53.137628683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.139309 containerd[1955]: time="2024-08-05T22:20:53.138907659Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.140912 containerd[1955]: time="2024-08-05T22:20:53.140600067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:20:53.154250 containerd[1955]: time="2024-08-05T22:20:53.154209365Z" level=info msg="CreateContainer within sandbox \"d5526b8b4daef699900311bbb948b712c9b111112c653aff3aaec01c2cd1dcf4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf1898c623c415f578caf5c26e0a9043188f2ff7d80766efdc6c798450d8f756\""
Aug 5 22:20:53.155735 containerd[1955]: time="2024-08-05T22:20:53.155434471Z" level=info msg="StartContainer for \"bf1898c623c415f578caf5c26e0a9043188f2ff7d80766efdc6c798450d8f756\""
Aug 5 22:20:53.269086 systemd[1]: Started cri-containerd-bf1898c623c415f578caf5c26e0a9043188f2ff7d80766efdc6c798450d8f756.scope - libcontainer container bf1898c623c415f578caf5c26e0a9043188f2ff7d80766efdc6c798450d8f756.
Aug 5 22:20:53.327958 containerd[1955]: time="2024-08-05T22:20:53.327912684Z" level=info msg="StartContainer for \"bf1898c623c415f578caf5c26e0a9043188f2ff7d80766efdc6c798450d8f756\" returns successfully"
Aug 5 22:20:53.888000 kubelet[3447]: I0805 22:20:53.887956 3447 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65f8c79d54-kcmg2" podStartSLOduration=2.453074431 podCreationTimestamp="2024-08-05 22:20:48 +0000 UTC" firstStartedPulling="2024-08-05 22:20:49.701952189 +0000 UTC m=+106.585980033" lastFinishedPulling="2024-08-05 22:20:53.117708748 +0000 UTC m=+110.001736603" observedRunningTime="2024-08-05 22:20:53.414901031 +0000 UTC m=+110.298928894" watchObservedRunningTime="2024-08-05 22:20:53.868831001 +0000 UTC m=+110.752858866"
Aug 5 22:20:54.098744 ntpd[1939]: Listen normally on 13 calia41387f9cd3 [fe80::ecee:eeff:feee:eeee%11]:123
Aug 5 22:20:54.099188 ntpd[1939]: 5 Aug 22:20:54 ntpd[1939]: Listen normally on 13 calia41387f9cd3 [fe80::ecee:eeff:feee:eeee%11]:123
Aug 5 22:20:56.850262 systemd[1]: Started sshd@26-172.31.26.236:22-139.178.89.65:33778.service - OpenSSH per-connection server daemon (139.178.89.65:33778).
Aug 5 22:20:57.077791 sshd[5979]: Accepted publickey for core from 139.178.89.65 port 33778 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:20:57.081524 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:20:57.095864 systemd-logind[1944]: New session 27 of user core.
Aug 5 22:20:57.100145 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 22:20:57.826299 sshd[5979]: pam_unix(sshd:session): session closed for user core
Aug 5 22:20:57.834367 systemd[1]: sshd@26-172.31.26.236:22-139.178.89.65:33778.service: Deactivated successfully.
Aug 5 22:20:57.838245 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 22:20:57.839628 systemd-logind[1944]: Session 27 logged out. Waiting for processes to exit.
Aug 5 22:20:57.841971 systemd-logind[1944]: Removed session 27.
Aug 5 22:21:02.865268 systemd[1]: Started sshd@27-172.31.26.236:22-139.178.89.65:50910.service - OpenSSH per-connection server daemon (139.178.89.65:50910).
Aug 5 22:21:03.046202 sshd[6002]: Accepted publickey for core from 139.178.89.65 port 50910 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:21:03.047907 sshd[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:03.054427 systemd-logind[1944]: New session 28 of user core.
Aug 5 22:21:03.062103 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 5 22:21:03.460116 sshd[6002]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:03.468825 systemd[1]: sshd@27-172.31.26.236:22-139.178.89.65:50910.service: Deactivated successfully.
Aug 5 22:21:03.475314 systemd[1]: session-28.scope: Deactivated successfully.
Aug 5 22:21:03.482030 systemd-logind[1944]: Session 28 logged out. Waiting for processes to exit.
Aug 5 22:21:03.486682 systemd-logind[1944]: Removed session 28.
Aug 5 22:21:08.496258 systemd[1]: Started sshd@28-172.31.26.236:22-139.178.89.65:50924.service - OpenSSH per-connection server daemon (139.178.89.65:50924).
Aug 5 22:21:08.717965 sshd[6041]: Accepted publickey for core from 139.178.89.65 port 50924 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:21:08.722811 sshd[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:08.728921 systemd-logind[1944]: New session 29 of user core.
Aug 5 22:21:08.734105 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 5 22:21:08.938380 sshd[6041]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:08.943066 systemd-logind[1944]: Session 29 logged out. Waiting for processes to exit.
Aug 5 22:21:08.944325 systemd[1]: sshd@28-172.31.26.236:22-139.178.89.65:50924.service: Deactivated successfully.
Aug 5 22:21:08.947792 systemd[1]: session-29.scope: Deactivated successfully.
Aug 5 22:21:08.949554 systemd-logind[1944]: Removed session 29.
Aug 5 22:21:10.898774 systemd[1]: run-containerd-runc-k8s.io-5ceefc5b0837b4b7f486469706548f2c17bed16e7b907b5d83322722b74d8eaa-runc.52RpUx.mount: Deactivated successfully.
Aug 5 22:21:13.977381 systemd[1]: Started sshd@29-172.31.26.236:22-139.178.89.65:39944.service - OpenSSH per-connection server daemon (139.178.89.65:39944).
Aug 5 22:21:14.209912 sshd[6102]: Accepted publickey for core from 139.178.89.65 port 39944 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:21:14.212392 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:14.220678 systemd-logind[1944]: New session 30 of user core.
Aug 5 22:21:14.228611 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 5 22:21:14.602466 sshd[6102]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:14.606671 systemd[1]: sshd@29-172.31.26.236:22-139.178.89.65:39944.service: Deactivated successfully.
Aug 5 22:21:14.609583 systemd[1]: session-30.scope: Deactivated successfully.
Aug 5 22:21:14.611511 systemd-logind[1944]: Session 30 logged out. Waiting for processes to exit.
Aug 5 22:21:14.613541 systemd-logind[1944]: Removed session 30.
Aug 5 22:21:19.640246 systemd[1]: Started sshd@30-172.31.26.236:22-139.178.89.65:39950.service - OpenSSH per-connection server daemon (139.178.89.65:39950).
Aug 5 22:21:19.826681 sshd[6122]: Accepted publickey for core from 139.178.89.65 port 39950 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs
Aug 5 22:21:19.829829 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:21:19.839665 systemd-logind[1944]: New session 31 of user core.
Aug 5 22:21:19.849113 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 5 22:21:20.158174 sshd[6122]: pam_unix(sshd:session): session closed for user core
Aug 5 22:21:20.162627 systemd[1]: sshd@30-172.31.26.236:22-139.178.89.65:39950.service: Deactivated successfully.
Aug 5 22:21:20.165490 systemd[1]: session-31.scope: Deactivated successfully.
Aug 5 22:21:20.166668 systemd-logind[1944]: Session 31 logged out. Waiting for processes to exit.
Aug 5 22:21:20.170577 systemd-logind[1944]: Removed session 31.
Aug 5 22:21:47.075208 systemd[1]: cri-containerd-8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e.scope: Deactivated successfully.
Aug 5 22:21:47.076711 systemd[1]: cri-containerd-8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e.scope: Consumed 3.175s CPU time, 20.6M memory peak, 0B memory swap peak.
Aug 5 22:21:47.115329 kubelet[3447]: E0805 22:21:47.115278 3447 controller.go:193] "Failed to update lease" err="Put \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:21:47.161489 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e-rootfs.mount: Deactivated successfully.
Aug 5 22:21:47.198838 containerd[1955]: time="2024-08-05T22:21:47.161713176Z" level=info msg="shim disconnected" id=8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e namespace=k8s.io
Aug 5 22:21:47.199376 containerd[1955]: time="2024-08-05T22:21:47.198839110Z" level=warning msg="cleaning up after shim disconnected" id=8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e namespace=k8s.io
Aug 5 22:21:47.199376 containerd[1955]: time="2024-08-05T22:21:47.198860229Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:21:47.571199 systemd[1]: cri-containerd-55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031.scope: Deactivated successfully.
Aug 5 22:21:47.571525 systemd[1]: cri-containerd-55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031.scope: Consumed 6.362s CPU time.
Aug 5 22:21:47.603104 containerd[1955]: time="2024-08-05T22:21:47.603029339Z" level=info msg="shim disconnected" id=55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031 namespace=k8s.io
Aug 5 22:21:47.603331 containerd[1955]: time="2024-08-05T22:21:47.603304456Z" level=warning msg="cleaning up after shim disconnected" id=55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031 namespace=k8s.io
Aug 5 22:21:47.604935 containerd[1955]: time="2024-08-05T22:21:47.604899650Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:21:47.608828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031-rootfs.mount: Deactivated successfully.
Aug 5 22:21:47.644066 kubelet[3447]: I0805 22:21:47.644018 3447 scope.go:117] "RemoveContainer" containerID="8abb93aebe9624b6f0c5757123b5b151c8275666b098d6e1d5643adab234620e"
Aug 5 22:21:47.656301 containerd[1955]: time="2024-08-05T22:21:47.656217011Z" level=info msg="CreateContainer within sandbox \"0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 22:21:47.716564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1089847787.mount: Deactivated successfully.
Aug 5 22:21:47.723140 containerd[1955]: time="2024-08-05T22:21:47.722627564Z" level=info msg="CreateContainer within sandbox \"0bd7331028911d68a964370104bc6da9cd2054db8cd5ce4d32a4d3c0428b94b1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"22832579c60b770ccb702e0fbf43f4a0211ea5b9854d580ac96078680e35e3a0\""
Aug 5 22:21:47.728128 containerd[1955]: time="2024-08-05T22:21:47.723845640Z" level=info msg="StartContainer for \"22832579c60b770ccb702e0fbf43f4a0211ea5b9854d580ac96078680e35e3a0\""
Aug 5 22:21:47.793100 systemd[1]: Started cri-containerd-22832579c60b770ccb702e0fbf43f4a0211ea5b9854d580ac96078680e35e3a0.scope - libcontainer container 22832579c60b770ccb702e0fbf43f4a0211ea5b9854d580ac96078680e35e3a0.
Aug 5 22:21:47.886613 containerd[1955]: time="2024-08-05T22:21:47.886503910Z" level=info msg="StartContainer for \"22832579c60b770ccb702e0fbf43f4a0211ea5b9854d580ac96078680e35e3a0\" returns successfully"
Aug 5 22:21:48.647095 kubelet[3447]: I0805 22:21:48.647033 3447 scope.go:117] "RemoveContainer" containerID="55475651394e3d918c272371b70839c45fb4c869a548d150b926c03b20ee2031"
Aug 5 22:21:48.650237 containerd[1955]: time="2024-08-05T22:21:48.650196479Z" level=info msg="CreateContainer within sandbox \"0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 22:21:48.691001 containerd[1955]: time="2024-08-05T22:21:48.690362831Z" level=info msg="CreateContainer within sandbox \"0c095b236651d7a125c05440c3e66abb8baaed16f807a3cda6ffec823bf03469\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc\""
Aug 5 22:21:48.691183 containerd[1955]: time="2024-08-05T22:21:48.691153717Z" level=info msg="StartContainer for \"f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc\""
Aug 5 22:21:48.765355 systemd[1]: Started cri-containerd-f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc.scope - libcontainer container f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc.
Aug 5 22:21:48.829988 containerd[1955]: time="2024-08-05T22:21:48.829227135Z" level=info msg="StartContainer for \"f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc\" returns successfully"
Aug 5 22:21:49.161636 systemd[1]: run-containerd-runc-k8s.io-f81665ce141c76f46c1b360c0a752ecffc14ace6a2fab37694111d3eb57536dc-runc.vV9Yg9.mount: Deactivated successfully.
Aug 5 22:21:52.449715 systemd[1]: cri-containerd-1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c.scope: Deactivated successfully.
Aug 5 22:21:52.450501 systemd[1]: cri-containerd-1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c.scope: Consumed 1.945s CPU time, 17.4M memory peak, 0B memory swap peak.
Aug 5 22:21:52.485386 containerd[1955]: time="2024-08-05T22:21:52.485316566Z" level=info msg="shim disconnected" id=1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c namespace=k8s.io
Aug 5 22:21:52.485386 containerd[1955]: time="2024-08-05T22:21:52.485386123Z" level=warning msg="cleaning up after shim disconnected" id=1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c namespace=k8s.io
Aug 5 22:21:52.486103 containerd[1955]: time="2024-08-05T22:21:52.485398587Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:21:52.491257 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c-rootfs.mount: Deactivated successfully.
Aug 5 22:21:52.677382 kubelet[3447]: I0805 22:21:52.675386 3447 scope.go:117] "RemoveContainer" containerID="1fb866133f520bdb4b4967c240edbfd63129b0364ec63175578ef66e9e3a0d1c"
Aug 5 22:21:52.685455 containerd[1955]: time="2024-08-05T22:21:52.685405186Z" level=info msg="CreateContainer within sandbox \"560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 22:21:52.716978 containerd[1955]: time="2024-08-05T22:21:52.716795186Z" level=info msg="CreateContainer within sandbox \"560f7309407254cebbe21e13cb3c1f9f786ceaa443e419cd43cdb45376f2ca7f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a\""
Aug 5 22:21:52.719904 containerd[1955]: time="2024-08-05T22:21:52.718603574Z" level=info msg="StartContainer for \"220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a\""
Aug 5 22:21:52.780092 systemd[1]: Started cri-containerd-220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a.scope - libcontainer container 220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a.
Aug 5 22:21:52.860823 containerd[1955]: time="2024-08-05T22:21:52.860771059Z" level=info msg="StartContainer for \"220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a\" returns successfully"
Aug 5 22:21:53.492757 systemd[1]: run-containerd-runc-k8s.io-220000297ca89cc8c4f4b6c0d044194569c7f8130274781406242f61da69af4a-runc.ImsWbF.mount: Deactivated successfully.
Aug 5 22:21:57.116577 kubelet[3447]: E0805 22:21:57.116026 3447 controller.go:193] "Failed to update lease" err="Put \"https://172.31.26.236:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-26-236?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"