Aug 5 22:32:44.094683 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024
Aug 5 22:32:44.094723 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:32:44.094737 kernel: BIOS-provided physical RAM map:
Aug 5 22:32:44.094748 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 5 22:32:44.094757 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 5 22:32:44.094819 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 5 22:32:44.094843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Aug 5 22:32:44.094854 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Aug 5 22:32:44.094864 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Aug 5 22:32:44.094875 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 5 22:32:44.094886 kernel: NX (Execute Disable) protection: active
Aug 5 22:32:44.094897 kernel: APIC: Static calls initialized
Aug 5 22:32:44.094907 kernel: SMBIOS 2.7 present.
Aug 5 22:32:44.094918 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Aug 5 22:32:44.094934 kernel: Hypervisor detected: KVM
Aug 5 22:32:44.094946 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 5 22:32:44.095018 kernel: kvm-clock: using sched offset of 6255683519 cycles
Aug 5 22:32:44.095085 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 5 22:32:44.095101 kernel: tsc: Detected 2499.996 MHz processor
Aug 5 22:32:44.095116 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 5 22:32:44.095131 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 5 22:32:44.095150 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Aug 5 22:32:44.095213 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 5 22:32:44.095229 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 5 22:32:44.095244 kernel: Using GB pages for direct mapping
Aug 5 22:32:44.095258 kernel: ACPI: Early table checksum verification disabled
Aug 5 22:32:44.095272 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Aug 5 22:32:44.095284 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Aug 5 22:32:44.095298 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Aug 5 22:32:44.095312 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Aug 5 22:32:44.095330 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Aug 5 22:32:44.095344 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:32:44.095414 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Aug 5 22:32:44.095459 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Aug 5 22:32:44.095475 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Aug 5 22:32:44.095489 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Aug 5 22:32:44.095502 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Aug 5 22:32:44.095517 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Aug 5 22:32:44.095535 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Aug 5 22:32:44.095549 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Aug 5 22:32:44.095570 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Aug 5 22:32:44.095584 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Aug 5 22:32:44.095599 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Aug 5 22:32:44.095614 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Aug 5 22:32:44.095652 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Aug 5 22:32:44.095667 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Aug 5 22:32:44.095682 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Aug 5 22:32:44.095791 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Aug 5 22:32:44.095808 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 5 22:32:44.095823 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 5 22:32:44.095838 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Aug 5 22:32:44.095852 kernel: NUMA: Initialized distance table, cnt=1
Aug 5 22:32:44.095939 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Aug 5 22:32:44.095962 kernel: Zone ranges:
Aug 5 22:32:44.095978 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 5 22:32:44.095993 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Aug 5 22:32:44.096008 kernel: Normal empty
Aug 5 22:32:44.096023 kernel: Movable zone start for each node
Aug 5 22:32:44.096039 kernel: Early memory node ranges
Aug 5 22:32:44.096054 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 5 22:32:44.096069 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Aug 5 22:32:44.096084 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Aug 5 22:32:44.096102 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 5 22:32:44.096118 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 5 22:32:44.096133 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Aug 5 22:32:44.096148 kernel: ACPI: PM-Timer IO Port: 0xb008
Aug 5 22:32:44.096163 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 5 22:32:44.096178 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Aug 5 22:32:44.096194 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 5 22:32:44.096209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 5 22:32:44.096224 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 5 22:32:44.096242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 5 22:32:44.096257 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 5 22:32:44.096272 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 5 22:32:44.096287 kernel: TSC deadline timer available
Aug 5 22:32:44.096303 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Aug 5 22:32:44.096318 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 5 22:32:44.096333 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Aug 5 22:32:44.096349 kernel: Booting paravirtualized kernel on KVM
Aug 5 22:32:44.096364 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 5 22:32:44.096383 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 5 22:32:44.096398 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576
Aug 5 22:32:44.096413 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152
Aug 5 22:32:44.096428 kernel: pcpu-alloc: [0] 0 1
Aug 5 22:32:44.096442 kernel: kvm-guest: PV spinlocks enabled
Aug 5 22:32:44.096457 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 5 22:32:44.096474 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695
Aug 5 22:32:44.096490 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 5 22:32:44.096508 kernel: random: crng init done
Aug 5 22:32:44.096523 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 5 22:32:44.096538 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 5 22:32:44.096553 kernel: Fallback order for Node 0: 0
Aug 5 22:32:44.096568 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Aug 5 22:32:44.096583 kernel: Policy zone: DMA32
Aug 5 22:32:44.096598 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 5 22:32:44.096614 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 131296K reserved, 0K cma-reserved)
Aug 5 22:32:44.096629 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 5 22:32:44.096665 kernel: Kernel/User page tables isolation: enabled
Aug 5 22:32:44.096680 kernel: ftrace: allocating 37659 entries in 148 pages
Aug 5 22:32:44.096694 kernel: ftrace: allocated 148 pages with 3 groups
Aug 5 22:32:44.096707 kernel: Dynamic Preempt: voluntary
Aug 5 22:32:44.096719 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 5 22:32:44.096732 kernel: rcu: RCU event tracing is enabled.
Aug 5 22:32:44.096745 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 5 22:32:44.096759 kernel: Trampoline variant of Tasks RCU enabled.
Aug 5 22:32:44.096772 kernel: Rude variant of Tasks RCU enabled.
Aug 5 22:32:44.096788 kernel: Tracing variant of Tasks RCU enabled.
Aug 5 22:32:44.096891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 5 22:32:44.096910 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 5 22:32:44.096924 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 5 22:32:44.096939 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 5 22:32:44.096953 kernel: Console: colour VGA+ 80x25
Aug 5 22:32:44.096966 kernel: printk: console [ttyS0] enabled
Aug 5 22:32:44.096980 kernel: ACPI: Core revision 20230628
Aug 5 22:32:44.096995 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Aug 5 22:32:44.097015 kernel: APIC: Switch to symmetric I/O mode setup
Aug 5 22:32:44.097030 kernel: x2apic enabled
Aug 5 22:32:44.097046 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 5 22:32:44.097074 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 5 22:32:44.097094 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Aug 5 22:32:44.097111 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Aug 5 22:32:44.097127 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Aug 5 22:32:44.097143 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 5 22:32:44.097157 kernel: Spectre V2 : Mitigation: Retpolines
Aug 5 22:32:44.097172 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Aug 5 22:32:44.097189 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Aug 5 22:32:44.097206 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Aug 5 22:32:44.097222 kernel: RETBleed: Vulnerable
Aug 5 22:32:44.097242 kernel: Speculative Store Bypass: Vulnerable
Aug 5 22:32:44.097258 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:32:44.097275 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 5 22:32:44.097292 kernel: GDS: Unknown: Dependent on hypervisor status
Aug 5 22:32:44.097308 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 5 22:32:44.097324 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 5 22:32:44.097340 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 5 22:32:44.097354 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Aug 5 22:32:44.097367 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Aug 5 22:32:44.097382 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Aug 5 22:32:44.097396 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Aug 5 22:32:44.097409 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Aug 5 22:32:44.097421 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 5 22:32:44.097434 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 5 22:32:44.097448 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Aug 5 22:32:44.097460 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Aug 5 22:32:44.097473 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Aug 5 22:32:44.098376 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Aug 5 22:32:44.098403 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Aug 5 22:32:44.098417 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Aug 5 22:32:44.098431 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Aug 5 22:32:44.098444 kernel: Freeing SMP alternatives memory: 32K
Aug 5 22:32:44.098457 kernel: pid_max: default: 32768 minimum: 301
Aug 5 22:32:44.098470 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity
Aug 5 22:32:44.098483 kernel: SELinux: Initializing.
Aug 5 22:32:44.098496 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:32:44.098513 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 5 22:32:44.098527 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Aug 5 22:32:44.098547 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:32:44.098560 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:32:44.098573 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1.
Aug 5 22:32:44.098588 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Aug 5 22:32:44.098602 kernel: signal: max sigframe size: 3632
Aug 5 22:32:44.098616 kernel: rcu: Hierarchical SRCU implementation.
Aug 5 22:32:44.098647 kernel: rcu: Max phase no-delay instances is 400.
Aug 5 22:32:44.098663 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 5 22:32:44.098679 kernel: smp: Bringing up secondary CPUs ...
Aug 5 22:32:44.098699 kernel: smpboot: x86: Booting SMP configuration:
Aug 5 22:32:44.098714 kernel: .... node #0, CPUs: #1
Aug 5 22:32:44.098731 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Aug 5 22:32:44.098798 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Aug 5 22:32:44.098816 kernel: smp: Brought up 1 node, 2 CPUs
Aug 5 22:32:44.098839 kernel: smpboot: Max logical packages: 1
Aug 5 22:32:44.098855 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Aug 5 22:32:44.098871 kernel: devtmpfs: initialized
Aug 5 22:32:44.098887 kernel: x86/mm: Memory block size: 128MB
Aug 5 22:32:44.098905 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 5 22:32:44.098919 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 5 22:32:44.098933 kernel: pinctrl core: initialized pinctrl subsystem
Aug 5 22:32:44.098946 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 5 22:32:44.099013 kernel: audit: initializing netlink subsys (disabled)
Aug 5 22:32:44.099031 kernel: audit: type=2000 audit(1722897162.662:1): state=initialized audit_enabled=0 res=1
Aug 5 22:32:44.099087 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 5 22:32:44.099107 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 5 22:32:44.099124 kernel: cpuidle: using governor menu
Aug 5 22:32:44.099145 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 5 22:32:44.099159 kernel: dca service started, version 1.12.1
Aug 5 22:32:44.099175 kernel: PCI: Using configuration type 1 for base access
Aug 5 22:32:44.099192 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 5 22:32:44.099209 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 5 22:32:44.099264 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 5 22:32:44.099283 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 5 22:32:44.099300 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 5 22:32:44.099317 kernel: ACPI: Added _OSI(Module Device)
Aug 5 22:32:44.099338 kernel: ACPI: Added _OSI(Processor Device)
Aug 5 22:32:44.099416 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Aug 5 22:32:44.099483 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 5 22:32:44.099499 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Aug 5 22:32:44.099516 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 5 22:32:44.099533 kernel: ACPI: Interpreter enabled
Aug 5 22:32:44.099549 kernel: ACPI: PM: (supports S0 S5)
Aug 5 22:32:44.099566 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 5 22:32:44.099582 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 5 22:32:44.099603 kernel: PCI: Using E820 reservations for host bridge windows
Aug 5 22:32:44.099620 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Aug 5 22:32:44.099649 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 5 22:32:44.099873 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Aug 5 22:32:44.100018 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Aug 5 22:32:44.100257 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Aug 5 22:32:44.100283 kernel: acpiphp: Slot [3] registered
Aug 5 22:32:44.100306 kernel: acpiphp: Slot [4] registered
Aug 5 22:32:44.100322 kernel: acpiphp: Slot [5] registered
Aug 5 22:32:44.100339 kernel: acpiphp: Slot [6] registered
Aug 5 22:32:44.100355 kernel: acpiphp: Slot [7] registered
Aug 5 22:32:44.100371 kernel: acpiphp: Slot [8] registered
Aug 5 22:32:44.100388 kernel: acpiphp: Slot [9] registered
Aug 5 22:32:44.100404 kernel: acpiphp: Slot [10] registered
Aug 5 22:32:44.100420 kernel: acpiphp: Slot [11] registered
Aug 5 22:32:44.100437 kernel: acpiphp: Slot [12] registered
Aug 5 22:32:44.100456 kernel: acpiphp: Slot [13] registered
Aug 5 22:32:44.100473 kernel: acpiphp: Slot [14] registered
Aug 5 22:32:44.100489 kernel: acpiphp: Slot [15] registered
Aug 5 22:32:44.100505 kernel: acpiphp: Slot [16] registered
Aug 5 22:32:44.100521 kernel: acpiphp: Slot [17] registered
Aug 5 22:32:44.100537 kernel: acpiphp: Slot [18] registered
Aug 5 22:32:44.100554 kernel: acpiphp: Slot [19] registered
Aug 5 22:32:44.100570 kernel: acpiphp: Slot [20] registered
Aug 5 22:32:44.100586 kernel: acpiphp: Slot [21] registered
Aug 5 22:32:44.100602 kernel: acpiphp: Slot [22] registered
Aug 5 22:32:44.100622 kernel: acpiphp: Slot [23] registered
Aug 5 22:32:44.100659 kernel: acpiphp: Slot [24] registered
Aug 5 22:32:44.100676 kernel: acpiphp: Slot [25] registered
Aug 5 22:32:44.100692 kernel: acpiphp: Slot [26] registered
Aug 5 22:32:44.100709 kernel: acpiphp: Slot [27] registered
Aug 5 22:32:44.100725 kernel: acpiphp: Slot [28] registered
Aug 5 22:32:44.100741 kernel: acpiphp: Slot [29] registered
Aug 5 22:32:44.100758 kernel: acpiphp: Slot [30] registered
Aug 5 22:32:44.100774 kernel: acpiphp: Slot [31] registered
Aug 5 22:32:44.100794 kernel: PCI host bridge to bus 0000:00
Aug 5 22:32:44.100945 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 5 22:32:44.101071 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 5 22:32:44.101462 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 5 22:32:44.101603 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Aug 5 22:32:44.101739 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 5 22:32:44.101895 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Aug 5 22:32:44.102051 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Aug 5 22:32:44.102199 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Aug 5 22:32:44.102338 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Aug 5 22:32:44.102476 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Aug 5 22:32:44.102614 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Aug 5 22:32:44.102768 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Aug 5 22:32:44.102917 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Aug 5 22:32:44.103121 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Aug 5 22:32:44.103253 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Aug 5 22:32:44.103388 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Aug 5 22:32:44.103605 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Aug 5 22:32:44.103769 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Aug 5 22:32:44.103908 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Aug 5 22:32:44.104042 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 5 22:32:44.104192 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Aug 5 22:32:44.104376 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Aug 5 22:32:44.104530 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Aug 5 22:32:44.104705 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Aug 5 22:32:44.104727 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 5 22:32:44.104744 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 5 22:32:44.104760 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 5 22:32:44.104782 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 5 22:32:44.104798 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Aug 5 22:32:44.104815 kernel: iommu: Default domain type: Translated
Aug 5 22:32:44.104831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 5 22:32:44.104847 kernel: PCI: Using ACPI for IRQ routing
Aug 5 22:32:44.104863 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 5 22:32:44.104880 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 5 22:32:44.104895 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Aug 5 22:32:44.105029 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Aug 5 22:32:44.105254 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Aug 5 22:32:44.105396 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 5 22:32:44.105417 kernel: vgaarb: loaded
Aug 5 22:32:44.105433 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Aug 5 22:32:44.105449 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Aug 5 22:32:44.105465 kernel: clocksource: Switched to clocksource kvm-clock
Aug 5 22:32:44.105481 kernel: VFS: Disk quotas dquot_6.6.0
Aug 5 22:32:44.105556 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 5 22:32:44.105578 kernel: pnp: PnP ACPI init
Aug 5 22:32:44.105594 kernel: pnp: PnP ACPI: found 5 devices
Aug 5 22:32:44.105610 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 5 22:32:44.105626 kernel: NET: Registered PF_INET protocol family
Aug 5 22:32:44.105654 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 5 22:32:44.105671 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 5 22:32:44.105687 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 5 22:32:44.105703 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 5 22:32:44.105719 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 5 22:32:44.105740 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 5 22:32:44.105756 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:32:44.105772 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 5 22:32:44.105788 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 5 22:32:44.105804 kernel: NET: Registered PF_XDP protocol family
Aug 5 22:32:44.106086 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 5 22:32:44.106327 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 5 22:32:44.106561 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 5 22:32:44.107176 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Aug 5 22:32:44.107334 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Aug 5 22:32:44.107356 kernel: PCI: CLS 0 bytes, default 64
Aug 5 22:32:44.107373 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 5 22:32:44.107389 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Aug 5 22:32:44.107405 kernel: clocksource: Switched to clocksource tsc
Aug 5 22:32:44.107458 kernel: Initialise system trusted keyrings
Aug 5 22:32:44.107478 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 5 22:32:44.107501 kernel: Key type asymmetric registered
Aug 5 22:32:44.107516 kernel: Asymmetric key parser 'x509' registered
Aug 5 22:32:44.107532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 5 22:32:44.107549 kernel: io scheduler mq-deadline registered
Aug 5 22:32:44.107565 kernel: io scheduler kyber registered
Aug 5 22:32:44.107580 kernel: io scheduler bfq registered
Aug 5 22:32:44.107597 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 5 22:32:44.107613 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 5 22:32:44.107662 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 5 22:32:44.107679 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 5 22:32:44.107695 kernel: i8042: Warning: Keylock active
Aug 5 22:32:44.107711 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 5 22:32:44.107867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 5 22:32:44.107996 kernel: rtc_cmos 00:00: RTC can wake from S4
Aug 5 22:32:44.108122 kernel: rtc_cmos 00:00: registered as rtc0
Aug 5 22:32:44.108314 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:32:43 UTC (1722897163)
Aug 5 22:32:44.108340 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Aug 5 22:32:44.108356 kernel: intel_pstate: CPU model not supported
Aug 5 22:32:44.108372 kernel: NET: Registered PF_INET6 protocol family
Aug 5 22:32:44.108388 kernel: Segment Routing with IPv6
Aug 5 22:32:44.108404 kernel: In-situ OAM (IOAM) with IPv6
Aug 5 22:32:44.108420 kernel: NET: Registered PF_PACKET protocol family
Aug 5 22:32:44.108436 kernel: Key type dns_resolver registered
Aug 5 22:32:44.108452 kernel: IPI shorthand broadcast: enabled
Aug 5 22:32:44.108468 kernel: sched_clock: Marking stable (728003853, 350002202)->(1202779180, -124773125)
Aug 5 22:32:44.108487 kernel: registered taskstats version 1
Aug 5 22:32:44.108503 kernel: Loading compiled-in X.509 certificates
Aug 5 22:32:44.108519 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532'
Aug 5 22:32:44.108535 kernel: Key type .fscrypt registered
Aug 5 22:32:44.108551 kernel: Key type fscrypt-provisioning registered
Aug 5 22:32:44.108566 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 5 22:32:44.108582 kernel: ima: Allocated hash algorithm: sha1
Aug 5 22:32:44.108598 kernel: ima: No architecture policies found
Aug 5 22:32:44.108614 kernel: clk: Disabling unused clocks
Aug 5 22:32:44.108645 kernel: Freeing unused kernel image (initmem) memory: 49372K
Aug 5 22:32:44.108661 kernel: Write protecting the kernel read-only data: 36864k
Aug 5 22:32:44.108677 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K
Aug 5 22:32:44.108693 kernel: Run /init as init process
Aug 5 22:32:44.108708 kernel: with arguments:
Aug 5 22:32:44.108723 kernel: /init
Aug 5 22:32:44.108739 kernel: with environment:
Aug 5 22:32:44.108754 kernel: HOME=/
Aug 5 22:32:44.108769 kernel: TERM=linux
Aug 5 22:32:44.108792 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 5 22:32:44.108812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:32:44.108848 systemd[1]: Detected virtualization amazon.
Aug 5 22:32:44.108865 systemd[1]: Detected architecture x86-64.
Aug 5 22:32:44.108882 systemd[1]: Running in initrd.
Aug 5 22:32:44.108902 systemd[1]: No hostname configured, using default hostname.
Aug 5 22:32:44.108920 systemd[1]: Hostname set to .
Aug 5 22:32:44.108937 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:32:44.108954 systemd[1]: Queued start job for default target initrd.target.
Aug 5 22:32:44.109028 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:32:44.109049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:32:44.109067 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 5 22:32:44.109085 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:32:44.109107 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 5 22:32:44.109128 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 5 22:32:44.109145 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 5 22:32:44.109163 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 5 22:32:44.109181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:32:44.109198 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:32:44.109216 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:32:44.109237 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:32:44.109253 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:32:44.109267 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:32:44.109280 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:32:44.109412 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:32:44.109430 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 5 22:32:44.109447 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 5 22:32:44.109462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:32:44.109481 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:32:44.109497 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:32:44.109512 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:32:44.109512 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:32:44.109532 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:32:44.109550 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:32:44.109567 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:32:44.109582 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:32:44.109598 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:32:44.109700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:32:44.109756 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:32:44.109773 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:32:44.109788 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:32:44.109870 systemd-journald[178]: Collecting audit messages is disabled. Aug 5 22:32:44.109949 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:32:44.109969 systemd-journald[178]: Journal started Aug 5 22:32:44.110035 systemd-journald[178]: Runtime Journal (/run/log/journal/ec235acf870f779de7a2b6f68bbd968a) is 4.8M, max 38.6M, 33.8M free. Aug 5 22:32:44.113664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:32:44.118650 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:32:44.120297 systemd-modules-load[179]: Inserted module 'overlay' Aug 5 22:32:44.138827 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:32:44.170792 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:32:44.366094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 5 22:32:44.366141 kernel: Bridge firewalling registered Aug 5 22:32:44.188755 systemd-modules-load[179]: Inserted module 'br_netfilter' Aug 5 22:32:44.373260 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:32:44.375587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:32:44.378041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:32:44.391212 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:32:44.407851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:32:44.427227 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:32:44.429361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:32:44.442435 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:32:44.451819 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:32:44.492983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:32:44.496185 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:32:44.519432 dracut-cmdline[210]: dracut-dracut-053 Aug 5 22:32:44.524460 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:32:44.564340 systemd-resolved[212]: Positive Trust Anchors: Aug 5 22:32:44.565007 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:32:44.565067 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:32:44.572942 systemd-resolved[212]: Defaulting to hostname 'linux'. Aug 5 22:32:44.574571 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:32:44.579443 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:32:44.648702 kernel: SCSI subsystem initialized Aug 5 22:32:44.661663 kernel: Loading iSCSI transport class v2.0-870. 
Aug 5 22:32:44.679660 kernel: iscsi: registered transport (tcp) Aug 5 22:32:44.709665 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:32:44.709739 kernel: QLogic iSCSI HBA Driver Aug 5 22:32:44.760545 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:32:44.767869 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:32:44.807250 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:32:44.807335 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:32:44.807358 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:32:44.855679 kernel: raid6: avx512x4 gen() 15968 MB/s Aug 5 22:32:44.872696 kernel: raid6: avx512x2 gen() 17148 MB/s Aug 5 22:32:44.889722 kernel: raid6: avx512x1 gen() 15569 MB/s Aug 5 22:32:44.906735 kernel: raid6: avx2x4 gen() 12284 MB/s Aug 5 22:32:44.923713 kernel: raid6: avx2x2 gen() 15057 MB/s Aug 5 22:32:44.940688 kernel: raid6: avx2x1 gen() 10917 MB/s Aug 5 22:32:44.940775 kernel: raid6: using algorithm avx512x2 gen() 17148 MB/s Aug 5 22:32:44.957910 kernel: raid6: .... xor() 17894 MB/s, rmw enabled Aug 5 22:32:44.958069 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:32:44.986665 kernel: xor: automatically using best checksumming function avx Aug 5 22:32:45.204661 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:32:45.216242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:32:45.223904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:32:45.274188 systemd-udevd[396]: Using default interface naming scheme 'v255'. Aug 5 22:32:45.282788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:32:45.295478 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Aug 5 22:32:45.340965 dracut-pre-trigger[401]: rd.md=0: removing MD RAID activation Aug 5 22:32:45.396228 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:32:45.405173 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:32:45.489543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:32:45.512240 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:32:45.565306 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:32:45.580399 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:32:45.599021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:32:45.601937 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:32:45.613882 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:32:45.659964 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:32:45.669094 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 5 22:32:45.688986 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 5 22:32:45.689177 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:32:45.689200 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Aug 5 22:32:45.689623 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:d4:38:e5:7c:09 Aug 5 22:32:45.704672 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 5 22:32:45.704943 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 5 22:32:45.716007 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 5 22:32:45.723840 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:32:45.730881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Aug 5 22:32:45.730982 kernel: GPT:9289727 != 16777215 Aug 5 22:32:45.731009 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:32:45.731030 kernel: GPT:9289727 != 16777215 Aug 5 22:32:45.731057 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:32:45.731078 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:32:45.732607 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:32:45.733188 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:32:45.755880 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:32:45.768412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:32:45.768642 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:32:45.771603 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:32:45.783018 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:32:45.800459 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:32:45.800523 kernel: AES CTR mode by8 optimization enabled Aug 5 22:32:45.882660 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (450) Aug 5 22:32:45.917663 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (442) Aug 5 22:32:45.972487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:32:45.978904 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:32:46.018349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 5 22:32:46.039623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
Aug 5 22:32:46.042647 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:32:46.054517 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 5 22:32:46.054665 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 5 22:32:46.067505 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 22:32:46.074829 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:32:46.087563 disk-uuid[623]: Primary Header is updated. Aug 5 22:32:46.087563 disk-uuid[623]: Secondary Entries is updated. Aug 5 22:32:46.087563 disk-uuid[623]: Secondary Header is updated. Aug 5 22:32:46.096704 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:32:46.102746 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:32:46.129662 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:32:47.117192 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:32:47.120719 disk-uuid[624]: The operation has completed successfully. Aug 5 22:32:47.351569 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:32:47.351776 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:32:47.399910 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:32:47.407879 sh[967]: Success Aug 5 22:32:47.448943 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:32:47.563099 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:32:47.574001 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:32:47.577428 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 5 22:32:47.607311 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 5 22:32:47.607379 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:32:47.607400 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:32:47.607547 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:32:47.608067 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:32:47.679662 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 5 22:32:47.701081 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:32:47.704198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:32:47.714003 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:32:47.719839 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:32:47.762395 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:32:47.762464 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:32:47.762483 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:32:47.770712 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:32:47.791669 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:32:47.792623 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:32:47.803002 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:32:47.810154 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:32:47.896944 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 5 22:32:47.911769 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:32:47.951930 systemd-networkd[1159]: lo: Link UP Aug 5 22:32:47.951942 systemd-networkd[1159]: lo: Gained carrier Aug 5 22:32:47.953599 systemd-networkd[1159]: Enumeration completed Aug 5 22:32:47.954029 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:32:47.954034 systemd-networkd[1159]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:32:47.955418 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:32:47.959856 systemd[1]: Reached target network.target - Network. Aug 5 22:32:47.961801 systemd-networkd[1159]: eth0: Link UP Aug 5 22:32:47.961805 systemd-networkd[1159]: eth0: Gained carrier Aug 5 22:32:47.961817 systemd-networkd[1159]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:32:47.978754 systemd-networkd[1159]: eth0: DHCPv4 address 172.31.23.20/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 22:32:48.112814 ignition[1074]: Ignition 2.19.0 Aug 5 22:32:48.112861 ignition[1074]: Stage: fetch-offline Aug 5 22:32:48.113366 ignition[1074]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:48.115201 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:32:48.113381 ignition[1074]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:48.113863 ignition[1074]: Ignition finished successfully Aug 5 22:32:48.125150 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 5 22:32:48.148157 ignition[1170]: Ignition 2.19.0 Aug 5 22:32:48.148175 ignition[1170]: Stage: fetch Aug 5 22:32:48.148656 ignition[1170]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:48.148671 ignition[1170]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:48.148780 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:48.174181 ignition[1170]: PUT result: OK Aug 5 22:32:48.177345 ignition[1170]: parsed url from cmdline: "" Aug 5 22:32:48.177357 ignition[1170]: no config URL provided Aug 5 22:32:48.177368 ignition[1170]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:32:48.177383 ignition[1170]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:32:48.177416 ignition[1170]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:48.180954 ignition[1170]: PUT result: OK Aug 5 22:32:48.182766 ignition[1170]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 5 22:32:48.186182 ignition[1170]: GET result: OK Aug 5 22:32:48.187770 ignition[1170]: parsing config with SHA512: 3da8171a1202929784225f8f982bd044f69bb2ea9fd26445aaef886b98b88d491b30161847af6f649b114b315abc642ccc6a7f392da999b6502d09c325add351 Aug 5 22:32:48.218183 unknown[1170]: fetched base config from "system" Aug 5 22:32:48.218200 unknown[1170]: fetched base config from "system" Aug 5 22:32:48.218968 ignition[1170]: fetch: fetch complete Aug 5 22:32:48.218208 unknown[1170]: fetched user config from "aws" Aug 5 22:32:48.218976 ignition[1170]: fetch: fetch passed Aug 5 22:32:48.222749 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:32:48.219036 ignition[1170]: Ignition finished successfully Aug 5 22:32:48.232313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 5 22:32:48.271729 ignition[1177]: Ignition 2.19.0 Aug 5 22:32:48.271745 ignition[1177]: Stage: kargs Aug 5 22:32:48.272611 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:48.272738 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:48.272867 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:48.274813 ignition[1177]: PUT result: OK Aug 5 22:32:48.284537 ignition[1177]: kargs: kargs passed Aug 5 22:32:48.284596 ignition[1177]: Ignition finished successfully Aug 5 22:32:48.287457 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:32:48.295147 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:32:48.313915 ignition[1184]: Ignition 2.19.0 Aug 5 22:32:48.313930 ignition[1184]: Stage: disks Aug 5 22:32:48.314489 ignition[1184]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:48.314503 ignition[1184]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:48.314915 ignition[1184]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:48.316357 ignition[1184]: PUT result: OK Aug 5 22:32:48.321654 ignition[1184]: disks: disks passed Aug 5 22:32:48.321715 ignition[1184]: Ignition finished successfully Aug 5 22:32:48.324318 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:32:48.330110 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:32:48.331670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:32:48.335331 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:32:48.335428 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:32:48.335589 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:32:48.346331 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Aug 5 22:32:48.387191 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 22:32:48.391747 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:32:48.404162 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:32:48.576428 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 5 22:32:48.576310 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:32:48.578372 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:32:48.586807 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:32:48.599976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:32:48.603838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:32:48.603954 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:32:48.603986 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:32:48.611776 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:32:48.629762 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1212) Aug 5 22:32:48.635650 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:32:48.639272 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:32:48.639306 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:32:48.639521 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:32:48.654777 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:32:48.664465 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 22:32:49.078302 initrd-setup-root[1236]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:32:49.088867 systemd-networkd[1159]: eth0: Gained IPv6LL Aug 5 22:32:49.092752 initrd-setup-root[1243]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:32:49.101421 initrd-setup-root[1250]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:32:49.112369 initrd-setup-root[1257]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:32:49.435170 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:32:49.447786 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:32:49.450852 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:32:49.481782 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:32:49.483392 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:32:49.514400 ignition[1324]: INFO : Ignition 2.19.0 Aug 5 22:32:49.514400 ignition[1324]: INFO : Stage: mount Aug 5 22:32:49.514400 ignition[1324]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:49.514400 ignition[1324]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:49.523819 ignition[1324]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:49.523819 ignition[1324]: INFO : PUT result: OK Aug 5 22:32:49.526365 ignition[1324]: INFO : mount: mount passed Aug 5 22:32:49.526365 ignition[1324]: INFO : Ignition finished successfully Aug 5 22:32:49.529969 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:32:49.539906 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:32:49.552330 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:32:49.568119 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 5 22:32:49.599699 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1337) Aug 5 22:32:49.602323 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:32:49.602389 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:32:49.602455 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:32:49.608863 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:32:49.612866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:32:49.643935 ignition[1355]: INFO : Ignition 2.19.0 Aug 5 22:32:49.643935 ignition[1355]: INFO : Stage: files Aug 5 22:32:49.643935 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:32:49.643935 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:32:49.643935 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:32:49.649396 ignition[1355]: INFO : PUT result: OK Aug 5 22:32:49.651983 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:32:49.653650 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:32:49.655116 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:32:49.675771 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:32:49.677300 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:32:49.677300 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:32:49.676330 unknown[1355]: wrote ssh authorized keys file for user: core Aug 5 22:32:49.697337 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 5 22:32:49.701545 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:32:49.796767 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:32:49.944656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:32:49.944656 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:32:49.950930 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Aug 5 22:32:50.371195 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:32:51.352954 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Aug 5 22:32:51.352954 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 5 22:32:51.358624 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:32:51.358624 ignition[1355]: INFO : files: files passed Aug 5 22:32:51.358624 ignition[1355]: INFO : Ignition finished successfully Aug 5 22:32:51.373574 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:32:51.382897 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:32:51.397344 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:32:51.414202 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:32:51.415135 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:32:51.434248 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:32:51.434248 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:32:51.441252 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:32:51.442076 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:32:51.452285 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:32:51.459018 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:32:51.516284 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:32:51.516418 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:32:51.519624 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:32:51.522456 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 5 22:32:51.524428 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:32:51.532918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:32:51.554291 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:32:51.571376 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:32:51.599836 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:32:51.601473 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:32:51.604389 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:32:51.606601 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:32:51.606741 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:32:51.613265 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:32:51.615832 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:32:51.619264 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:32:51.621421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:32:51.625152 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:32:51.625332 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:32:51.630506 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:32:51.632414 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:32:51.638103 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:32:51.639171 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:32:51.640883 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Aug 5 22:32:51.641011 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 5 22:32:51.643405 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:32:51.645434 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:32:51.649482 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 5 22:32:51.649558 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:32:51.653221 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 5 22:32:51.653341 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 5 22:32:51.658051 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 5 22:32:51.658156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 5 22:32:51.669026 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 5 22:32:51.670850 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 5 22:32:51.683018 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 5 22:32:51.686735 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 5 22:32:51.690250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 5 22:32:51.690549 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:32:51.699557 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 5 22:32:51.718472 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 5 22:32:51.748763 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 5 22:32:51.748872 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 5 22:32:51.800781 ignition[1407]: INFO : Ignition 2.19.0
Aug 5 22:32:51.800781 ignition[1407]: INFO : Stage: umount
Aug 5 22:32:51.803535 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 5 22:32:51.803535 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Aug 5 22:32:51.803535 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Aug 5 22:32:51.803535 ignition[1407]: INFO : PUT result: OK
Aug 5 22:32:51.811664 ignition[1407]: INFO : umount: umount passed
Aug 5 22:32:51.811664 ignition[1407]: INFO : Ignition finished successfully
Aug 5 22:32:51.815754 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 5 22:32:51.816448 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 5 22:32:51.820775 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 5 22:32:51.820946 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 5 22:32:51.822562 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 5 22:32:51.822720 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 5 22:32:51.830024 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 5 22:32:51.830426 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 5 22:32:51.833082 systemd[1]: Stopped target network.target - Network.
Aug 5 22:32:51.835766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 5 22:32:51.835837 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 5 22:32:51.837470 systemd[1]: Stopped target paths.target - Path Units.
Aug 5 22:32:51.839948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 5 22:32:51.841018 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:32:51.842623 systemd[1]: Stopped target slices.target - Slice Units.
Aug 5 22:32:51.844344 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 5 22:32:51.847141 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 5 22:32:51.847193 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 5 22:32:51.850386 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 5 22:32:51.850437 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 5 22:32:51.860480 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 5 22:32:51.860546 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 5 22:32:51.863007 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 5 22:32:51.863112 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 5 22:32:51.871314 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 5 22:32:51.876858 systemd-networkd[1159]: eth0: DHCPv6 lease lost
Aug 5 22:32:51.878048 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 5 22:32:51.894708 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 5 22:32:51.900897 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 5 22:32:51.901008 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 5 22:32:51.906615 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 5 22:32:51.907695 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 5 22:32:51.911028 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 5 22:32:51.911109 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:32:51.914160 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 5 22:32:51.914220 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 5 22:32:51.922045 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 5 22:32:51.924452 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 5 22:32:51.924536 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 5 22:32:51.927975 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:32:51.944197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 5 22:32:51.944433 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 5 22:32:51.960324 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 5 22:32:51.962087 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:32:51.969802 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 5 22:32:51.969876 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:32:51.972835 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 5 22:32:51.972888 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:32:51.975249 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 5 22:32:51.975323 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 5 22:32:51.978518 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 5 22:32:51.978647 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 5 22:32:51.981903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 5 22:32:51.981977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 5 22:32:51.999846 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 5 22:32:52.001395 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 5 22:32:52.001464 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:32:52.003884 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 5 22:32:52.003949 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:32:52.005093 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 5 22:32:52.005136 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:32:52.006552 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 5 22:32:52.006596 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:32:52.007842 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 5 22:32:52.007879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:32:52.009145 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 5 22:32:52.009183 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:32:52.010595 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 5 22:32:52.010645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:32:52.014605 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 5 22:32:52.014769 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 5 22:32:52.032036 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 5 22:32:52.032237 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 5 22:32:52.034325 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 5 22:32:52.046119 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 5 22:32:52.055627 systemd[1]: Switching root.
Aug 5 22:32:52.118581 systemd-journald[178]: Journal stopped
Aug 5 22:32:54.177519 systemd-journald[178]: Received SIGTERM from PID 1 (systemd).
Aug 5 22:32:54.177601 kernel: SELinux: policy capability network_peer_controls=1
Aug 5 22:32:54.177624 kernel: SELinux: policy capability open_perms=1
Aug 5 22:32:54.196732 kernel: SELinux: policy capability extended_socket_class=1
Aug 5 22:32:54.196782 kernel: SELinux: policy capability always_check_network=0
Aug 5 22:32:54.196803 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 5 22:32:54.196879 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 5 22:32:54.196901 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 5 22:32:54.197856 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 5 22:32:54.197888 kernel: audit: type=1403 audit(1722897172.548:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 5 22:32:54.198012 systemd[1]: Successfully loaded SELinux policy in 59.237ms.
Aug 5 22:32:54.198053 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.071ms.
Aug 5 22:32:54.198077 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 5 22:32:54.198101 systemd[1]: Detected virtualization amazon.
Aug 5 22:32:54.198122 systemd[1]: Detected architecture x86-64.
Aug 5 22:32:54.198149 systemd[1]: Detected first boot.
Aug 5 22:32:54.198171 systemd[1]: Initializing machine ID from VM UUID.
Aug 5 22:32:54.198229 zram_generator::config[1450]: No configuration found.
Aug 5 22:32:54.198258 systemd[1]: Populated /etc with preset unit settings.
Aug 5 22:32:54.198281 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 5 22:32:54.198391 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 5 22:32:54.198417 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 5 22:32:54.198442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 5 22:32:54.198534 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 5 22:32:54.198561 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 5 22:32:54.198583 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:32:54.198606 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 5 22:32:54.198630 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 5 22:32:54.198672 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 5 22:32:54.198694 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 5 22:32:54.198716 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 5 22:32:54.198737 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 5 22:32:54.198765 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 5 22:32:54.198786 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 5 22:32:54.198808 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 5 22:32:54.198831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 5 22:32:54.198852 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 5 22:32:54.198873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 5 22:32:54.198896 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 5 22:32:54.198930 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 5 22:32:54.198958 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 5 22:32:54.198983 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 5 22:32:54.199009 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 5 22:32:54.199131 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 5 22:32:54.199157 systemd[1]: Reached target slices.target - Slice Units.
Aug 5 22:32:54.199179 systemd[1]: Reached target swap.target - Swaps.
Aug 5 22:32:54.199267 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 5 22:32:54.199292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 5 22:32:54.199317 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 5 22:32:54.199339 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 5 22:32:54.199361 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 5 22:32:54.199383 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 5 22:32:54.199405 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 5 22:32:54.199428 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 5 22:32:54.199451 systemd[1]: Mounting media.mount - External Media Directory...
Aug 5 22:32:54.199516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:54.199539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 5 22:32:54.220728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 5 22:32:54.220851 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 5 22:32:54.220901 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 5 22:32:54.220921 systemd[1]: Reached target machines.target - Containers.
Aug 5 22:32:54.220941 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 5 22:32:54.220960 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:32:54.220979 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 5 22:32:54.220998 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 5 22:32:54.221016 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:32:54.221115 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:32:54.221135 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:32:54.221153 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 5 22:32:54.221177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:32:54.221195 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 5 22:32:54.221214 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 5 22:32:54.221235 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 5 22:32:54.221256 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 5 22:32:54.221326 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 5 22:32:54.221350 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 5 22:32:54.221372 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 5 22:32:54.221393 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 5 22:32:54.221412 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 5 22:32:54.221431 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 5 22:32:54.221452 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 5 22:32:54.221472 systemd[1]: Stopped verity-setup.service.
Aug 5 22:32:54.221492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:54.221516 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 5 22:32:54.221535 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 5 22:32:54.221555 systemd[1]: Mounted media.mount - External Media Directory.
Aug 5 22:32:54.221980 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 5 22:32:54.222008 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 5 22:32:54.222037 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 5 22:32:54.222060 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 5 22:32:54.222083 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 5 22:32:54.223693 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 5 22:32:54.223973 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:32:54.224004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:32:54.224027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:32:54.224179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:32:54.224211 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 5 22:32:54.224235 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 5 22:32:54.224262 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 5 22:32:54.224285 kernel: loop: module loaded
Aug 5 22:32:54.224308 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 5 22:32:54.224329 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 5 22:32:54.224352 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 5 22:32:54.224374 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Aug 5 22:32:54.224442 systemd-journald[1521]: Collecting audit messages is disabled.
Aug 5 22:32:54.224480 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 5 22:32:54.224501 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 5 22:32:54.237245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:32:54.237285 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 5 22:32:54.237313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:32:54.237333 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 5 22:32:54.237354 systemd-journald[1521]: Journal started
Aug 5 22:32:54.237395 systemd-journald[1521]: Runtime Journal (/run/log/journal/ec235acf870f779de7a2b6f68bbd968a) is 4.8M, max 38.6M, 33.8M free.
Aug 5 22:32:53.578721 systemd[1]: Queued start job for default target multi-user.target.
Aug 5 22:32:54.247342 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 5 22:32:53.636619 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Aug 5 22:32:53.637107 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 5 22:32:54.279310 kernel: fuse: init (API version 7.39)
Aug 5 22:32:54.284668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 5 22:32:54.292917 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 5 22:32:54.292357 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 5 22:32:54.292582 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 5 22:32:54.294444 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:32:54.296053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:32:54.297574 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 5 22:32:54.298989 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 5 22:32:54.300611 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 5 22:32:54.383820 kernel: loop0: detected capacity change from 0 to 210664
Aug 5 22:32:54.390982 kernel: block loop0: the capability attribute has been deprecated.
Aug 5 22:32:54.353172 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 5 22:32:54.357529 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 5 22:32:54.368039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 5 22:32:54.383709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 5 22:32:54.410855 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Aug 5 22:32:54.417559 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:32:54.428343 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 5 22:32:54.431288 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 5 22:32:54.442713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 5 22:32:54.456907 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Aug 5 22:32:54.492344 kernel: ACPI: bus type drm_connector registered
Aug 5 22:32:54.486652 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:32:54.487237 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:32:54.531958 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 5 22:32:54.547944 systemd-journald[1521]: Time spent on flushing to /var/log/journal/ec235acf870f779de7a2b6f68bbd968a is 133.171ms for 967 entries.
Aug 5 22:32:54.547944 systemd-journald[1521]: System Journal (/var/log/journal/ec235acf870f779de7a2b6f68bbd968a) is 8.0M, max 195.6M, 187.6M free.
Aug 5 22:32:54.693880 systemd-journald[1521]: Received client request to flush runtime journal.
Aug 5 22:32:54.693948 kernel: loop1: detected capacity change from 0 to 139760
Aug 5 22:32:54.580467 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 5 22:32:54.596427 udevadm[1576]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Aug 5 22:32:54.605575 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Aug 5 22:32:54.605611 systemd-tmpfiles[1546]: ACLs are not supported, ignoring.
Aug 5 22:32:54.631444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 5 22:32:54.644127 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 5 22:32:54.657357 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 5 22:32:54.697895 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 5 22:32:54.729205 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 5 22:32:54.730374 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Aug 5 22:32:54.765744 kernel: loop2: detected capacity change from 0 to 80568
Aug 5 22:32:54.796558 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 5 22:32:54.816492 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 5 22:32:54.869853 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Aug 5 22:32:54.869882 systemd-tmpfiles[1602]: ACLs are not supported, ignoring.
Aug 5 22:32:54.897397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 5 22:32:54.911670 kernel: loop3: detected capacity change from 0 to 60984
Aug 5 22:32:54.988945 kernel: loop4: detected capacity change from 0 to 210664
Aug 5 22:32:55.035917 kernel: loop5: detected capacity change from 0 to 139760
Aug 5 22:32:55.073893 kernel: loop6: detected capacity change from 0 to 80568
Aug 5 22:32:55.090674 kernel: loop7: detected capacity change from 0 to 60984
Aug 5 22:32:55.114241 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Aug 5 22:32:55.115026 (sd-merge)[1608]: Merged extensions into '/usr'.
Aug 5 22:32:55.125617 systemd[1]: Reloading requested from client PID 1545 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 5 22:32:55.125652 systemd[1]: Reloading...
Aug 5 22:32:55.311787 zram_generator::config[1635]: No configuration found.
Aug 5 22:32:55.699701 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:32:55.850090 systemd[1]: Reloading finished in 723 ms.
Aug 5 22:32:55.888128 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 5 22:32:55.905860 systemd[1]: Starting ensure-sysext.service...
Aug 5 22:32:55.919877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories...
Aug 5 22:32:55.955946 systemd[1]: Reloading requested from client PID 1680 ('systemctl') (unit ensure-sysext.service)...
Aug 5 22:32:55.955974 systemd[1]: Reloading...
Aug 5 22:32:56.050200 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 5 22:32:56.055927 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 5 22:32:56.057507 systemd-tmpfiles[1681]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 5 22:32:56.058993 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Aug 5 22:32:56.059087 systemd-tmpfiles[1681]: ACLs are not supported, ignoring.
Aug 5 22:32:56.065936 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:32:56.065956 systemd-tmpfiles[1681]: Skipping /boot
Aug 5 22:32:56.128559 systemd-tmpfiles[1681]: Detected autofs mount point /boot during canonicalization of boot.
Aug 5 22:32:56.128575 systemd-tmpfiles[1681]: Skipping /boot
Aug 5 22:32:56.226667 zram_generator::config[1716]: No configuration found.
Aug 5 22:32:56.294818 ldconfig[1536]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 5 22:32:56.407118 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:32:56.533722 systemd[1]: Reloading finished in 577 ms.
Aug 5 22:32:56.556110 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 5 22:32:56.558515 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 5 22:32:56.564407 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories.
Aug 5 22:32:56.596416 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Aug 5 22:32:56.609943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 5 22:32:56.624990 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 5 22:32:56.639418 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 5 22:32:56.657915 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 5 22:32:56.673377 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 5 22:32:56.694050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.694429 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:32:56.710833 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:32:56.719237 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 5 22:32:56.727081 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 5 22:32:56.728462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:32:56.728672 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.742960 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.743370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:32:56.743886 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:32:56.757854 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 5 22:32:56.759839 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.761230 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:32:56.761432 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:32:56.775006 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.775591 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 5 22:32:56.784021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 5 22:32:56.790110 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 5 22:32:56.792763 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 5 22:32:56.793111 systemd[1]: Reached target time-set.target - System Time Set.
Aug 5 22:32:56.794351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 5 22:32:56.801586 systemd[1]: Finished ensure-sysext.service.
Aug 5 22:32:56.842966 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 5 22:32:56.843564 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 5 22:32:56.854867 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 5 22:32:56.859460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 5 22:32:56.865854 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 5 22:32:56.866134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 5 22:32:56.871196 systemd-udevd[1770]: Using default interface naming scheme 'v255'.
Aug 5 22:32:56.883855 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 5 22:32:56.894886 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 5 22:32:56.900250 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 5 22:32:56.901174 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 5 22:32:56.917487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 5 22:32:56.917876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 5 22:32:56.927655 augenrules[1795]: No rules
Aug 5 22:32:56.932798 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Aug 5 22:32:56.935060 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 5 22:32:56.948911 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 5 22:32:56.952917 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 5 22:32:56.956833 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 5 22:32:56.975434 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 5 22:32:56.984239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 5 22:32:57.001880 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 5 22:32:57.159709 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 5 22:32:57.175657 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1816)
Aug 5 22:32:57.180032 (udev-worker)[1822]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:32:57.237041 systemd-networkd[1807]: lo: Link UP
Aug 5 22:32:57.237054 systemd-networkd[1807]: lo: Gained carrier
Aug 5 22:32:57.241404 systemd-networkd[1807]: Enumeration completed
Aug 5 22:32:57.241567 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 5 22:32:57.246333 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:32:57.246349 systemd-networkd[1807]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 5 22:32:57.249935 systemd-networkd[1807]: eth0: Link UP
Aug 5 22:32:57.250188 systemd-networkd[1807]: eth0: Gained carrier
Aug 5 22:32:57.250218 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:32:57.255210 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 5 22:32:57.259777 systemd-networkd[1807]: eth0: DHCPv4 address 172.31.23.20/20, gateway 172.31.16.1 acquired from 172.31.16.1
Aug 5 22:32:57.276958 systemd-resolved[1765]: Positive Trust Anchors:
Aug 5 22:32:57.283096 systemd-resolved[1765]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 5 22:32:57.283162 systemd-resolved[1765]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test
Aug 5 22:32:57.306161 systemd-resolved[1765]: Defaulting to hostname 'linux'.
Aug 5 22:32:57.311539 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 5 22:32:57.313114 systemd[1]: Reached target network.target - Network.
Aug 5 22:32:57.314114 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 5 22:32:57.360386 systemd-networkd[1807]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 5 22:32:57.381105 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Aug 5 22:32:57.402842 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Aug 5 22:32:57.412691 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3
Aug 5 22:32:57.419699 kernel: ACPI: button: Power Button [PWRF]
Aug 5 22:32:57.419841 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Aug 5 22:32:57.421832 kernel: ACPI: button: Sleep Button [SLPF]
Aug 5 22:32:57.457781 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1821)
Aug 5 22:32:57.583839 kernel: mousedev: PS/2 mouse device common for all mice
Aug 5 22:32:57.587446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 5 22:32:57.714748 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Aug 5 22:32:57.723753 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Aug 5 22:32:57.730972 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Aug 5 22:32:57.734021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 5 22:32:57.793795 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 5 22:32:57.801517 lvm[1923]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:32:57.834405 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Aug 5 22:32:57.945930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 5 22:32:57.953056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Aug 5 22:32:57.955326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 5 22:32:57.960478 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 5 22:32:57.962515 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 5 22:32:57.964232 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 5 22:32:57.966044 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 5 22:32:57.968868 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 5 22:32:57.970543 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 5 22:32:57.971789 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 5 22:32:57.971833 systemd[1]: Reached target paths.target - Path Units.
Aug 5 22:32:57.973171 systemd[1]: Reached target timers.target - Timer Units.
Aug 5 22:32:57.982184 lvm[1930]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Aug 5 22:32:57.984592 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 5 22:32:57.993504 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 5 22:32:58.017796 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 5 22:32:58.022627 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 5 22:32:58.025481 systemd[1]: Reached target sockets.target - Socket Units.
Aug 5 22:32:58.026570 systemd[1]: Reached target basic.target - Basic System.
Aug 5 22:32:58.028360 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:32:58.028436 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 5 22:32:58.041821 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 5 22:32:58.047526 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 5 22:32:58.052026 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 5 22:32:58.070192 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 5 22:32:58.074973 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 5 22:32:58.077750 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 5 22:32:58.080076 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 5 22:32:58.083850 jq[1937]: false
Aug 5 22:32:58.096934 systemd[1]: Started ntpd.service - Network Time Service.
Aug 5 22:32:58.103973 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 5 22:32:58.106793 systemd[1]: Starting setup-oem.service - Setup OEM...
Aug 5 22:32:58.109817 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 5 22:32:58.114975 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 5 22:32:58.129904 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 5 22:32:58.131578 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 5 22:32:58.132283 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 5 22:32:58.154868 systemd[1]: Starting update-engine.service - Update Engine...
Aug 5 22:32:58.164278 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 5 22:32:58.168141 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Aug 5 22:32:58.186113 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 5 22:32:58.186554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 5 22:32:58.273810 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 5 22:32:58.274205 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 5 22:32:58.294554 jq[1947]: true
Aug 5 22:32:58.292757 (ntainerd)[1959]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 5 22:32:58.317039 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 5 22:32:58.316499 dbus-daemon[1936]: [system] SELinux support is enabled
Aug 5 22:32:58.325348 extend-filesystems[1938]: Found loop4
Aug 5 22:32:58.325348 extend-filesystems[1938]: Found loop5
Aug 5 22:32:58.325348 extend-filesystems[1938]: Found loop6
Aug 5 22:32:58.325348 extend-filesystems[1938]: Found loop7
Aug 5 22:32:58.325348 extend-filesystems[1938]: Found nvme0n1
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p1
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p2
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p3
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found usr
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p4
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p6
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p7
Aug 5 22:32:58.349799 extend-filesystems[1938]: Found nvme0n1p9
Aug 5 22:32:58.349799 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9
Aug 5 22:32:58.358331 jq[1962]: true
Aug 5 22:32:58.326058 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 5 22:32:58.326096 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 5 22:32:58.340329 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 5 22:32:58.340361 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 5 22:32:58.363534 dbus-daemon[1936]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1807 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 5 22:32:58.376506 update_engine[1946]: I0805 22:32:58.367355  1946 main.cc:92] Flatcar Update Engine starting
Aug 5 22:32:58.367177 systemd[1]: motdgen.service: Deactivated successfully.
Aug 5 22:32:58.367450 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 5 22:32:58.385147 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.systemd1'
Aug 5 22:32:58.390192 tar[1949]: linux-amd64/helm
Aug 5 22:32:58.402153 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 5 22:32:58.412402 systemd[1]: Started update-engine.service - Update Engine.
Aug 5 22:32:58.417874 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Mon Aug  5 19:55:33 UTC 2024 (1): Starting
Aug 5 22:32:58.417953 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Mon Aug  5 19:55:33 UTC 2024 (1): Starting
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: ----------------------------------------------------
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: corporation. Support and training for ntp-4 are
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: available at https://www.nwtime.org/support
Aug 5 22:32:58.418460 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: ----------------------------------------------------
Aug 5 22:32:58.417965 ntpd[1940]: ----------------------------------------------------
Aug 5 22:32:58.417974 ntpd[1940]: ntp-4 is maintained by Network Time Foundation,
Aug 5 22:32:58.417983 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Aug 5 22:32:58.417992 ntpd[1940]: corporation. Support and training for ntp-4 are
Aug 5 22:32:58.418000 ntpd[1940]: available at https://www.nwtime.org/support
Aug 5 22:32:58.418009 ntpd[1940]: ----------------------------------------------------
Aug 5 22:32:58.423749 update_engine[1946]: I0805 22:32:58.422142  1946 update_check_scheduler.cc:74] Next update check in 8m23s
Aug 5 22:32:58.442671 ntpd[1940]: proto: precision = 0.097 usec (-23)
Aug 5 22:32:58.442943 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: proto: precision = 0.097 usec (-23)
Aug 5 22:32:58.443270 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 5 22:32:58.450560 ntpd[1940]: basedate set to 2024-07-24
Aug 5 22:32:58.450591 ntpd[1940]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:32:58.451074 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: basedate set to 2024-07-24
Aug 5 22:32:58.451074 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: gps base set to 2024-07-28 (week 2325)
Aug 5 22:32:58.467165 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9
Aug 5 22:32:58.468362 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:32:58.468537 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123
Aug 5 22:32:58.468537 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:32:58.468430 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Aug 5 22:32:58.472306 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listen normally on 3 eth0 172.31.23.20:123
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listen normally on 4 lo [::1]:123
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: bind(21) AF_INET6 fe80::4d4:38ff:fee5:7c09%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: unable to create socket on eth0 (5) for fe80::4d4:38ff:fee5:7c09%2#123
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: failed to init interface for address fe80::4d4:38ff:fee5:7c09%2
Aug 5 22:32:58.473787 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:32:58.472364 ntpd[1940]: Listen normally on 3 eth0 172.31.23.20:123
Aug 5 22:32:58.472411 ntpd[1940]: Listen normally on 4 lo [::1]:123
Aug 5 22:32:58.472463 ntpd[1940]: bind(21) AF_INET6 fe80::4d4:38ff:fee5:7c09%2#123 flags 0x11 failed: Cannot assign requested address
Aug 5 22:32:58.472484 ntpd[1940]: unable to create socket on eth0 (5) for fe80::4d4:38ff:fee5:7c09%2#123
Aug 5 22:32:58.472500 ntpd[1940]: failed to init interface for address fe80::4d4:38ff:fee5:7c09%2
Aug 5 22:32:58.472534 ntpd[1940]: Listening on routing socket on fd #21 for interface updates
Aug 5 22:32:58.481570 extend-filesystems[1991]: resize2fs 1.47.0 (5-Feb-2023)
Aug 5 22:32:58.488467 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:32:58.491669 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Aug 5 22:32:58.491748 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:32:58.491748 ntpd[1940]: 5 Aug 22:32:58 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:32:58.489027 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Aug 5 22:32:58.500509 coreos-metadata[1935]: Aug 05 22:32:58.500 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:32:58.502705 coreos-metadata[1935]: Aug 05 22:32:58.502 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Aug 5 22:32:58.503614 coreos-metadata[1935]: Aug 05 22:32:58.503 INFO Fetch successful
Aug 5 22:32:58.503832 coreos-metadata[1935]: Aug 05 22:32:58.503 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Aug 5 22:32:58.504619 coreos-metadata[1935]: Aug 05 22:32:58.504 INFO Fetch successful
Aug 5 22:32:58.504619 coreos-metadata[1935]: Aug 05 22:32:58.504 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Aug 5 22:32:58.505234 coreos-metadata[1935]: Aug 05 22:32:58.505 INFO Fetch successful
Aug 5 22:32:58.505234 coreos-metadata[1935]: Aug 05 22:32:58.505 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Aug 5 22:32:58.508655 coreos-metadata[1935]: Aug 05 22:32:58.507 INFO Fetch successful
Aug 5 22:32:58.508655 coreos-metadata[1935]: Aug 05 22:32:58.508 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Aug 5 22:32:58.513122 coreos-metadata[1935]: Aug 05 22:32:58.512 INFO Fetch failed with 404: resource not found
Aug 5 22:32:58.513122 coreos-metadata[1935]: Aug 05 22:32:58.512 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Aug 5 22:32:58.516812 coreos-metadata[1935]: Aug 05 22:32:58.516 INFO Fetch successful
Aug 5 22:32:58.518513 coreos-metadata[1935]: Aug 05 22:32:58.518 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Aug 5 22:32:58.519096 coreos-metadata[1935]: Aug 05 22:32:58.519 INFO Fetch successful
Aug 5 22:32:58.519096 coreos-metadata[1935]: Aug 05 22:32:58.519 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Aug 5 22:32:58.560224 coreos-metadata[1935]: Aug 05 22:32:58.543 INFO Fetch successful
Aug 5 22:32:58.560224 coreos-metadata[1935]: Aug 05 22:32:58.543 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Aug 5 22:32:58.572798 systemd-networkd[1807]: eth0: Gained IPv6LL
Aug 5 22:32:58.579114 coreos-metadata[1935]: Aug 05 22:32:58.575 INFO Fetch successful
Aug 5 22:32:58.579114 coreos-metadata[1935]: Aug 05 22:32:58.576 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Aug 5 22:32:58.583839 coreos-metadata[1935]: Aug 05 22:32:58.583 INFO Fetch successful
Aug 5 22:32:58.584316 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 5 22:32:58.588380 systemd[1]: Reached target network-online.target - Network is Online.
Aug 5 22:32:58.598295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:32:58.601521 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 5 22:32:58.622914 systemd[1]: Finished setup-oem.service - Setup OEM.
Aug 5 22:32:58.642468 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Aug 5 22:32:58.686226 locksmithd[1982]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 5 22:32:58.698849 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Aug 5 22:32:58.718930 extend-filesystems[1991]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Aug 5 22:32:58.718930 extend-filesystems[1991]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 5 22:32:58.718930 extend-filesystems[1991]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Aug 5 22:32:58.740198 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9
Aug 5 22:32:58.742760 bash[2017]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:32:58.729859 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 5 22:32:58.733724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 5 22:32:58.744419 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 5 22:32:58.764538 systemd[1]: Starting sshkeys.service...
Aug 5 22:32:58.819975 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Aug 5 22:32:58.836511 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 5 22:32:58.839385 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 5 22:32:58.886762 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1822)
Aug 5 22:32:58.894723 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 5 22:32:58.907189 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 5 22:32:58.921686 systemd-logind[1945]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 5 22:32:58.921715 systemd-logind[1945]: Watching system buttons on /dev/input/event3 (Sleep Button)
Aug 5 22:32:58.921739 systemd-logind[1945]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 5 22:32:58.931709 systemd-logind[1945]: New seat seat0.
Aug 5 22:32:58.932605 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 5 22:32:59.018249 coreos-metadata[2043]: Aug 05 22:32:59.018 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Aug 5 22:32:59.056846 coreos-metadata[2043]: Aug 05 22:32:59.055 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Aug 5 22:32:59.057962 coreos-metadata[2043]: Aug 05 22:32:59.057 INFO Fetch successful
Aug 5 22:32:59.057962 coreos-metadata[2043]: Aug 05 22:32:59.057 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Aug 5 22:32:59.087119 coreos-metadata[2043]: Aug 05 22:32:59.085 INFO Fetch successful
Aug 5 22:32:59.087913 unknown[2043]: wrote ssh authorized keys file for user: core
Aug 5 22:32:59.164842 amazon-ssm-agent[2016]: Initializing new seelog logger
Aug 5 22:32:59.168243 amazon-ssm-agent[2016]: New Seelog Logger Creation Complete
Aug 5 22:32:59.168243 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.168243 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.168512 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 processing appconfig overrides
Aug 5 22:32:59.181459 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.181459 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.181459 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 processing appconfig overrides
Aug 5 22:32:59.182225 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.182225 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.182225 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 processing appconfig overrides
Aug 5 22:32:59.182735 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO Proxy environment variables:
Aug 5 22:32:59.201391 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.201391 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Aug 5 22:32:59.201391 amazon-ssm-agent[2016]: 2024/08/05 22:32:59 processing appconfig overrides
Aug 5 22:32:59.210057 update-ssh-keys[2077]: Updated "/home/core/.ssh/authorized_keys"
Aug 5 22:32:59.212996 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Aug 5 22:32:59.220093 systemd[1]: Finished sshkeys.service.
Aug 5 22:32:59.292009 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1'
Aug 5 22:32:59.298052 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Aug 5 22:32:59.308433 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1979 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Aug 5 22:32:59.318162 systemd[1]: Starting polkit.service - Authorization Manager...
Aug 5 22:32:59.337651 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO https_proxy:
Aug 5 22:32:59.420177 polkitd[2109]: Started polkitd version 121
Aug 5 22:32:59.430253 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO http_proxy:
Aug 5 22:32:59.447421 polkitd[2109]: Loading rules from directory /etc/polkit-1/rules.d
Aug 5 22:32:59.459078 polkitd[2109]: Loading rules from directory /usr/share/polkit-1/rules.d
Aug 5 22:32:59.460677 polkitd[2109]: Finished loading, compiling and executing 2 rules
Aug 5 22:32:59.466031 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Aug 5 22:32:59.466354 systemd[1]: Started polkit.service - Authorization Manager.
Aug 5 22:32:59.500654 polkitd[2109]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Aug 5 22:32:59.540652 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO no_proxy:
Aug 5 22:32:59.596906 systemd-hostnamed[1979]: Hostname set to (transient)
Aug 5 22:32:59.599140 systemd-resolved[1765]: System hostname changed to 'ip-172-31-23-20'.
Aug 5 22:32:59.643048 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO Checking if agent identity type OnPrem can be assumed
Aug 5 22:32:59.749978 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO Checking if agent identity type EC2 can be assumed
Aug 5 22:32:59.846145 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO Agent will take identity from EC2
Aug 5 22:32:59.903272 containerd[1959]: time="2024-08-05T22:32:59.902337520Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18
Aug 5 22:32:59.945417 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:33:00.048780 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] using named pipe channel for IPC
Aug 5 22:33:00.069301 containerd[1959]: time="2024-08-05T22:33:00.067832614Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Aug 5 22:33:00.069600 containerd[1959]: time="2024-08-05T22:33:00.069571774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.074840113Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.074891919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.075280534Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.075606997Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.075794801Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.075913040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.075934692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.076119552Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.076388321Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.076410880Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Aug 5 22:33:00.076778 containerd[1959]: time="2024-08-05T22:33:00.076426449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Aug 5 22:33:00.077408 containerd[1959]: time="2024-08-05T22:33:00.076761823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Aug 5 22:33:00.077408 containerd[1959]: time="2024-08-05T22:33:00.076785682Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Aug 5 22:33:00.077408 containerd[1959]: time="2024-08-05T22:33:00.076857326Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Aug 5 22:33:00.077408 containerd[1959]: time="2024-08-05T22:33:00.076875540Z" level=info msg="metadata content store policy set" policy=shared
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.102986675Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103100209Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103130326Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103173640Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103195788Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103212265Z" level=info msg="NRI interface is disabled by configuration."
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103234601Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103396441Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103419343Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103494058Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103522281Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103544854Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103570522Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.106653 containerd[1959]: time="2024-08-05T22:33:00.103591253Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.103693305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.103719544Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.103741224Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.103761342Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.103808220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.104119676Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.104477309Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.104512817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.104535240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..."
type=io.containerd.transfer.v1 Aug 5 22:33:00.107592 containerd[1959]: time="2024-08-05T22:33:00.104570356Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.115783404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.115948682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.115984276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116011421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116037482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116069082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116096323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116188408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116223843Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116420827Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116451530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116477211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116503178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116528207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.116815 containerd[1959]: time="2024-08-05T22:33:00.116557416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.119772 containerd[1959]: time="2024-08-05T22:33:00.116584690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:33:00.119772 containerd[1959]: time="2024-08-05T22:33:00.116606047Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:33:00.119854 containerd[1959]: time="2024-08-05T22:33:00.117161998Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:33:00.119854 containerd[1959]: time="2024-08-05T22:33:00.117261622Z" level=info msg="Connect containerd service" Aug 5 22:33:00.119854 containerd[1959]: time="2024-08-05T22:33:00.117327749Z" level=info msg="using legacy CRI server" Aug 5 22:33:00.119854 containerd[1959]: time="2024-08-05T22:33:00.117400506Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:33:00.119854 containerd[1959]: time="2024-08-05T22:33:00.119598492Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:33:00.129951 containerd[1959]: time="2024-08-05T22:33:00.129863381Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:33:00.130115 containerd[1959]: time="2024-08-05T22:33:00.130076889Z" level=info msg="Start subscribing containerd event" Aug 5 22:33:00.130313 containerd[1959]: time="2024-08-05T22:33:00.130143811Z" level=info msg="Start recovering state" Aug 5 22:33:00.130313 containerd[1959]: time="2024-08-05T22:33:00.130238277Z" level=info msg="Start event monitor" Aug 5 22:33:00.130313 containerd[1959]: time="2024-08-05T22:33:00.130260371Z" level=info msg="Start snapshots 
syncer" Aug 5 22:33:00.130313 containerd[1959]: time="2024-08-05T22:33:00.130274242Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:33:00.130313 containerd[1959]: time="2024-08-05T22:33:00.130284090Z" level=info msg="Start streaming server" Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.130890968Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.130947892Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.130968024Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.130988345Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.131297411Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:33:00.131607 containerd[1959]: time="2024-08-05T22:33:00.131350816Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:33:00.131518 systemd[1]: Started containerd.service - containerd container runtime. 
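The "failed to load cni during init ... no network config found in /etc/cni/net.d" entry above is the usual first-boot state: the CRI plugin keeps retrying until something installs a CNI network config list. As an editorial sketch (the network name, subnet, and scratch directory here are illustrative assumptions, not taken from this host), a minimal bridge conflist of the kind that would satisfy that check can be generated like this:

```python
import json
import tempfile
from pathlib import Path

# Illustrative conflist (name/subnet are assumptions, not from this log);
# on a real node this would land in /etc/cni/net.d, which needs root.
conflist = {
    "cniVersion": "0.4.0",
    "name": "containerd-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Write to a scratch directory so the sketch runs unprivileged.
net_d = Path(tempfile.mkdtemp())
conf_path = net_d / "10-containerd-net.conflist"
conf_path.write_text(json.dumps(conflist, indent=2))
```

Once a real conflist appears in /etc/cni/net.d, the cni network conf syncer started above picks it up and the error stops repeating.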
Aug 5 22:33:00.133530 containerd[1959]: time="2024-08-05T22:33:00.131758926Z" level=info msg="containerd successfully booted in 0.240186s" Aug 5 22:33:00.149024 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:33:00.173437 sshd_keygen[1981]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [Registrar] Starting registrar module Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:32:59 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:33:00 INFO [EC2Identity] EC2 registration was successful. Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:33:00 INFO [CredentialRefresher] credentialRefresher has started Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:33:00 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 22:33:00.205989 amazon-ssm-agent[2016]: 2024-08-05 22:33:00 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 22:33:00.248034 amazon-ssm-agent[2016]: 2024-08-05 22:33:00 INFO [CredentialRefresher] Next credential rotation will be in 30.566619529383335 minutes Aug 5 22:33:00.264031 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:33:00.275188 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Aug 5 22:33:00.315447 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:33:00.315823 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:33:00.329761 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:33:00.370263 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:33:00.377054 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:33:00.387058 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:33:00.391388 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:33:00.629526 tar[1949]: linux-amd64/LICENSE Aug 5 22:33:00.629526 tar[1949]: linux-amd64/README.md Aug 5 22:33:00.657855 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:33:01.251737 amazon-ssm-agent[2016]: 2024-08-05 22:33:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 22:33:01.345372 amazon-ssm-agent[2016]: 2024-08-05 22:33:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2182) started Aug 5 22:33:01.420725 ntpd[1940]: Listen normally on 6 eth0 [fe80::4d4:38ff:fee5:7c09%2]:123 Aug 5 22:33:01.422995 ntpd[1940]: 5 Aug 22:33:01 ntpd[1940]: Listen normally on 6 eth0 [fe80::4d4:38ff:fee5:7c09%2]:123 Aug 5 22:33:01.445617 amazon-ssm-agent[2016]: 2024-08-05 22:33:01 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 22:33:01.710876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:33:01.716172 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:33:01.718409 systemd[1]: Startup finished in 924ms (kernel) + 8.770s (initrd) + 9.228s (userspace) = 18.923s. 
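The "Startup finished" summary above mixes ms and s units across the kernel, initrd, and userspace stages. As a small hedged sketch (the helper name is mine, not systemd's), the per-stage figures can be parsed and re-summed to check the reported total:

```python
import re

# Hypothetical helper (not part of the log): parse a systemd "Startup
# finished" line into per-stage durations in seconds. Only the "924ms"
# and "8.770s" unit styles seen in this log are handled.
def parse_startup(line):
    stages = {}
    for value, unit, stage in re.findall(r"([\d.]+)(ms|s) \((\w+)\)", line):
        seconds = float(value) / 1000 if unit == "ms" else float(value)
        stages[stage] = seconds
    return stages

line = ("Startup finished in 924ms (kernel) + 8.770s (initrd) + "
        "9.228s (userspace) = 18.923s.")
stages = parse_startup(line)
total = sum(stages.values())
# 0.924 + 8.770 + 9.228 = 18.922, matching the reported 18.923s up to
# rounding (systemd sums the unrounded stage times before printing).
```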
Aug 5 22:33:01.723347 (kubelet)[2198]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:33:02.965813 kubelet[2198]: E0805 22:33:02.965623 2198 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:33:02.968512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:33:02.969091 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:33:02.969445 systemd[1]: kubelet.service: Consumed 1.072s CPU time. Aug 5 22:33:05.685446 systemd-resolved[1765]: Clock change detected. Flushing caches. Aug 5 22:33:08.060727 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:33:08.069514 systemd[1]: Started sshd@0-172.31.23.20:22-147.75.109.163:60534.service - OpenSSH per-connection server daemon (147.75.109.163:60534). Aug 5 22:33:08.310155 sshd[2211]: Accepted publickey for core from 147.75.109.163 port 60534 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:08.313446 sshd[2211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:08.337296 systemd-logind[1945]: New session 1 of user core. Aug 5 22:33:08.341386 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:33:08.352043 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:33:08.379810 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:33:08.389004 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Aug 5 22:33:08.399853 (systemd)[2215]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:08.563110 systemd[2215]: Queued start job for default target default.target. Aug 5 22:33:08.574003 systemd[2215]: Created slice app.slice - User Application Slice. Aug 5 22:33:08.574048 systemd[2215]: Reached target paths.target - Paths. Aug 5 22:33:08.574070 systemd[2215]: Reached target timers.target - Timers. Aug 5 22:33:08.575849 systemd[2215]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:33:08.606942 systemd[2215]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:33:08.608131 systemd[2215]: Reached target sockets.target - Sockets. Aug 5 22:33:08.608343 systemd[2215]: Reached target basic.target - Basic System. Aug 5 22:33:08.609568 systemd[2215]: Reached target default.target - Main User Target. Aug 5 22:33:08.609659 systemd[2215]: Startup finished in 197ms. Aug 5 22:33:08.609891 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:33:08.618945 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:33:08.776082 systemd[1]: Started sshd@1-172.31.23.20:22-147.75.109.163:60540.service - OpenSSH per-connection server daemon (147.75.109.163:60540). Aug 5 22:33:08.946418 sshd[2226]: Accepted publickey for core from 147.75.109.163 port 60540 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:08.948102 sshd[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:08.954251 systemd-logind[1945]: New session 2 of user core. Aug 5 22:33:08.960728 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:33:09.082119 sshd[2226]: pam_unix(sshd:session): session closed for user core Aug 5 22:33:09.086160 systemd[1]: sshd@1-172.31.23.20:22-147.75.109.163:60540.service: Deactivated successfully. Aug 5 22:33:09.088613 systemd[1]: session-2.scope: Deactivated successfully. 
Aug 5 22:33:09.090785 systemd-logind[1945]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:33:09.092202 systemd-logind[1945]: Removed session 2. Aug 5 22:33:09.135595 systemd[1]: Started sshd@2-172.31.23.20:22-147.75.109.163:60550.service - OpenSSH per-connection server daemon (147.75.109.163:60550). Aug 5 22:33:09.315681 sshd[2233]: Accepted publickey for core from 147.75.109.163 port 60550 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:09.317226 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:09.327452 systemd-logind[1945]: New session 3 of user core. Aug 5 22:33:09.336721 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:33:09.454336 sshd[2233]: pam_unix(sshd:session): session closed for user core Aug 5 22:33:09.457812 systemd[1]: sshd@2-172.31.23.20:22-147.75.109.163:60550.service: Deactivated successfully. Aug 5 22:33:09.460220 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:33:09.462541 systemd-logind[1945]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:33:09.463759 systemd-logind[1945]: Removed session 3. Aug 5 22:33:09.491358 systemd[1]: Started sshd@3-172.31.23.20:22-147.75.109.163:60560.service - OpenSSH per-connection server daemon (147.75.109.163:60560). Aug 5 22:33:09.649497 sshd[2240]: Accepted publickey for core from 147.75.109.163 port 60560 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:09.651887 sshd[2240]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:09.658918 systemd-logind[1945]: New session 4 of user core. Aug 5 22:33:09.661652 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:33:09.791934 sshd[2240]: pam_unix(sshd:session): session closed for user core Aug 5 22:33:09.804095 systemd[1]: sshd@3-172.31.23.20:22-147.75.109.163:60560.service: Deactivated successfully. 
Aug 5 22:33:09.810642 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:33:09.813671 systemd-logind[1945]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:33:09.831102 systemd[1]: Started sshd@4-172.31.23.20:22-147.75.109.163:60576.service - OpenSSH per-connection server daemon (147.75.109.163:60576). Aug 5 22:33:09.833139 systemd-logind[1945]: Removed session 4. Aug 5 22:33:10.007459 sshd[2247]: Accepted publickey for core from 147.75.109.163 port 60576 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:10.009599 sshd[2247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:10.018412 systemd-logind[1945]: New session 5 of user core. Aug 5 22:33:10.024813 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:33:10.171924 sudo[2250]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:33:10.172377 sudo[2250]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:33:10.198561 sudo[2250]: pam_unix(sudo:session): session closed for user root Aug 5 22:33:10.222443 sshd[2247]: pam_unix(sshd:session): session closed for user core Aug 5 22:33:10.230059 systemd[1]: sshd@4-172.31.23.20:22-147.75.109.163:60576.service: Deactivated successfully. Aug 5 22:33:10.234631 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:33:10.236693 systemd-logind[1945]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:33:10.238922 systemd-logind[1945]: Removed session 5. Aug 5 22:33:10.263799 systemd[1]: Started sshd@5-172.31.23.20:22-147.75.109.163:60582.service - OpenSSH per-connection server daemon (147.75.109.163:60582). 
Aug 5 22:33:10.419480 sshd[2255]: Accepted publickey for core from 147.75.109.163 port 60582 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:10.422447 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:10.437423 systemd-logind[1945]: New session 6 of user core. Aug 5 22:33:10.444710 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:33:10.556545 sudo[2259]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:33:10.558179 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:33:10.564386 sudo[2259]: pam_unix(sudo:session): session closed for user root Aug 5 22:33:10.572951 sudo[2258]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:33:10.573451 sudo[2258]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:33:10.597427 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:33:10.599741 auditctl[2262]: No rules Aug 5 22:33:10.601279 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:33:10.602036 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:33:10.609007 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:33:10.659611 augenrules[2280]: No rules Aug 5 22:33:10.661413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:33:10.663891 sudo[2258]: pam_unix(sudo:session): session closed for user root Aug 5 22:33:10.688130 sshd[2255]: pam_unix(sshd:session): session closed for user core Aug 5 22:33:10.694219 systemd-logind[1945]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:33:10.695453 systemd[1]: sshd@5-172.31.23.20:22-147.75.109.163:60582.service: Deactivated successfully. 
Aug 5 22:33:10.698791 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:33:10.700004 systemd-logind[1945]: Removed session 6. Aug 5 22:33:10.735903 systemd[1]: Started sshd@6-172.31.23.20:22-147.75.109.163:60584.service - OpenSSH per-connection server daemon (147.75.109.163:60584). Aug 5 22:33:10.915241 sshd[2288]: Accepted publickey for core from 147.75.109.163 port 60584 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:33:10.917139 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:33:10.923828 systemd-logind[1945]: New session 7 of user core. Aug 5 22:33:10.931809 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:33:11.042148 sudo[2291]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:33:11.042537 sudo[2291]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:33:11.286269 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:33:11.287375 (dockerd)[2301]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:33:11.821402 dockerd[2301]: time="2024-08-05T22:33:11.821161193Z" level=info msg="Starting up" Aug 5 22:33:11.905553 dockerd[2301]: time="2024-08-05T22:33:11.905502388Z" level=info msg="Loading containers: start." Aug 5 22:33:12.063626 kernel: Initializing XFRM netlink socket Aug 5 22:33:12.104002 (udev-worker)[2312]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:33:12.180101 systemd-networkd[1807]: docker0: Link UP Aug 5 22:33:12.197093 dockerd[2301]: time="2024-08-05T22:33:12.197051291Z" level=info msg="Loading containers: done." 
Aug 5 22:33:12.332276 dockerd[2301]: time="2024-08-05T22:33:12.332223506Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:33:12.332565 dockerd[2301]: time="2024-08-05T22:33:12.332536985Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:33:12.332685 dockerd[2301]: time="2024-08-05T22:33:12.332662962Z" level=info msg="Daemon has completed initialization" Aug 5 22:33:12.375455 dockerd[2301]: time="2024-08-05T22:33:12.374762533Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:33:12.374993 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:33:13.383088 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:33:13.392934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:33:13.443234 containerd[1959]: time="2024-08-05T22:33:13.443193498Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\"" Aug 5 22:33:13.719954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:33:13.733071 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:33:13.804846 kubelet[2441]: E0805 22:33:13.804700 2441 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:33:13.809278 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:33:13.809835 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
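The kubelet failure above (and the earlier identical one, followed by systemd's "Scheduled restart job" entries) is the expected state on a node where /var/lib/kubelet/config.yaml has not yet been written — that file is typically produced by `kubeadm init` or `kubeadm join`, and systemd keeps rescheduling the unit until it exists. A hedged sketch pulling the missing path out of the error text, assuming the exact message format shown in this log:

```python
import re

# Error text abridged from the kubelet entry above ("..." marks the
# truncation; the full message repeats the same path).
err = ('failed to load kubelet config file, path: '
       '/var/lib/kubelet/config.yaml, error: ...')

# Extract the first "path: <value>," field from the message.
match = re.search(r"path: (\S+?),", err)
missing = match.group(1)
```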
Aug 5 22:33:14.121949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2821788274.mount: Deactivated successfully.
Aug 5 22:33:16.233320 containerd[1959]: time="2024-08-05T22:33:16.233262303Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:16.238178 containerd[1959]: time="2024-08-05T22:33:16.238098079Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.3: active requests=0, bytes read=32773238"
Aug 5 22:33:16.240120 containerd[1959]: time="2024-08-05T22:33:16.240053542Z" level=info msg="ImageCreate event name:\"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:16.245342 containerd[1959]: time="2024-08-05T22:33:16.245269497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:16.246891 containerd[1959]: time="2024-08-05T22:33:16.246438916Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.3\" with image id \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c\", size \"32770038\" in 2.803202936s"
Aug 5 22:33:16.246891 containerd[1959]: time="2024-08-05T22:33:16.246502194Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.3\" returns image reference \"sha256:1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d\""
Aug 5 22:33:16.289084 containerd[1959]: time="2024-08-05T22:33:16.289048868Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\""
Aug 5 22:33:18.581391 containerd[1959]: time="2024-08-05T22:33:18.581340115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:18.583488 containerd[1959]: time="2024-08-05T22:33:18.583402395Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.3: active requests=0, bytes read=29589535"
Aug 5 22:33:18.584945 containerd[1959]: time="2024-08-05T22:33:18.584887644Z" level=info msg="ImageCreate event name:\"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:18.589482 containerd[1959]: time="2024-08-05T22:33:18.589149941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:18.590352 containerd[1959]: time="2024-08-05T22:33:18.590308558Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.3\" with image id \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7\", size \"31139481\" in 2.301220123s"
Aug 5 22:33:18.590451 containerd[1959]: time="2024-08-05T22:33:18.590366813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.3\" returns image reference \"sha256:76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e\""
Aug 5 22:33:18.621246 containerd[1959]: time="2024-08-05T22:33:18.621197785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\""
Aug 5 22:33:20.238574 containerd[1959]: time="2024-08-05T22:33:20.238522710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:20.239899 containerd[1959]: time="2024-08-05T22:33:20.239854540Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.3: active requests=0, bytes read=17779544"
Aug 5 22:33:20.240965 containerd[1959]: time="2024-08-05T22:33:20.240926516Z" level=info msg="ImageCreate event name:\"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:20.245285 containerd[1959]: time="2024-08-05T22:33:20.245146904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:20.247111 containerd[1959]: time="2024-08-05T22:33:20.247064466Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.3\" with image id \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4\", size \"19329508\" in 1.625821239s"
Aug 5 22:33:20.247237 containerd[1959]: time="2024-08-05T22:33:20.247115973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.3\" returns image reference \"sha256:3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2\""
Aug 5 22:33:20.278616 containerd[1959]: time="2024-08-05T22:33:20.278579051Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\""
Aug 5 22:33:21.711227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1855546146.mount: Deactivated successfully.
Aug 5 22:33:22.531790 containerd[1959]: time="2024-08-05T22:33:22.531738271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:22.533719 containerd[1959]: time="2024-08-05T22:33:22.533561735Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.3: active requests=0, bytes read=29036435"
Aug 5 22:33:22.535674 containerd[1959]: time="2024-08-05T22:33:22.534822753Z" level=info msg="ImageCreate event name:\"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:22.538913 containerd[1959]: time="2024-08-05T22:33:22.538873156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:22.539991 containerd[1959]: time="2024-08-05T22:33:22.539822204Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.3\" with image id \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\", repo tag \"registry.k8s.io/kube-proxy:v1.30.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65\", size \"29035454\" in 2.261141889s"
Aug 5 22:33:22.540114 containerd[1959]: time="2024-08-05T22:33:22.539990591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.3\" returns image reference \"sha256:55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1\""
Aug 5 22:33:22.591191 containerd[1959]: time="2024-08-05T22:33:22.591122428Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Aug 5 22:33:23.204366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1499003772.mount: Deactivated successfully.
Aug 5 22:33:23.881519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 5 22:33:23.895978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:33:24.298815 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:33:24.307411 (kubelet)[2590]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 5 22:33:24.414102 kubelet[2590]: E0805 22:33:24.414046 2590 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 5 22:33:24.420159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 5 22:33:24.420692 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:33:24.792784 containerd[1959]: time="2024-08-05T22:33:24.792729455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:24.796363 containerd[1959]: time="2024-08-05T22:33:24.796302441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Aug 5 22:33:24.798340 containerd[1959]: time="2024-08-05T22:33:24.797679146Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:24.802358 containerd[1959]: time="2024-08-05T22:33:24.802312365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:24.805303 containerd[1959]: time="2024-08-05T22:33:24.804042634Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.212483124s"
Aug 5 22:33:24.805303 containerd[1959]: time="2024-08-05T22:33:24.804092372Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Aug 5 22:33:24.836050 containerd[1959]: time="2024-08-05T22:33:24.835981867Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Aug 5 22:33:25.317661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2658393235.mount: Deactivated successfully.
Aug 5 22:33:25.329086 containerd[1959]: time="2024-08-05T22:33:25.329034301Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:25.330645 containerd[1959]: time="2024-08-05T22:33:25.330571336Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Aug 5 22:33:25.332917 containerd[1959]: time="2024-08-05T22:33:25.331462536Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:25.334975 containerd[1959]: time="2024-08-05T22:33:25.333999777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:25.334975 containerd[1959]: time="2024-08-05T22:33:25.334815483Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 498.753461ms"
Aug 5 22:33:25.334975 containerd[1959]: time="2024-08-05T22:33:25.334848993Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Aug 5 22:33:25.370814 containerd[1959]: time="2024-08-05T22:33:25.370765333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Aug 5 22:33:25.958137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177437208.mount: Deactivated successfully.
Aug 5 22:33:28.578418 containerd[1959]: time="2024-08-05T22:33:28.578357139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:28.579889 containerd[1959]: time="2024-08-05T22:33:28.579748366Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Aug 5 22:33:28.582498 containerd[1959]: time="2024-08-05T22:33:28.581560054Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:28.585872 containerd[1959]: time="2024-08-05T22:33:28.585383922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:28.586976 containerd[1959]: time="2024-08-05T22:33:28.586936243Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.216116572s"
Aug 5 22:33:28.587062 containerd[1959]: time="2024-08-05T22:33:28.586986355Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Aug 5 22:33:29.877271 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 5 22:33:32.208921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:33:32.215835 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:33:32.266195 systemd[1]: Reloading requested from client PID 2727 ('systemctl') (unit session-7.scope)...
Aug 5 22:33:32.266218 systemd[1]: Reloading...
Aug 5 22:33:32.435528 zram_generator::config[2766]: No configuration found.
Aug 5 22:33:32.600856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 5 22:33:32.699748 systemd[1]: Reloading finished in 432 ms.
Aug 5 22:33:32.767333 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Aug 5 22:33:32.767616 systemd[1]: kubelet.service: Failed with result 'signal'.
Aug 5 22:33:32.767940 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:33:32.776241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 5 22:33:33.129967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 5 22:33:33.148984 (kubelet)[2826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 5 22:33:33.202227 kubelet[2826]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:33:33.202597 kubelet[2826]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 5 22:33:33.202597 kubelet[2826]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 5 22:33:33.204298 kubelet[2826]: I0805 22:33:33.204229 2826 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 5 22:33:33.781094 kubelet[2826]: I0805 22:33:33.780876 2826 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Aug 5 22:33:33.781094 kubelet[2826]: I0805 22:33:33.781089 2826 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 5 22:33:33.782052 kubelet[2826]: I0805 22:33:33.782023 2826 server.go:927] "Client rotation is on, will bootstrap in background"
Aug 5 22:33:33.828997 kubelet[2826]: I0805 22:33:33.828952 2826 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 5 22:33:33.833851 kubelet[2826]: E0805 22:33:33.833812 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.862031 kubelet[2826]: I0805 22:33:33.861996 2826 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 5 22:33:33.867099 kubelet[2826]: I0805 22:33:33.867028 2826 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 5 22:33:33.867329 kubelet[2826]: I0805 22:33:33.867092 2826 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Aug 5 22:33:33.867502 kubelet[2826]: I0805 22:33:33.867341 2826 topology_manager.go:138] "Creating topology manager with none policy"
Aug 5 22:33:33.867502 kubelet[2826]: I0805 22:33:33.867356 2826 container_manager_linux.go:301] "Creating device plugin manager"
Aug 5 22:33:33.869620 kubelet[2826]: I0805 22:33:33.869587 2826 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:33:33.871196 kubelet[2826]: I0805 22:33:33.871171 2826 kubelet.go:400] "Attempting to sync node with API server"
Aug 5 22:33:33.871196 kubelet[2826]: I0805 22:33:33.871200 2826 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 5 22:33:33.871413 kubelet[2826]: I0805 22:33:33.871239 2826 kubelet.go:312] "Adding apiserver pod source"
Aug 5 22:33:33.871413 kubelet[2826]: I0805 22:33:33.871266 2826 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 5 22:33:33.876793 kubelet[2826]: W0805 22:33:33.875803 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.876793 kubelet[2826]: E0805 22:33:33.875879 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.876793 kubelet[2826]: W0805 22:33:33.876112 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-20&limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.876793 kubelet[2826]: E0805 22:33:33.876165 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-20&limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.877157 kubelet[2826]: I0805 22:33:33.876960 2826 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1"
Aug 5 22:33:33.880613 kubelet[2826]: I0805 22:33:33.879402 2826 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 5 22:33:33.880613 kubelet[2826]: W0805 22:33:33.879498 2826 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 5 22:33:33.880613 kubelet[2826]: I0805 22:33:33.880344 2826 server.go:1264] "Started kubelet"
Aug 5 22:33:33.897662 kubelet[2826]: E0805 22:33:33.897511 2826 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.20:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.20:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-20.17e8f5e341d82b17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-20,UID:ip-172-31-23-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-20,},FirstTimestamp:2024-08-05 22:33:33.880281879 +0000 UTC m=+0.726361352,LastTimestamp:2024-08-05 22:33:33.880281879 +0000 UTC m=+0.726361352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-20,}"
Aug 5 22:33:33.898450 kubelet[2826]: I0805 22:33:33.898376 2826 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 5 22:33:33.902075 kubelet[2826]: I0805 22:33:33.901738 2826 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 5 22:33:33.904535 kubelet[2826]: I0805 22:33:33.904492 2826 server.go:455] "Adding debug handlers to kubelet server"
Aug 5 22:33:33.912540 kubelet[2826]: I0805 22:33:33.912388 2826 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 5 22:33:33.915611 kubelet[2826]: I0805 22:33:33.915016 2826 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 5 22:33:33.918982 kubelet[2826]: I0805 22:33:33.918957 2826 volume_manager.go:291] "Starting Kubelet Volume Manager"
Aug 5 22:33:33.921536 kubelet[2826]: I0805 22:33:33.919661 2826 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Aug 5 22:33:33.921536 kubelet[2826]: I0805 22:33:33.919900 2826 reconciler.go:26] "Reconciler: start to sync state"
Aug 5 22:33:33.925179 kubelet[2826]: W0805 22:33:33.925084 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.925657 kubelet[2826]: E0805 22:33:33.925314 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.925726 kubelet[2826]: E0805 22:33:33.925640 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": dial tcp 172.31.23.20:6443: connect: connection refused" interval="200ms"
Aug 5 22:33:33.927338 kubelet[2826]: E0805 22:33:33.927296 2826 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 5 22:33:33.929460 kubelet[2826]: I0805 22:33:33.929441 2826 factory.go:221] Registration of the containerd container factory successfully
Aug 5 22:33:33.929633 kubelet[2826]: I0805 22:33:33.929620 2826 factory.go:221] Registration of the systemd container factory successfully
Aug 5 22:33:33.930394 kubelet[2826]: I0805 22:33:33.929811 2826 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 5 22:33:33.969285 kubelet[2826]: I0805 22:33:33.969233 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 5 22:33:33.974367 kubelet[2826]: I0805 22:33:33.974336 2826 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 5 22:33:33.974588 kubelet[2826]: I0805 22:33:33.974374 2826 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 5 22:33:33.974588 kubelet[2826]: I0805 22:33:33.974393 2826 kubelet.go:2337] "Starting kubelet main sync loop"
Aug 5 22:33:33.974588 kubelet[2826]: E0805 22:33:33.974442 2826 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 5 22:33:33.975759 kubelet[2826]: I0805 22:33:33.975734 2826 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 5 22:33:33.975759 kubelet[2826]: I0805 22:33:33.975750 2826 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 5 22:33:33.976702 kubelet[2826]: I0805 22:33:33.975772 2826 state_mem.go:36] "Initialized new in-memory state store"
Aug 5 22:33:33.979334 kubelet[2826]: W0805 22:33:33.979247 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.984285 kubelet[2826]: E0805 22:33:33.979340 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:33.986384 kubelet[2826]: I0805 22:33:33.986350 2826 policy_none.go:49] "None policy: Start"
Aug 5 22:33:34.000235 kubelet[2826]: I0805 22:33:34.000196 2826 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 5 22:33:34.000235 kubelet[2826]: I0805 22:33:34.000242 2826 state_mem.go:35] "Initializing new in-memory state store"
Aug 5 22:33:34.022951 kubelet[2826]: I0805 22:33:34.022919 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20"
Aug 5 22:33:34.024446 kubelet[2826]: E0805 22:33:34.024353 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.20:6443/api/v1/nodes\": dial tcp 172.31.23.20:6443: connect: connection refused" node="ip-172-31-23-20"
Aug 5 22:33:34.026491 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 5 22:33:34.043649 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 5 22:33:34.053731 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 5 22:33:34.071587 kubelet[2826]: I0805 22:33:34.071554 2826 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 5 22:33:34.071815 kubelet[2826]: I0805 22:33:34.071778 2826 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 5 22:33:34.071990 kubelet[2826]: I0805 22:33:34.071929 2826 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 5 22:33:34.077180 kubelet[2826]: I0805 22:33:34.074692 2826 topology_manager.go:215] "Topology Admit Handler" podUID="47ee0ec81eaa90cfd868b3a29e720361" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-20"
Aug 5 22:33:34.081664 kubelet[2826]: E0805 22:33:34.081606 2826 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-20\" not found"
Aug 5 22:33:34.091379 kubelet[2826]: I0805 22:33:34.088970 2826 topology_manager.go:215] "Topology Admit Handler" podUID="bc001c6e4d4d1baee280c546c3b741da" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:34.102235 kubelet[2826]: I0805 22:33:34.101726 2826 topology_manager.go:215] "Topology Admit Handler" podUID="269c42419e39a2350b435e34555cc7da" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.125107 kubelet[2826]: I0805 22:33:34.122365 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.125107 kubelet[2826]: I0805 22:33:34.122420 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.125107 kubelet[2826]: I0805 22:33:34.122447 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.125107 kubelet[2826]: I0805 22:33:34.122489 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.125107 kubelet[2826]: I0805 22:33:34.122516 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47ee0ec81eaa90cfd868b3a29e720361-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-20\" (UID: \"47ee0ec81eaa90cfd868b3a29e720361\") " pod="kube-system/kube-scheduler-ip-172-31-23-20"
Aug 5 22:33:34.125597 kubelet[2826]: I0805 22:33:34.122543 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-ca-certs\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:34.125597 kubelet[2826]: I0805 22:33:34.122567 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:34.125597 kubelet[2826]: I0805 22:33:34.124625 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:34.125597 kubelet[2826]: I0805 22:33:34.124673 2826 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:34.126504 kubelet[2826]: E0805 22:33:34.126443 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": dial tcp 172.31.23.20:6443: connect: connection refused" interval="400ms"
Aug 5 22:33:34.131511 systemd[1]: Created slice kubepods-burstable-pod47ee0ec81eaa90cfd868b3a29e720361.slice - libcontainer container kubepods-burstable-pod47ee0ec81eaa90cfd868b3a29e720361.slice.
Aug 5 22:33:34.154177 systemd[1]: Created slice kubepods-burstable-podbc001c6e4d4d1baee280c546c3b741da.slice - libcontainer container kubepods-burstable-podbc001c6e4d4d1baee280c546c3b741da.slice.
Aug 5 22:33:34.159917 systemd[1]: Created slice kubepods-burstable-pod269c42419e39a2350b435e34555cc7da.slice - libcontainer container kubepods-burstable-pod269c42419e39a2350b435e34555cc7da.slice.
Aug 5 22:33:34.227431 kubelet[2826]: I0805 22:33:34.227401 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20"
Aug 5 22:33:34.227941 kubelet[2826]: E0805 22:33:34.227773 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.20:6443/api/v1/nodes\": dial tcp 172.31.23.20:6443: connect: connection refused" node="ip-172-31-23-20"
Aug 5 22:33:34.451436 containerd[1959]: time="2024-08-05T22:33:34.451390870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-20,Uid:47ee0ec81eaa90cfd868b3a29e720361,Namespace:kube-system,Attempt:0,}"
Aug 5 22:33:34.463848 containerd[1959]: time="2024-08-05T22:33:34.463804478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-20,Uid:bc001c6e4d4d1baee280c546c3b741da,Namespace:kube-system,Attempt:0,}"
Aug 5 22:33:34.466372 containerd[1959]: time="2024-08-05T22:33:34.466005065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-20,Uid:269c42419e39a2350b435e34555cc7da,Namespace:kube-system,Attempt:0,}"
Aug 5 22:33:34.529647 kubelet[2826]: E0805 22:33:34.529581 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": dial tcp 172.31.23.20:6443: connect: connection refused" interval="800ms"
Aug 5 22:33:34.632461 kubelet[2826]: I0805 22:33:34.632426 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20"
Aug 5 22:33:34.633099 kubelet[2826]: E0805 22:33:34.633063 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.20:6443/api/v1/nodes\": dial tcp 172.31.23.20:6443: connect: connection refused" node="ip-172-31-23-20"
Aug 5 22:33:34.949811 kubelet[2826]: W0805 22:33:34.949770 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:34.949811 kubelet[2826]: E0805 22:33:34.949815 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:34.968952 kubelet[2826]: W0805 22:33:34.968852 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:34.968952 kubelet[2826]: E0805 22:33:34.968957 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:34.991272 kubelet[2826]: W0805 22:33:34.991190 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:34.991406 kubelet[2826]: E0805 22:33:34.991341 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused
Aug 5 22:33:35.048396
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483935787.mount: Deactivated successfully. Aug 5 22:33:35.059443 containerd[1959]: time="2024-08-05T22:33:35.059333939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:33:35.061176 containerd[1959]: time="2024-08-05T22:33:35.061133196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:33:35.062936 containerd[1959]: time="2024-08-05T22:33:35.062774340Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:33:35.064279 containerd[1959]: time="2024-08-05T22:33:35.064217850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 5 22:33:35.066565 containerd[1959]: time="2024-08-05T22:33:35.066527588Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:33:35.073552 containerd[1959]: time="2024-08-05T22:33:35.071586042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:33:35.073552 containerd[1959]: time="2024-08-05T22:33:35.072518977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:33:35.081003 containerd[1959]: time="2024-08-05T22:33:35.080903121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:33:35.083522 containerd[1959]: time="2024-08-05T22:33:35.083461223Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.932447ms" Aug 5 22:33:35.085715 containerd[1959]: time="2024-08-05T22:33:35.085667007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.561254ms" Aug 5 22:33:35.090154 containerd[1959]: time="2024-08-05T22:33:35.090108203Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 626.197737ms" Aug 5 22:33:35.323212 containerd[1959]: time="2024-08-05T22:33:35.322811877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:33:35.323212 containerd[1959]: time="2024-08-05T22:33:35.322895836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.323212 containerd[1959]: time="2024-08-05T22:33:35.322926094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:33:35.323212 containerd[1959]: time="2024-08-05T22:33:35.322947613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.331501 kubelet[2826]: E0805 22:33:35.331140 2826 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": dial tcp 172.31.23.20:6443: connect: connection refused" interval="1.6s" Aug 5 22:33:35.335446 containerd[1959]: time="2024-08-05T22:33:35.334502565Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:33:35.335446 containerd[1959]: time="2024-08-05T22:33:35.334588797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.335446 containerd[1959]: time="2024-08-05T22:33:35.334626857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:33:35.335446 containerd[1959]: time="2024-08-05T22:33:35.334749818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.344868 containerd[1959]: time="2024-08-05T22:33:35.344549913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:33:35.344868 containerd[1959]: time="2024-08-05T22:33:35.344772694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.345525 containerd[1959]: time="2024-08-05T22:33:35.345212786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:33:35.345525 containerd[1959]: time="2024-08-05T22:33:35.345295038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:33:35.362087 kubelet[2826]: W0805 22:33:35.361761 2826 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-20&limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused Aug 5 22:33:35.362087 kubelet[2826]: E0805 22:33:35.361847 2826 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-20&limit=500&resourceVersion=0": dial tcp 172.31.23.20:6443: connect: connection refused Aug 5 22:33:35.364730 systemd[1]: Started cri-containerd-79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148.scope - libcontainer container 79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148. Aug 5 22:33:35.395038 systemd[1]: Started cri-containerd-309246516b507f6fea07c6078d14dbad4e091a20573c6c0b42423f42a2232d83.scope - libcontainer container 309246516b507f6fea07c6078d14dbad4e091a20573c6c0b42423f42a2232d83. Aug 5 22:33:35.410187 systemd[1]: Started cri-containerd-952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6.scope - libcontainer container 952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6. 
Aug 5 22:33:35.436121 kubelet[2826]: I0805 22:33:35.435700 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20" Aug 5 22:33:35.437414 kubelet[2826]: E0805 22:33:35.437371 2826 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.20:6443/api/v1/nodes\": dial tcp 172.31.23.20:6443: connect: connection refused" node="ip-172-31-23-20" Aug 5 22:33:35.509537 containerd[1959]: time="2024-08-05T22:33:35.509397658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-20,Uid:269c42419e39a2350b435e34555cc7da,Namespace:kube-system,Attempt:0,} returns sandbox id \"79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148\"" Aug 5 22:33:35.519349 containerd[1959]: time="2024-08-05T22:33:35.519299579Z" level=info msg="CreateContainer within sandbox \"79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:33:35.556739 containerd[1959]: time="2024-08-05T22:33:35.556398341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-20,Uid:bc001c6e4d4d1baee280c546c3b741da,Namespace:kube-system,Attempt:0,} returns sandbox id \"309246516b507f6fea07c6078d14dbad4e091a20573c6c0b42423f42a2232d83\"" Aug 5 22:33:35.566928 containerd[1959]: time="2024-08-05T22:33:35.566834391Z" level=info msg="CreateContainer within sandbox \"309246516b507f6fea07c6078d14dbad4e091a20573c6c0b42423f42a2232d83\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:33:35.574518 containerd[1959]: time="2024-08-05T22:33:35.573815353Z" level=info msg="CreateContainer within sandbox \"79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64\"" Aug 5 22:33:35.576490 containerd[1959]: 
time="2024-08-05T22:33:35.575833114Z" level=info msg="StartContainer for \"5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64\"" Aug 5 22:33:35.601972 containerd[1959]: time="2024-08-05T22:33:35.601927580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-20,Uid:47ee0ec81eaa90cfd868b3a29e720361,Namespace:kube-system,Attempt:0,} returns sandbox id \"952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6\"" Aug 5 22:33:35.602708 containerd[1959]: time="2024-08-05T22:33:35.602680463Z" level=info msg="CreateContainer within sandbox \"309246516b507f6fea07c6078d14dbad4e091a20573c6c0b42423f42a2232d83\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"46094f4f7945bc006502a6d31d28a1530356d7af23c53b2265c706abab2053c8\"" Aug 5 22:33:35.603996 containerd[1959]: time="2024-08-05T22:33:35.603454628Z" level=info msg="StartContainer for \"46094f4f7945bc006502a6d31d28a1530356d7af23c53b2265c706abab2053c8\"" Aug 5 22:33:35.609687 containerd[1959]: time="2024-08-05T22:33:35.609646207Z" level=info msg="CreateContainer within sandbox \"952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:33:35.645289 containerd[1959]: time="2024-08-05T22:33:35.645220850Z" level=info msg="CreateContainer within sandbox \"952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731\"" Aug 5 22:33:35.649431 containerd[1959]: time="2024-08-05T22:33:35.647405595Z" level=info msg="StartContainer for \"1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731\"" Aug 5 22:33:35.658225 systemd[1]: Started cri-containerd-5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64.scope - libcontainer container 5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64. 
Aug 5 22:33:35.669462 systemd[1]: Started cri-containerd-46094f4f7945bc006502a6d31d28a1530356d7af23c53b2265c706abab2053c8.scope - libcontainer container 46094f4f7945bc006502a6d31d28a1530356d7af23c53b2265c706abab2053c8. Aug 5 22:33:35.723816 systemd[1]: Started cri-containerd-1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731.scope - libcontainer container 1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731. Aug 5 22:33:35.798825 containerd[1959]: time="2024-08-05T22:33:35.798774254Z" level=info msg="StartContainer for \"5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64\" returns successfully" Aug 5 22:33:35.892628 containerd[1959]: time="2024-08-05T22:33:35.892503796Z" level=info msg="StartContainer for \"46094f4f7945bc006502a6d31d28a1530356d7af23c53b2265c706abab2053c8\" returns successfully" Aug 5 22:33:35.910396 kubelet[2826]: E0805 22:33:35.910355 2826 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.20:6443: connect: connection refused Aug 5 22:33:35.920946 containerd[1959]: time="2024-08-05T22:33:35.919851282Z" level=info msg="StartContainer for \"1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731\" returns successfully" Aug 5 22:33:37.042908 kubelet[2826]: I0805 22:33:37.039789 2826 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20" Aug 5 22:33:38.624180 kubelet[2826]: E0805 22:33:38.624139 2826 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-20\" not found" node="ip-172-31-23-20" Aug 5 22:33:38.723082 kubelet[2826]: E0805 22:33:38.722976 2826 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" 
event="&Event{ObjectMeta:{ip-172-31-23-20.17e8f5e341d82b17 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-20,UID:ip-172-31-23-20,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-20,},FirstTimestamp:2024-08-05 22:33:33.880281879 +0000 UTC m=+0.726361352,LastTimestamp:2024-08-05 22:33:33.880281879 +0000 UTC m=+0.726361352,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-20,}" Aug 5 22:33:38.759128 kubelet[2826]: I0805 22:33:38.759077 2826 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-20" Aug 5 22:33:38.876740 kubelet[2826]: I0805 22:33:38.875849 2826 apiserver.go:52] "Watching apiserver" Aug 5 22:33:38.920379 kubelet[2826]: I0805 22:33:38.920340 2826 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Aug 5 22:33:41.120784 systemd[1]: Reloading requested from client PID 3100 ('systemctl') (unit session-7.scope)... Aug 5 22:33:41.120803 systemd[1]: Reloading... Aug 5 22:33:41.258565 zram_generator::config[3135]: No configuration found. Aug 5 22:33:41.420724 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:33:41.532414 systemd[1]: Reloading finished in 411 ms. Aug 5 22:33:41.591902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:33:41.610028 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:33:41.610263 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:33:41.619943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 5 22:33:41.896009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:33:41.909370 (kubelet)[3195]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:33:41.994345 kubelet[3195]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:33:41.994814 kubelet[3195]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:33:41.994814 kubelet[3195]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:33:41.996653 kubelet[3195]: I0805 22:33:41.996294 3195 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:33:42.002443 kubelet[3195]: I0805 22:33:42.002415 3195 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Aug 5 22:33:42.002726 kubelet[3195]: I0805 22:33:42.002681 3195 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:33:42.003543 kubelet[3195]: I0805 22:33:42.003019 3195 server.go:927] "Client rotation is on, will bootstrap in background" Aug 5 22:33:42.004594 kubelet[3195]: I0805 22:33:42.004578 3195 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 5 22:33:42.009158 kubelet[3195]: I0805 22:33:42.009126 3195 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:33:42.027082 kubelet[3195]: I0805 22:33:42.027048 3195 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:33:42.027388 kubelet[3195]: I0805 22:33:42.027348 3195 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:33:42.027615 kubelet[3195]: I0805 22:33:42.027386 3195 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-20","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManag
erReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:33:42.027769 kubelet[3195]: I0805 22:33:42.027638 3195 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:33:42.027769 kubelet[3195]: I0805 22:33:42.027655 3195 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:33:42.028848 kubelet[3195]: I0805 22:33:42.028823 3195 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:33:42.028987 kubelet[3195]: I0805 22:33:42.028970 3195 kubelet.go:400] "Attempting to sync node with API server" Aug 5 22:33:42.029058 kubelet[3195]: I0805 22:33:42.028990 3195 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:33:42.029058 kubelet[3195]: I0805 22:33:42.029021 3195 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:33:42.032978 kubelet[3195]: I0805 22:33:42.031320 3195 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:33:42.038026 kubelet[3195]: I0805 22:33:42.037997 3195 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:33:42.040810 kubelet[3195]: I0805 22:33:42.040569 3195 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:33:42.044020 kubelet[3195]: I0805 22:33:42.043458 3195 server.go:1264] "Started kubelet" Aug 5 22:33:42.051259 kubelet[3195]: I0805 22:33:42.051236 3195 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:33:42.062489 kubelet[3195]: I0805 22:33:42.062425 3195 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:33:42.063851 kubelet[3195]: I0805 22:33:42.063829 3195 server.go:455] "Adding debug handlers to kubelet server" Aug 5 22:33:42.065823 kubelet[3195]: I0805 22:33:42.065750 3195 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Aug 5 22:33:42.066128 kubelet[3195]: I0805 22:33:42.066114 3195 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:33:42.069365 kubelet[3195]: I0805 22:33:42.068513 3195 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:33:42.071510 kubelet[3195]: I0805 22:33:42.070623 3195 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Aug 5 22:33:42.071510 kubelet[3195]: I0805 22:33:42.070790 3195 reconciler.go:26] "Reconciler: start to sync state" Aug 5 22:33:42.078969 kubelet[3195]: I0805 22:33:42.078930 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:33:42.080891 kubelet[3195]: I0805 22:33:42.080865 3195 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:33:42.081894 kubelet[3195]: I0805 22:33:42.081880 3195 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:33:42.082019 kubelet[3195]: I0805 22:33:42.082006 3195 kubelet.go:2337] "Starting kubelet main sync loop" Aug 5 22:33:42.082535 kubelet[3195]: E0805 22:33:42.082126 3195 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:33:42.092365 kubelet[3195]: I0805 22:33:42.092330 3195 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:33:42.092522 kubelet[3195]: I0805 22:33:42.092428 3195 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:33:42.100532 kubelet[3195]: I0805 22:33:42.099079 3195 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178018 3195 cpu_manager.go:214] "Starting CPU 
manager" policy="none" Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178043 3195 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178065 3195 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178262 3195 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178274 3195 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:33:42.184487 kubelet[3195]: I0805 22:33:42.178298 3195 policy_none.go:49] "None policy: Start" Aug 5 22:33:42.186848 kubelet[3195]: I0805 22:33:42.186659 3195 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:33:42.186848 kubelet[3195]: I0805 22:33:42.186692 3195 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:33:42.186984 kubelet[3195]: I0805 22:33:42.186915 3195 state_mem.go:75] "Updated machine memory state" Aug 5 22:33:42.188104 kubelet[3195]: E0805 22:33:42.187606 3195 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:33:42.194514 kubelet[3195]: I0805 22:33:42.194406 3195 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-20" Aug 5 22:33:42.213653 kubelet[3195]: I0805 22:33:42.211856 3195 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:33:42.213653 kubelet[3195]: I0805 22:33:42.212244 3195 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 5 22:33:42.213653 kubelet[3195]: I0805 22:33:42.212439 3195 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:33:42.219227 kubelet[3195]: I0805 22:33:42.219145 3195 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-20" Aug 5 22:33:42.219227 kubelet[3195]: I0805 22:33:42.219228 3195 
kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-20" Aug 5 22:33:42.387895 kubelet[3195]: I0805 22:33:42.387747 3195 topology_manager.go:215] "Topology Admit Handler" podUID="bc001c6e4d4d1baee280c546c3b741da" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-20" Aug 5 22:33:42.388227 kubelet[3195]: I0805 22:33:42.388139 3195 topology_manager.go:215] "Topology Admit Handler" podUID="269c42419e39a2350b435e34555cc7da" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-20" Aug 5 22:33:42.396611 kubelet[3195]: I0805 22:33:42.393633 3195 topology_manager.go:215] "Topology Admit Handler" podUID="47ee0ec81eaa90cfd868b3a29e720361" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-20" Aug 5 22:33:42.405706 kubelet[3195]: E0805 22:33:42.404764 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-23-20\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-20" Aug 5 22:33:42.405706 kubelet[3195]: E0805 22:33:42.405317 3195 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-20\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-20" Aug 5 22:33:42.478244 kubelet[3195]: I0805 22:33:42.478113 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-ca-certs\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20" Aug 5 22:33:42.478244 kubelet[3195]: I0805 22:33:42.478158 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:42.478244 kubelet[3195]: I0805 22:33:42.478204 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:42.478244 kubelet[3195]: I0805 22:33:42.478228 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:42.478719 kubelet[3195]: I0805 22:33:42.478249 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:42.478793 kubelet[3195]: I0805 22:33:42.478737 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc001c6e4d4d1baee280c546c3b741da-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-20\" (UID: \"bc001c6e4d4d1baee280c546c3b741da\") " pod="kube-system/kube-apiserver-ip-172-31-23-20"
Aug 5 22:33:42.478793 kubelet[3195]: I0805 22:33:42.478767 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:42.478876 kubelet[3195]: I0805 22:33:42.478803 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/269c42419e39a2350b435e34555cc7da-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-20\" (UID: \"269c42419e39a2350b435e34555cc7da\") " pod="kube-system/kube-controller-manager-ip-172-31-23-20"
Aug 5 22:33:42.478876 kubelet[3195]: I0805 22:33:42.478830 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47ee0ec81eaa90cfd868b3a29e720361-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-20\" (UID: \"47ee0ec81eaa90cfd868b3a29e720361\") " pod="kube-system/kube-scheduler-ip-172-31-23-20"
Aug 5 22:33:43.034578 kubelet[3195]: I0805 22:33:43.034502 3195 apiserver.go:52] "Watching apiserver"
Aug 5 22:33:43.072488 kubelet[3195]: I0805 22:33:43.072430 3195 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Aug 5 22:33:43.253222 kubelet[3195]: I0805 22:33:43.253148 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-20" podStartSLOduration=1.253123829 podStartE2EDuration="1.253123829s" podCreationTimestamp="2024-08-05 22:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:43.21890732 +0000 UTC m=+1.301044711" watchObservedRunningTime="2024-08-05 22:33:43.253123829 +0000 UTC m=+1.335261218"
Aug 5 22:33:43.267180 kubelet[3195]: I0805 22:33:43.267122 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-20" podStartSLOduration=4.267104763 podStartE2EDuration="4.267104763s" podCreationTimestamp="2024-08-05 22:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:43.255026002 +0000 UTC m=+1.337163412" watchObservedRunningTime="2024-08-05 22:33:43.267104763 +0000 UTC m=+1.349242154"
Aug 5 22:33:43.282985 kubelet[3195]: I0805 22:33:43.282883 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-20" podStartSLOduration=3.282861035 podStartE2EDuration="3.282861035s" podCreationTimestamp="2024-08-05 22:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:43.267375849 +0000 UTC m=+1.349513244" watchObservedRunningTime="2024-08-05 22:33:43.282861035 +0000 UTC m=+1.364998429"
Aug 5 22:33:44.064854 update_engine[1946]: I0805 22:33:44.063990 1946 update_attempter.cc:509] Updating boot flags...
Aug 5 22:33:44.260675 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3247)
Aug 5 22:33:44.754155 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3250)
Aug 5 22:33:45.244497 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3250)
Aug 5 22:33:49.829702 sudo[2291]: pam_unix(sudo:session): session closed for user root
Aug 5 22:33:49.855749 sshd[2288]: pam_unix(sshd:session): session closed for user core
Aug 5 22:33:49.862034 systemd[1]: sshd@6-172.31.23.20:22-147.75.109.163:60584.service: Deactivated successfully.
Aug 5 22:33:49.864732 systemd[1]: session-7.scope: Deactivated successfully.
Aug 5 22:33:49.864955 systemd[1]: session-7.scope: Consumed 5.329s CPU time, 136.1M memory peak, 0B memory swap peak.
Aug 5 22:33:49.867290 systemd-logind[1945]: Session 7 logged out. Waiting for processes to exit.
Aug 5 22:33:49.870694 systemd-logind[1945]: Removed session 7.
Aug 5 22:33:55.984774 kubelet[3195]: I0805 22:33:55.984737 3195 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 5 22:33:56.004306 containerd[1959]: time="2024-08-05T22:33:56.004261168Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 5 22:33:56.004996 kubelet[3195]: I0805 22:33:56.004971 3195 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 5 22:33:56.764103 kubelet[3195]: I0805 22:33:56.762782 3195 topology_manager.go:215] "Topology Admit Handler" podUID="faf3803e-6583-4757-95ee-d2fd6f3894de" podNamespace="kube-system" podName="kube-proxy-g5lbm"
Aug 5 22:33:56.814001 systemd[1]: Created slice kubepods-besteffort-podfaf3803e_6583_4757_95ee_d2fd6f3894de.slice - libcontainer container kubepods-besteffort-podfaf3803e_6583_4757_95ee_d2fd6f3894de.slice.
Aug 5 22:33:56.929435 kubelet[3195]: I0805 22:33:56.929386 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/faf3803e-6583-4757-95ee-d2fd6f3894de-xtables-lock\") pod \"kube-proxy-g5lbm\" (UID: \"faf3803e-6583-4757-95ee-d2fd6f3894de\") " pod="kube-system/kube-proxy-g5lbm"
Aug 5 22:33:56.929435 kubelet[3195]: I0805 22:33:56.929443 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/faf3803e-6583-4757-95ee-d2fd6f3894de-kube-proxy\") pod \"kube-proxy-g5lbm\" (UID: \"faf3803e-6583-4757-95ee-d2fd6f3894de\") " pod="kube-system/kube-proxy-g5lbm"
Aug 5 22:33:56.929767 kubelet[3195]: I0805 22:33:56.929478 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/faf3803e-6583-4757-95ee-d2fd6f3894de-lib-modules\") pod \"kube-proxy-g5lbm\" (UID: \"faf3803e-6583-4757-95ee-d2fd6f3894de\") " pod="kube-system/kube-proxy-g5lbm"
Aug 5 22:33:56.929767 kubelet[3195]: I0805 22:33:56.929501 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f445x\" (UniqueName: \"kubernetes.io/projected/faf3803e-6583-4757-95ee-d2fd6f3894de-kube-api-access-f445x\") pod \"kube-proxy-g5lbm\" (UID: \"faf3803e-6583-4757-95ee-d2fd6f3894de\") " pod="kube-system/kube-proxy-g5lbm"
Aug 5 22:33:56.960820 kubelet[3195]: I0805 22:33:56.960179 3195 topology_manager.go:215] "Topology Admit Handler" podUID="fdba3065-78b1-4955-b9aa-322a00469934" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-hkwdv"
Aug 5 22:33:56.974527 systemd[1]: Created slice kubepods-besteffort-podfdba3065_78b1_4955_b9aa_322a00469934.slice - libcontainer container kubepods-besteffort-podfdba3065_78b1_4955_b9aa_322a00469934.slice.
Aug 5 22:33:57.030928 kubelet[3195]: I0805 22:33:57.030804 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzdtc\" (UniqueName: \"kubernetes.io/projected/fdba3065-78b1-4955-b9aa-322a00469934-kube-api-access-wzdtc\") pod \"tigera-operator-76ff79f7fd-hkwdv\" (UID: \"fdba3065-78b1-4955-b9aa-322a00469934\") " pod="tigera-operator/tigera-operator-76ff79f7fd-hkwdv"
Aug 5 22:33:57.030928 kubelet[3195]: I0805 22:33:57.030878 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fdba3065-78b1-4955-b9aa-322a00469934-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-hkwdv\" (UID: \"fdba3065-78b1-4955-b9aa-322a00469934\") " pod="tigera-operator/tigera-operator-76ff79f7fd-hkwdv"
Aug 5 22:33:57.126387 containerd[1959]: time="2024-08-05T22:33:57.126340981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g5lbm,Uid:faf3803e-6583-4757-95ee-d2fd6f3894de,Namespace:kube-system,Attempt:0,}"
Aug 5 22:33:57.175900 containerd[1959]: time="2024-08-05T22:33:57.175269398Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:33:57.175900 containerd[1959]: time="2024-08-05T22:33:57.175364695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:33:57.175900 containerd[1959]: time="2024-08-05T22:33:57.175390815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:33:57.175900 containerd[1959]: time="2024-08-05T22:33:57.175409693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:33:57.216762 systemd[1]: Started cri-containerd-08fb13cec5ee3190acd987902573fa990eae809b0372626196481d5b0e1ce9fd.scope - libcontainer container 08fb13cec5ee3190acd987902573fa990eae809b0372626196481d5b0e1ce9fd.
Aug 5 22:33:57.286367 containerd[1959]: time="2024-08-05T22:33:57.285639114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-hkwdv,Uid:fdba3065-78b1-4955-b9aa-322a00469934,Namespace:tigera-operator,Attempt:0,}"
Aug 5 22:33:57.307666 containerd[1959]: time="2024-08-05T22:33:57.306925850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g5lbm,Uid:faf3803e-6583-4757-95ee-d2fd6f3894de,Namespace:kube-system,Attempt:0,} returns sandbox id \"08fb13cec5ee3190acd987902573fa990eae809b0372626196481d5b0e1ce9fd\""
Aug 5 22:33:57.338545 containerd[1959]: time="2024-08-05T22:33:57.338431308Z" level=info msg="CreateContainer within sandbox \"08fb13cec5ee3190acd987902573fa990eae809b0372626196481d5b0e1ce9fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 5 22:33:57.366899 containerd[1959]: time="2024-08-05T22:33:57.366737754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:33:57.367772 containerd[1959]: time="2024-08-05T22:33:57.366793487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:33:57.368362 containerd[1959]: time="2024-08-05T22:33:57.368216937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:33:57.368362 containerd[1959]: time="2024-08-05T22:33:57.368255385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:33:57.385880 containerd[1959]: time="2024-08-05T22:33:57.385533291Z" level=info msg="CreateContainer within sandbox \"08fb13cec5ee3190acd987902573fa990eae809b0372626196481d5b0e1ce9fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"236505db3b1d9b775ecdd828f5296396399c50d2660a300e43f8bcb196ef103c\""
Aug 5 22:33:57.389537 containerd[1959]: time="2024-08-05T22:33:57.387743013Z" level=info msg="StartContainer for \"236505db3b1d9b775ecdd828f5296396399c50d2660a300e43f8bcb196ef103c\""
Aug 5 22:33:57.422727 systemd[1]: Started cri-containerd-112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0.scope - libcontainer container 112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0.
Aug 5 22:33:57.458550 systemd[1]: Started cri-containerd-236505db3b1d9b775ecdd828f5296396399c50d2660a300e43f8bcb196ef103c.scope - libcontainer container 236505db3b1d9b775ecdd828f5296396399c50d2660a300e43f8bcb196ef103c.
Aug 5 22:33:57.532099 containerd[1959]: time="2024-08-05T22:33:57.532029989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-hkwdv,Uid:fdba3065-78b1-4955-b9aa-322a00469934,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0\""
Aug 5 22:33:57.543619 containerd[1959]: time="2024-08-05T22:33:57.543216739Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\""
Aug 5 22:33:57.546869 containerd[1959]: time="2024-08-05T22:33:57.546296736Z" level=info msg="StartContainer for \"236505db3b1d9b775ecdd828f5296396399c50d2660a300e43f8bcb196ef103c\" returns successfully"
Aug 5 22:33:58.930758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2058988774.mount: Deactivated successfully.
Aug 5 22:33:59.880535 containerd[1959]: time="2024-08-05T22:33:59.880483524Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:59.882164 containerd[1959]: time="2024-08-05T22:33:59.881789363Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076052"
Aug 5 22:33:59.885285 containerd[1959]: time="2024-08-05T22:33:59.885201742Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:59.892733 containerd[1959]: time="2024-08-05T22:33:59.892641979Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:33:59.895947 containerd[1959]: time="2024-08-05T22:33:59.895904573Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.35264166s"
Aug 5 22:33:59.895947 containerd[1959]: time="2024-08-05T22:33:59.895945712Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\""
Aug 5 22:33:59.898987 containerd[1959]: time="2024-08-05T22:33:59.898942103Z" level=info msg="CreateContainer within sandbox \"112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 5 22:33:59.917035 containerd[1959]: time="2024-08-05T22:33:59.916989912Z" level=info msg="CreateContainer within sandbox \"112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c\""
Aug 5 22:33:59.917839 containerd[1959]: time="2024-08-05T22:33:59.917804517Z" level=info msg="StartContainer for \"9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c\""
Aug 5 22:33:59.972883 systemd[1]: run-containerd-runc-k8s.io-9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c-runc.H5IFdX.mount: Deactivated successfully.
Aug 5 22:33:59.987863 systemd[1]: Started cri-containerd-9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c.scope - libcontainer container 9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c.
Aug 5 22:34:00.161777 containerd[1959]: time="2024-08-05T22:34:00.158605650Z" level=info msg="StartContainer for \"9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c\" returns successfully"
Aug 5 22:34:00.230496 kubelet[3195]: I0805 22:34:00.229442 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-hkwdv" podStartSLOduration=1.872370021 podStartE2EDuration="4.229419095s" podCreationTimestamp="2024-08-05 22:33:56 +0000 UTC" firstStartedPulling="2024-08-05 22:33:57.539841467 +0000 UTC m=+15.621978838" lastFinishedPulling="2024-08-05 22:33:59.896890537 +0000 UTC m=+17.979027912" observedRunningTime="2024-08-05 22:34:00.225137459 +0000 UTC m=+18.307274851" watchObservedRunningTime="2024-08-05 22:34:00.229419095 +0000 UTC m=+18.311556482"
Aug 5 22:34:00.230496 kubelet[3195]: I0805 22:34:00.230002 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g5lbm" podStartSLOduration=4.229985645 podStartE2EDuration="4.229985645s" podCreationTimestamp="2024-08-05 22:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:33:58.231364734 +0000 UTC m=+16.313502125" watchObservedRunningTime="2024-08-05 22:34:00.229985645 +0000 UTC m=+18.312123036"
Aug 5 22:34:03.760143 kubelet[3195]: I0805 22:34:03.760099 3195 topology_manager.go:215] "Topology Admit Handler" podUID="5f283ce1-1f6a-47a3-bca9-2b211f031915" podNamespace="calico-system" podName="calico-typha-bffd7d85f-rgl9w"
Aug 5 22:34:03.790096 systemd[1]: Created slice kubepods-besteffort-pod5f283ce1_1f6a_47a3_bca9_2b211f031915.slice - libcontainer container kubepods-besteffort-pod5f283ce1_1f6a_47a3_bca9_2b211f031915.slice.
Aug 5 22:34:03.890504 kubelet[3195]: I0805 22:34:03.888800 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f283ce1-1f6a-47a3-bca9-2b211f031915-tigera-ca-bundle\") pod \"calico-typha-bffd7d85f-rgl9w\" (UID: \"5f283ce1-1f6a-47a3-bca9-2b211f031915\") " pod="calico-system/calico-typha-bffd7d85f-rgl9w"
Aug 5 22:34:03.890504 kubelet[3195]: I0805 22:34:03.889194 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5f283ce1-1f6a-47a3-bca9-2b211f031915-typha-certs\") pod \"calico-typha-bffd7d85f-rgl9w\" (UID: \"5f283ce1-1f6a-47a3-bca9-2b211f031915\") " pod="calico-system/calico-typha-bffd7d85f-rgl9w"
Aug 5 22:34:03.890504 kubelet[3195]: I0805 22:34:03.889248 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-749t7\" (UniqueName: \"kubernetes.io/projected/5f283ce1-1f6a-47a3-bca9-2b211f031915-kube-api-access-749t7\") pod \"calico-typha-bffd7d85f-rgl9w\" (UID: \"5f283ce1-1f6a-47a3-bca9-2b211f031915\") " pod="calico-system/calico-typha-bffd7d85f-rgl9w"
Aug 5 22:34:03.992678 kubelet[3195]: I0805 22:34:03.992635 3195 topology_manager.go:215] "Topology Admit Handler" podUID="d1108d43-3bfb-4e2e-89de-6ab0b3718d03" podNamespace="calico-system" podName="calico-node-4blkq"
Aug 5 22:34:04.018100 systemd[1]: Created slice kubepods-besteffort-podd1108d43_3bfb_4e2e_89de_6ab0b3718d03.slice - libcontainer container kubepods-besteffort-podd1108d43_3bfb_4e2e_89de_6ab0b3718d03.slice.
Aug 5 22:34:04.090581 kubelet[3195]: I0805 22:34:04.090498 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-flexvol-driver-host\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.090581 kubelet[3195]: I0805 22:34:04.090583 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-net-dir\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.091065 kubelet[3195]: I0805 22:34:04.090611 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-policysync\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.091065 kubelet[3195]: I0805 22:34:04.090657 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-run-calico\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.091065 kubelet[3195]: I0805 22:34:04.090684 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-lib-calico\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.091065 kubelet[3195]: I0805 22:34:04.090706 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-xtables-lock\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.091065 kubelet[3195]: I0805 22:34:04.090756 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpxc\" (UniqueName: \"kubernetes.io/projected/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-kube-api-access-6bpxc\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.094609 kubelet[3195]: I0805 22:34:04.090780 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-tigera-ca-bundle\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.094609 kubelet[3195]: I0805 22:34:04.090829 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-bin-dir\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.094609 kubelet[3195]: I0805 22:34:04.090851 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-log-dir\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.094609 kubelet[3195]: I0805 22:34:04.090872 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-lib-modules\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.094609 kubelet[3195]: I0805 22:34:04.090925 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-node-certs\") pod \"calico-node-4blkq\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " pod="calico-system/calico-node-4blkq"
Aug 5 22:34:04.110453 containerd[1959]: time="2024-08-05T22:34:04.110409133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bffd7d85f-rgl9w,Uid:5f283ce1-1f6a-47a3-bca9-2b211f031915,Namespace:calico-system,Attempt:0,}"
Aug 5 22:34:04.138499 kubelet[3195]: I0805 22:34:04.138418 3195 topology_manager.go:215] "Topology Admit Handler" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" podNamespace="calico-system" podName="csi-node-driver-hhrbp"
Aug 5 22:34:04.144665 kubelet[3195]: E0805 22:34:04.144616 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487"
Aug 5 22:34:04.172284 containerd[1959]: time="2024-08-05T22:34:04.171299169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:34:04.172448 containerd[1959]: time="2024-08-05T22:34:04.172327005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:04.172448 containerd[1959]: time="2024-08-05T22:34:04.172372853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:34:04.172448 containerd[1959]: time="2024-08-05T22:34:04.172402035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:04.194299 kubelet[3195]: I0805 22:34:04.194240 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/59f1ecad-9abf-4018-81f1-db05fd12b487-varrun\") pod \"csi-node-driver-hhrbp\" (UID: \"59f1ecad-9abf-4018-81f1-db05fd12b487\") " pod="calico-system/csi-node-driver-hhrbp"
Aug 5 22:34:04.199656 kubelet[3195]: I0805 22:34:04.194376 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/59f1ecad-9abf-4018-81f1-db05fd12b487-socket-dir\") pod \"csi-node-driver-hhrbp\" (UID: \"59f1ecad-9abf-4018-81f1-db05fd12b487\") " pod="calico-system/csi-node-driver-hhrbp"
Aug 5 22:34:04.199656 kubelet[3195]: I0805 22:34:04.194418 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/59f1ecad-9abf-4018-81f1-db05fd12b487-registration-dir\") pod \"csi-node-driver-hhrbp\" (UID: \"59f1ecad-9abf-4018-81f1-db05fd12b487\") " pod="calico-system/csi-node-driver-hhrbp"
Aug 5 22:34:04.199656 kubelet[3195]: I0805 22:34:04.194443 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmxk5\" (UniqueName: \"kubernetes.io/projected/59f1ecad-9abf-4018-81f1-db05fd12b487-kube-api-access-tmxk5\") pod \"csi-node-driver-hhrbp\" (UID: \"59f1ecad-9abf-4018-81f1-db05fd12b487\") " pod="calico-system/csi-node-driver-hhrbp"
Aug 5 22:34:04.199656 kubelet[3195]: I0805 22:34:04.196732 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/59f1ecad-9abf-4018-81f1-db05fd12b487-kubelet-dir\") pod \"csi-node-driver-hhrbp\" (UID: \"59f1ecad-9abf-4018-81f1-db05fd12b487\") " pod="calico-system/csi-node-driver-hhrbp"
Aug 5 22:34:04.234635 kubelet[3195]: E0805 22:34:04.233205 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.234635 kubelet[3195]: W0805 22:34:04.233249 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.234635 kubelet[3195]: E0805 22:34:04.233290 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.266943 systemd[1]: Started cri-containerd-121a45961db86360cd9368a880bc84f368f428441995838c1125ff1049d47a16.scope - libcontainer container 121a45961db86360cd9368a880bc84f368f428441995838c1125ff1049d47a16.
Aug 5 22:34:04.284515 kubelet[3195]: E0805 22:34:04.282799 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.284515 kubelet[3195]: W0805 22:34:04.282828 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.284515 kubelet[3195]: E0805 22:34:04.282852 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.298227 kubelet[3195]: E0805 22:34:04.298188 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.298227 kubelet[3195]: W0805 22:34:04.298217 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.298437 kubelet[3195]: E0805 22:34:04.298242 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.298757 kubelet[3195]: E0805 22:34:04.298624 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.298757 kubelet[3195]: W0805 22:34:04.298637 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.298757 kubelet[3195]: E0805 22:34:04.298695 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.299177 kubelet[3195]: E0805 22:34:04.299104 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.299177 kubelet[3195]: W0805 22:34:04.299120 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.299177 kubelet[3195]: E0805 22:34:04.299136 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.300134 kubelet[3195]: E0805 22:34:04.299455 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.300134 kubelet[3195]: W0805 22:34:04.299498 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.300134 kubelet[3195]: E0805 22:34:04.299514 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.301692 kubelet[3195]: E0805 22:34:04.301673 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.301692 kubelet[3195]: W0805 22:34:04.301691 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.301829 kubelet[3195]: E0805 22:34:04.301708 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.302078 kubelet[3195]: E0805 22:34:04.302060 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.302078 kubelet[3195]: W0805 22:34:04.302078 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.302334 kubelet[3195]: E0805 22:34:04.302111 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.302386 kubelet[3195]: E0805 22:34:04.302373 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.302386 kubelet[3195]: W0805 22:34:04.302383 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.304390 kubelet[3195]: E0805 22:34:04.302412 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.304390 kubelet[3195]: E0805 22:34:04.303334 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.304390 kubelet[3195]: W0805 22:34:04.303346 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.305350 kubelet[3195]: E0805 22:34:04.304514 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.305515 kubelet[3195]: E0805 22:34:04.305490 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.305515 kubelet[3195]: W0805 22:34:04.305508 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.305842 kubelet[3195]: E0805 22:34:04.305541 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.305842 kubelet[3195]: E0805 22:34:04.305806 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.305842 kubelet[3195]: W0805 22:34:04.305817 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.306866 kubelet[3195]: E0805 22:34:04.306845 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.307496 kubelet[3195]: E0805 22:34:04.307105 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.307496 kubelet[3195]: W0805 22:34:04.307126 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.307496 kubelet[3195]: E0805 22:34:04.307209 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.307496 kubelet[3195]: E0805 22:34:04.307444 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.307496 kubelet[3195]: W0805 22:34:04.307456 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.307816 kubelet[3195]: E0805 22:34:04.307564 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.307816 kubelet[3195]: E0805 22:34:04.307810 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.307907 kubelet[3195]: W0805 22:34:04.307820 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.308017 kubelet[3195]: E0805 22:34:04.307906 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 5 22:34:04.308500 kubelet[3195]: E0805 22:34:04.308180 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 5 22:34:04.308500 kubelet[3195]: W0805 22:34:04.308192 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 5 22:34:04.308500 kubelet[3195]: E0805 22:34:04.308210 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:34:04.309300 kubelet[3195]: E0805 22:34:04.309279 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.309300 kubelet[3195]: W0805 22:34:04.309298 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.309416 kubelet[3195]: E0805 22:34:04.309383 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.309760 kubelet[3195]: E0805 22:34:04.309715 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.309760 kubelet[3195]: W0805 22:34:04.309727 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.309871 kubelet[3195]: E0805 22:34:04.309831 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:34:04.310522 kubelet[3195]: E0805 22:34:04.310007 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.310522 kubelet[3195]: W0805 22:34:04.310019 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.310522 kubelet[3195]: E0805 22:34:04.310104 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.314925 kubelet[3195]: E0805 22:34:04.314240 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.314925 kubelet[3195]: W0805 22:34:04.314259 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.314925 kubelet[3195]: E0805 22:34:04.314603 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.314925 kubelet[3195]: W0805 22:34:04.314614 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.315193 kubelet[3195]: E0805 22:34:04.315052 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.315193 kubelet[3195]: W0805 22:34:04.315063 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Aug 5 22:34:04.315193 kubelet[3195]: E0805 22:34:04.315082 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.316018 kubelet[3195]: E0805 22:34:04.315996 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.316018 kubelet[3195]: W0805 22:34:04.316014 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.316217 kubelet[3195]: E0805 22:34:04.316080 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.320858 kubelet[3195]: E0805 22:34:04.320818 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.320858 kubelet[3195]: E0805 22:34:04.320853 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:34:04.321036 kubelet[3195]: E0805 22:34:04.320954 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.321036 kubelet[3195]: W0805 22:34:04.320966 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.321036 kubelet[3195]: E0805 22:34:04.320983 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.321813 kubelet[3195]: E0805 22:34:04.321794 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.321813 kubelet[3195]: W0805 22:34:04.321811 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.321962 kubelet[3195]: E0805 22:34:04.321834 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:34:04.322453 kubelet[3195]: E0805 22:34:04.322318 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.322453 kubelet[3195]: W0805 22:34:04.322334 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.322748 kubelet[3195]: E0805 22:34:04.322520 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.323491 kubelet[3195]: E0805 22:34:04.323214 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.323491 kubelet[3195]: W0805 22:34:04.323231 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.323491 kubelet[3195]: E0805 22:34:04.323308 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:34:04.353574 containerd[1959]: time="2024-08-05T22:34:04.353291881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4blkq,Uid:d1108d43-3bfb-4e2e-89de-6ab0b3718d03,Namespace:calico-system,Attempt:0,}" Aug 5 22:34:04.374764 kubelet[3195]: E0805 22:34:04.374732 3195 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:34:04.374764 kubelet[3195]: W0805 22:34:04.374762 3195 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:34:04.375063 kubelet[3195]: E0805 22:34:04.374790 3195 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:34:04.408798 containerd[1959]: time="2024-08-05T22:34:04.408174123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:34:04.409438 containerd[1959]: time="2024-08-05T22:34:04.408748663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:04.409438 containerd[1959]: time="2024-08-05T22:34:04.408920679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:34:04.409438 containerd[1959]: time="2024-08-05T22:34:04.409059421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:04.463217 systemd[1]: Started cri-containerd-5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892.scope - libcontainer container 5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892. 
Aug 5 22:34:04.607851 containerd[1959]: time="2024-08-05T22:34:04.606037669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4blkq,Uid:d1108d43-3bfb-4e2e-89de-6ab0b3718d03,Namespace:calico-system,Attempt:0,} returns sandbox id \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\"" Aug 5 22:34:04.622536 containerd[1959]: time="2024-08-05T22:34:04.622478775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:34:04.681945 containerd[1959]: time="2024-08-05T22:34:04.681902047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bffd7d85f-rgl9w,Uid:5f283ce1-1f6a-47a3-bca9-2b211f031915,Namespace:calico-system,Attempt:0,} returns sandbox id \"121a45961db86360cd9368a880bc84f368f428441995838c1125ff1049d47a16\"" Aug 5 22:34:06.084374 kubelet[3195]: E0805 22:34:06.083455 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:06.341329 containerd[1959]: time="2024-08-05T22:34:06.340458034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:06.344396 containerd[1959]: time="2024-08-05T22:34:06.344239535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:34:06.346320 containerd[1959]: time="2024-08-05T22:34:06.345331037Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:06.351633 containerd[1959]: time="2024-08-05T22:34:06.351504252Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:06.354130 containerd[1959]: time="2024-08-05T22:34:06.354006195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.731230617s" Aug 5 22:34:06.354130 containerd[1959]: time="2024-08-05T22:34:06.354114121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:34:06.357227 containerd[1959]: time="2024-08-05T22:34:06.357039350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:34:06.359631 containerd[1959]: time="2024-08-05T22:34:06.359406402Z" level=info msg="CreateContainer within sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:34:06.395786 containerd[1959]: time="2024-08-05T22:34:06.395719714Z" level=info msg="CreateContainer within sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\"" Aug 5 22:34:06.397877 containerd[1959]: time="2024-08-05T22:34:06.397423881Z" level=info msg="StartContainer for \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\"" Aug 5 22:34:06.484121 systemd[1]: run-containerd-runc-k8s.io-61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032-runc.m5eBmR.mount: 
Deactivated successfully. Aug 5 22:34:06.498149 systemd[1]: Started cri-containerd-61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032.scope - libcontainer container 61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032. Aug 5 22:34:06.564301 containerd[1959]: time="2024-08-05T22:34:06.564077097Z" level=info msg="StartContainer for \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\" returns successfully" Aug 5 22:34:06.597142 systemd[1]: cri-containerd-61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032.scope: Deactivated successfully. Aug 5 22:34:06.687565 containerd[1959]: time="2024-08-05T22:34:06.676189476Z" level=info msg="shim disconnected" id=61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032 namespace=k8s.io Aug 5 22:34:06.687565 containerd[1959]: time="2024-08-05T22:34:06.687413221Z" level=warning msg="cleaning up after shim disconnected" id=61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032 namespace=k8s.io Aug 5 22:34:06.687565 containerd[1959]: time="2024-08-05T22:34:06.687443318Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:34:06.724138 containerd[1959]: time="2024-08-05T22:34:06.723635967Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:34:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:34:07.241216 containerd[1959]: time="2024-08-05T22:34:07.241046092Z" level=info msg="StopPodSandbox for \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\"" Aug 5 22:34:07.258355 containerd[1959]: time="2024-08-05T22:34:07.241097230Z" level=info msg="Container to stop \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 5 22:34:07.271196 systemd[1]: 
cri-containerd-5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892.scope: Deactivated successfully. Aug 5 22:34:07.339968 containerd[1959]: time="2024-08-05T22:34:07.337629173Z" level=info msg="shim disconnected" id=5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892 namespace=k8s.io Aug 5 22:34:07.340905 containerd[1959]: time="2024-08-05T22:34:07.340431575Z" level=warning msg="cleaning up after shim disconnected" id=5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892 namespace=k8s.io Aug 5 22:34:07.340905 containerd[1959]: time="2024-08-05T22:34:07.340483181Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:34:07.369112 containerd[1959]: time="2024-08-05T22:34:07.368352282Z" level=info msg="TearDown network for sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" successfully" Aug 5 22:34:07.369112 containerd[1959]: time="2024-08-05T22:34:07.368389676Z" level=info msg="StopPodSandbox for \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" returns successfully" Aug 5 22:34:07.380664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032-rootfs.mount: Deactivated successfully. Aug 5 22:34:07.380876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892-rootfs.mount: Deactivated successfully. Aug 5 22:34:07.380963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892-shm.mount: Deactivated successfully. 
Aug 5 22:34:07.562828 kubelet[3195]: I0805 22:34:07.562716 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-node-certs\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.562828 kubelet[3195]: I0805 22:34:07.562762 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-lib-modules\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.562828 kubelet[3195]: I0805 22:34:07.562786 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-lib-calico\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.562828 kubelet[3195]: I0805 22:34:07.562808 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-bin-dir\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562834 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-flexvol-driver-host\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562855 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-policysync\") pod 
\"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562872 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-log-dir\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562892 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-net-dir\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562919 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-tigera-ca-bundle\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.563922 kubelet[3195]: I0805 22:34:07.562945 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-run-calico\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.564262 kubelet[3195]: I0805 22:34:07.562970 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-xtables-lock\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.564262 kubelet[3195]: I0805 22:34:07.563401 3195 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpxc\" 
(UniqueName: \"kubernetes.io/projected/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-kube-api-access-6bpxc\") pod \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\" (UID: \"d1108d43-3bfb-4e2e-89de-6ab0b3718d03\") " Aug 5 22:34:07.564262 kubelet[3195]: I0805 22:34:07.563580 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-policysync" (OuterVolumeSpecName: "policysync") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.567915 kubelet[3195]: I0805 22:34:07.567862 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568043 kubelet[3195]: I0805 22:34:07.567927 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568550 kubelet[3195]: I0805 22:34:07.568349 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 5 22:34:07.568550 kubelet[3195]: I0805 22:34:07.568402 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568550 kubelet[3195]: I0805 22:34:07.568426 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568550 kubelet[3195]: I0805 22:34:07.568449 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568550 kubelet[3195]: I0805 22:34:07.568487 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568829 kubelet[3195]: I0805 22:34:07.568511 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.568829 kubelet[3195]: I0805 22:34:07.568520 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 5 22:34:07.573827 systemd[1]: var-lib-kubelet-pods-d1108d43\x2d3bfb\x2d4e2e\x2d89de\x2d6ab0b3718d03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6bpxc.mount: Deactivated successfully. Aug 5 22:34:07.577870 kubelet[3195]: I0805 22:34:07.573897 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-kube-api-access-6bpxc" (OuterVolumeSpecName: "kube-api-access-6bpxc") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "kube-api-access-6bpxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 5 22:34:07.577870 kubelet[3195]: I0805 22:34:07.574278 3195 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-node-certs" (OuterVolumeSpecName: "node-certs") pod "d1108d43-3bfb-4e2e-89de-6ab0b3718d03" (UID: "d1108d43-3bfb-4e2e-89de-6ab0b3718d03"). InnerVolumeSpecName "node-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 5 22:34:07.573973 systemd[1]: var-lib-kubelet-pods-d1108d43\x2d3bfb\x2d4e2e\x2d89de\x2d6ab0b3718d03-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663802 3195 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-flexvol-driver-host\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663841 3195 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-policysync\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663855 3195 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-log-dir\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663867 3195 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-net-dir\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663879 3195 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-tigera-ca-bundle\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663891 3195 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-run-calico\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.663903 kubelet[3195]: I0805 22:34:07.663922 3195 reconciler_common.go:289] "Volume detached for 
volume \"kube-api-access-6bpxc\" (UniqueName: \"kubernetes.io/projected/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-kube-api-access-6bpxc\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.664661 kubelet[3195]: I0805 22:34:07.663933 3195 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-xtables-lock\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.664661 kubelet[3195]: I0805 22:34:07.663976 3195 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-lib-modules\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.664661 kubelet[3195]: I0805 22:34:07.663998 3195 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-node-certs\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.664661 kubelet[3195]: I0805 22:34:07.664011 3195 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-var-lib-calico\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:07.664661 kubelet[3195]: I0805 22:34:07.664083 3195 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d1108d43-3bfb-4e2e-89de-6ab0b3718d03-cni-bin-dir\") on node \"ip-172-31-23-20\" DevicePath \"\"" Aug 5 22:34:08.083120 kubelet[3195]: E0805 22:34:08.083023 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:08.121057 systemd[1]: Removed slice 
kubepods-besteffort-podd1108d43_3bfb_4e2e_89de_6ab0b3718d03.slice - libcontainer container kubepods-besteffort-podd1108d43_3bfb_4e2e_89de_6ab0b3718d03.slice. Aug 5 22:34:08.248905 kubelet[3195]: I0805 22:34:08.247943 3195 scope.go:117] "RemoveContainer" containerID="61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032" Aug 5 22:34:08.263587 containerd[1959]: time="2024-08-05T22:34:08.263200066Z" level=info msg="RemoveContainer for \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\"" Aug 5 22:34:08.346209 containerd[1959]: time="2024-08-05T22:34:08.346064625Z" level=info msg="RemoveContainer for \"61f8f2c6b26b14cda0c1ab439fe6d5fd09308908748c33a22e01c1d5a816e032\" returns successfully" Aug 5 22:34:08.359654 kubelet[3195]: I0805 22:34:08.359107 3195 topology_manager.go:215] "Topology Admit Handler" podUID="c362d00e-9713-4fd9-87f9-487eb8e2ccd8" podNamespace="calico-system" podName="calico-node-lmdrt" Aug 5 22:34:08.360540 kubelet[3195]: E0805 22:34:08.360514 3195 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d1108d43-3bfb-4e2e-89de-6ab0b3718d03" containerName="flexvol-driver" Aug 5 22:34:08.360689 kubelet[3195]: I0805 22:34:08.360677 3195 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1108d43-3bfb-4e2e-89de-6ab0b3718d03" containerName="flexvol-driver" Aug 5 22:34:08.376082 systemd[1]: Created slice kubepods-besteffort-podc362d00e_9713_4fd9_87f9_487eb8e2ccd8.slice - libcontainer container kubepods-besteffort-podc362d00e_9713_4fd9_87f9_487eb8e2ccd8.slice. 
Aug 5 22:34:08.478404 kubelet[3195]: I0805 22:34:08.476888 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-policysync\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478404 kubelet[3195]: I0805 22:34:08.476973 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-var-lib-calico\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478404 kubelet[3195]: I0805 22:34:08.477001 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-tigera-ca-bundle\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478404 kubelet[3195]: I0805 22:34:08.477040 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqpx\" (UniqueName: \"kubernetes.io/projected/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-kube-api-access-lvqpx\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478404 kubelet[3195]: I0805 22:34:08.477112 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-var-run-calico\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478980 kubelet[3195]: I0805 22:34:08.477139 
3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-xtables-lock\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478980 kubelet[3195]: I0805 22:34:08.477162 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-cni-bin-dir\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478980 kubelet[3195]: I0805 22:34:08.477200 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-cni-log-dir\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478980 kubelet[3195]: I0805 22:34:08.477223 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-flexvol-driver-host\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.478980 kubelet[3195]: I0805 22:34:08.477259 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-lib-modules\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.479272 kubelet[3195]: I0805 22:34:08.477289 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-node-certs\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.479272 kubelet[3195]: I0805 22:34:08.477412 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c362d00e-9713-4fd9-87f9-487eb8e2ccd8-cni-net-dir\") pod \"calico-node-lmdrt\" (UID: \"c362d00e-9713-4fd9-87f9-487eb8e2ccd8\") " pod="calico-system/calico-node-lmdrt" Aug 5 22:34:08.690569 containerd[1959]: time="2024-08-05T22:34:08.690206748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lmdrt,Uid:c362d00e-9713-4fd9-87f9-487eb8e2ccd8,Namespace:calico-system,Attempt:0,}" Aug 5 22:34:08.792535 containerd[1959]: time="2024-08-05T22:34:08.792172073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:34:08.792535 containerd[1959]: time="2024-08-05T22:34:08.792238314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:08.792535 containerd[1959]: time="2024-08-05T22:34:08.792275951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:34:08.792535 containerd[1959]: time="2024-08-05T22:34:08.792289241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:08.925772 systemd[1]: Started cri-containerd-7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f.scope - libcontainer container 7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f. 
Aug 5 22:34:09.221689 containerd[1959]: time="2024-08-05T22:34:09.221558015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lmdrt,Uid:c362d00e-9713-4fd9-87f9-487eb8e2ccd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\"" Aug 5 22:34:09.232986 containerd[1959]: time="2024-08-05T22:34:09.232927908Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:34:09.279106 containerd[1959]: time="2024-08-05T22:34:09.279008157Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249\"" Aug 5 22:34:09.281599 containerd[1959]: time="2024-08-05T22:34:09.280430976Z" level=info msg="StartContainer for \"65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249\"" Aug 5 22:34:09.385365 systemd[1]: Started cri-containerd-65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249.scope - libcontainer container 65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249. Aug 5 22:34:09.472039 containerd[1959]: time="2024-08-05T22:34:09.471344231Z" level=info msg="StartContainer for \"65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249\" returns successfully" Aug 5 22:34:09.556504 systemd[1]: cri-containerd-65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249.scope: Deactivated successfully. Aug 5 22:34:09.652087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249-rootfs.mount: Deactivated successfully. 
Aug 5 22:34:09.788618 containerd[1959]: time="2024-08-05T22:34:09.788047453Z" level=info msg="shim disconnected" id=65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249 namespace=k8s.io Aug 5 22:34:09.788618 containerd[1959]: time="2024-08-05T22:34:09.788114677Z" level=warning msg="cleaning up after shim disconnected" id=65da3c27c257f63546c3e0d9ab1da3d0b8fffd0b8f1d318bbbde49bd43344249 namespace=k8s.io Aug 5 22:34:09.788618 containerd[1959]: time="2024-08-05T22:34:09.788125999Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:34:09.828046 containerd[1959]: time="2024-08-05T22:34:09.825861156Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:34:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:34:09.909488 containerd[1959]: time="2024-08-05T22:34:09.908882584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:09.910676 containerd[1959]: time="2024-08-05T22:34:09.910613713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:34:09.910997 containerd[1959]: time="2024-08-05T22:34:09.910969733Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:09.915127 containerd[1959]: time="2024-08-05T22:34:09.915070527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:09.917862 containerd[1959]: time="2024-08-05T22:34:09.917778322Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id 
\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.560581807s" Aug 5 22:34:09.918035 containerd[1959]: time="2024-08-05T22:34:09.917867135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:34:09.946808 containerd[1959]: time="2024-08-05T22:34:09.946758970Z" level=info msg="CreateContainer within sandbox \"121a45961db86360cd9368a880bc84f368f428441995838c1125ff1049d47a16\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:34:09.978353 containerd[1959]: time="2024-08-05T22:34:09.978302495Z" level=info msg="CreateContainer within sandbox \"121a45961db86360cd9368a880bc84f368f428441995838c1125ff1049d47a16\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d301278525ae3a65de11ac9780176fc9ef044cd17e6892950d5119e219cd223f\"" Aug 5 22:34:09.982062 containerd[1959]: time="2024-08-05T22:34:09.982002959Z" level=info msg="StartContainer for \"d301278525ae3a65de11ac9780176fc9ef044cd17e6892950d5119e219cd223f\"" Aug 5 22:34:10.034961 systemd[1]: Started cri-containerd-d301278525ae3a65de11ac9780176fc9ef044cd17e6892950d5119e219cd223f.scope - libcontainer container d301278525ae3a65de11ac9780176fc9ef044cd17e6892950d5119e219cd223f. 
Aug 5 22:34:10.084841 kubelet[3195]: E0805 22:34:10.084714 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:10.090562 kubelet[3195]: I0805 22:34:10.090525 3195 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1108d43-3bfb-4e2e-89de-6ab0b3718d03" path="/var/lib/kubelet/pods/d1108d43-3bfb-4e2e-89de-6ab0b3718d03/volumes" Aug 5 22:34:10.131302 containerd[1959]: time="2024-08-05T22:34:10.131232789Z" level=info msg="StartContainer for \"d301278525ae3a65de11ac9780176fc9ef044cd17e6892950d5119e219cd223f\" returns successfully" Aug 5 22:34:10.272262 containerd[1959]: time="2024-08-05T22:34:10.271411339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:34:10.630155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2808281623.mount: Deactivated successfully. 
Aug 5 22:34:11.277608 kubelet[3195]: I0805 22:34:11.277575 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:34:12.084770 kubelet[3195]: E0805 22:34:12.084606 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:14.085462 kubelet[3195]: E0805 22:34:14.084726 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:16.083396 kubelet[3195]: E0805 22:34:16.083104 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:16.367130 containerd[1959]: time="2024-08-05T22:34:16.366983572Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:16.368293 containerd[1959]: time="2024-08-05T22:34:16.368242230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:34:16.369490 containerd[1959]: time="2024-08-05T22:34:16.369133283Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:16.372270 containerd[1959]: 
time="2024-08-05T22:34:16.372233224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:16.373357 containerd[1959]: time="2024-08-05T22:34:16.373320319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.101859381s" Aug 5 22:34:16.373443 containerd[1959]: time="2024-08-05T22:34:16.373362215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:34:16.379224 containerd[1959]: time="2024-08-05T22:34:16.379179658Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:34:16.479818 containerd[1959]: time="2024-08-05T22:34:16.479761840Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc\"" Aug 5 22:34:16.480677 containerd[1959]: time="2024-08-05T22:34:16.480645631Z" level=info msg="StartContainer for \"c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc\"" Aug 5 22:34:16.630268 systemd[1]: run-containerd-runc-k8s.io-c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc-runc.mGHyj6.mount: Deactivated successfully. 
Aug 5 22:34:16.639794 systemd[1]: Started cri-containerd-c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc.scope - libcontainer container c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc. Aug 5 22:34:16.718071 containerd[1959]: time="2024-08-05T22:34:16.717018248Z" level=info msg="StartContainer for \"c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc\" returns successfully" Aug 5 22:34:17.333491 kubelet[3195]: I0805 22:34:17.333327 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bffd7d85f-rgl9w" podStartSLOduration=9.104916213 podStartE2EDuration="14.33330685s" podCreationTimestamp="2024-08-05 22:34:03 +0000 UTC" firstStartedPulling="2024-08-05 22:34:04.690700104 +0000 UTC m=+22.772837483" lastFinishedPulling="2024-08-05 22:34:09.919090744 +0000 UTC m=+28.001228120" observedRunningTime="2024-08-05 22:34:10.363523002 +0000 UTC m=+28.445660395" watchObservedRunningTime="2024-08-05 22:34:17.33330685 +0000 UTC m=+35.415444243" Aug 5 22:34:17.495583 systemd[1]: cri-containerd-c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc.scope: Deactivated successfully. Aug 5 22:34:17.535238 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc-rootfs.mount: Deactivated successfully. 
Aug 5 22:34:17.541011 containerd[1959]: time="2024-08-05T22:34:17.540943137Z" level=info msg="shim disconnected" id=c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc namespace=k8s.io Aug 5 22:34:17.541011 containerd[1959]: time="2024-08-05T22:34:17.541013737Z" level=warning msg="cleaning up after shim disconnected" id=c2268d9e74852aa500d63d5ca1ad5a57e0b59fd80d875e4f4b9569a14cf078fc namespace=k8s.io Aug 5 22:34:17.541776 containerd[1959]: time="2024-08-05T22:34:17.541025534Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:34:17.557493 containerd[1959]: time="2024-08-05T22:34:17.557390288Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:34:17Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:34:17.587080 kubelet[3195]: I0805 22:34:17.584989 3195 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:34:17.621277 kubelet[3195]: I0805 22:34:17.621153 3195 topology_manager.go:215] "Topology Admit Handler" podUID="9e485b7a-0228-414a-bfbb-32f1bef8c0b6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rrt8r" Aug 5 22:34:17.626735 kubelet[3195]: I0805 22:34:17.625155 3195 topology_manager.go:215] "Topology Admit Handler" podUID="df8aa33e-ef06-4b03-a2b2-eefc0f040acf" podNamespace="calico-system" podName="calico-kube-controllers-69cd57f8df-xq8fb" Aug 5 22:34:17.628930 kubelet[3195]: I0805 22:34:17.628702 3195 topology_manager.go:215] "Topology Admit Handler" podUID="e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dqhhw" Aug 5 22:34:17.641696 systemd[1]: Created slice kubepods-burstable-pod9e485b7a_0228_414a_bfbb_32f1bef8c0b6.slice - libcontainer container kubepods-burstable-pod9e485b7a_0228_414a_bfbb_32f1bef8c0b6.slice. 
Aug 5 22:34:17.656349 systemd[1]: Created slice kubepods-besteffort-poddf8aa33e_ef06_4b03_a2b2_eefc0f040acf.slice - libcontainer container kubepods-besteffort-poddf8aa33e_ef06_4b03_a2b2_eefc0f040acf.slice. Aug 5 22:34:17.666407 systemd[1]: Created slice kubepods-burstable-pode92fcae9_7ce9_48eb_b1fb_347bcc3d67f7.slice - libcontainer container kubepods-burstable-pode92fcae9_7ce9_48eb_b1fb_347bcc3d67f7.slice. Aug 5 22:34:17.674515 kubelet[3195]: I0805 22:34:17.674447 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct9s7\" (UniqueName: \"kubernetes.io/projected/df8aa33e-ef06-4b03-a2b2-eefc0f040acf-kube-api-access-ct9s7\") pod \"calico-kube-controllers-69cd57f8df-xq8fb\" (UID: \"df8aa33e-ef06-4b03-a2b2-eefc0f040acf\") " pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" Aug 5 22:34:17.674759 kubelet[3195]: I0805 22:34:17.674524 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7-config-volume\") pod \"coredns-7db6d8ff4d-dqhhw\" (UID: \"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7\") " pod="kube-system/coredns-7db6d8ff4d-dqhhw" Aug 5 22:34:17.674759 kubelet[3195]: I0805 22:34:17.674574 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e485b7a-0228-414a-bfbb-32f1bef8c0b6-config-volume\") pod \"coredns-7db6d8ff4d-rrt8r\" (UID: \"9e485b7a-0228-414a-bfbb-32f1bef8c0b6\") " pod="kube-system/coredns-7db6d8ff4d-rrt8r" Aug 5 22:34:17.674759 kubelet[3195]: I0805 22:34:17.674602 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcv6x\" (UniqueName: \"kubernetes.io/projected/9e485b7a-0228-414a-bfbb-32f1bef8c0b6-kube-api-access-vcv6x\") pod \"coredns-7db6d8ff4d-rrt8r\" (UID: 
\"9e485b7a-0228-414a-bfbb-32f1bef8c0b6\") " pod="kube-system/coredns-7db6d8ff4d-rrt8r" Aug 5 22:34:17.674759 kubelet[3195]: I0805 22:34:17.674637 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tsdg\" (UniqueName: \"kubernetes.io/projected/e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7-kube-api-access-6tsdg\") pod \"coredns-7db6d8ff4d-dqhhw\" (UID: \"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7\") " pod="kube-system/coredns-7db6d8ff4d-dqhhw" Aug 5 22:34:17.674759 kubelet[3195]: I0805 22:34:17.674679 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/df8aa33e-ef06-4b03-a2b2-eefc0f040acf-tigera-ca-bundle\") pod \"calico-kube-controllers-69cd57f8df-xq8fb\" (UID: \"df8aa33e-ef06-4b03-a2b2-eefc0f040acf\") " pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" Aug 5 22:34:17.969507 containerd[1959]: time="2024-08-05T22:34:17.969436598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd57f8df-xq8fb,Uid:df8aa33e-ef06-4b03-a2b2-eefc0f040acf,Namespace:calico-system,Attempt:0,}" Aug 5 22:34:17.969977 containerd[1959]: time="2024-08-05T22:34:17.969460486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrt8r,Uid:9e485b7a-0228-414a-bfbb-32f1bef8c0b6,Namespace:kube-system,Attempt:0,}" Aug 5 22:34:17.975503 containerd[1959]: time="2024-08-05T22:34:17.974682876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqhhw,Uid:e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7,Namespace:kube-system,Attempt:0,}" Aug 5 22:34:18.093289 systemd[1]: Created slice kubepods-besteffort-pod59f1ecad_9abf_4018_81f1_db05fd12b487.slice - libcontainer container kubepods-besteffort-pod59f1ecad_9abf_4018_81f1_db05fd12b487.slice. 
Aug 5 22:34:18.100221 containerd[1959]: time="2024-08-05T22:34:18.099762904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhrbp,Uid:59f1ecad-9abf-4018-81f1-db05fd12b487,Namespace:calico-system,Attempt:0,}" Aug 5 22:34:18.271505 containerd[1959]: time="2024-08-05T22:34:18.271368417Z" level=error msg="Failed to destroy network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.279567 containerd[1959]: time="2024-08-05T22:34:18.279506713Z" level=error msg="encountered an error cleaning up failed sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.279879 containerd[1959]: time="2024-08-05T22:34:18.279842666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrt8r,Uid:9e485b7a-0228-414a-bfbb-32f1bef8c0b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.280236 kubelet[3195]: E0805 22:34:18.280191 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.280445 kubelet[3195]: E0805 22:34:18.280269 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rrt8r" Aug 5 22:34:18.280545 kubelet[3195]: E0805 22:34:18.280455 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rrt8r" Aug 5 22:34:18.280645 kubelet[3195]: E0805 22:34:18.280542 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rrt8r_kube-system(9e485b7a-0228-414a-bfbb-32f1bef8c0b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rrt8r_kube-system(9e485b7a-0228-414a-bfbb-32f1bef8c0b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rrt8r" podUID="9e485b7a-0228-414a-bfbb-32f1bef8c0b6" Aug 5 22:34:18.285655 containerd[1959]: time="2024-08-05T22:34:18.285510175Z" level=error msg="Failed to destroy network for sandbox 
\"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.286232 containerd[1959]: time="2024-08-05T22:34:18.286103906Z" level=error msg="encountered an error cleaning up failed sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.286232 containerd[1959]: time="2024-08-05T22:34:18.286173850Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhrbp,Uid:59f1ecad-9abf-4018-81f1-db05fd12b487,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.287506 kubelet[3195]: E0805 22:34:18.286572 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.287506 kubelet[3195]: E0805 22:34:18.286631 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhrbp" Aug 5 22:34:18.287506 kubelet[3195]: E0805 22:34:18.286657 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hhrbp" Aug 5 22:34:18.287919 kubelet[3195]: E0805 22:34:18.286705 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hhrbp_calico-system(59f1ecad-9abf-4018-81f1-db05fd12b487)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hhrbp_calico-system(59f1ecad-9abf-4018-81f1-db05fd12b487)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:18.301012 containerd[1959]: time="2024-08-05T22:34:18.300946694Z" level=error msg="Failed to destroy network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.301591 containerd[1959]: time="2024-08-05T22:34:18.301553737Z" level=error msg="encountered an error cleaning up failed sandbox 
\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.302100 containerd[1959]: time="2024-08-05T22:34:18.302066069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd57f8df-xq8fb,Uid:df8aa33e-ef06-4b03-a2b2-eefc0f040acf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.302731 kubelet[3195]: E0805 22:34:18.302696 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.302899 kubelet[3195]: E0805 22:34:18.302876 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" Aug 5 22:34:18.302994 kubelet[3195]: E0805 22:34:18.302975 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" Aug 5 22:34:18.303128 kubelet[3195]: E0805 22:34:18.303098 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69cd57f8df-xq8fb_calico-system(df8aa33e-ef06-4b03-a2b2-eefc0f040acf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69cd57f8df-xq8fb_calico-system(df8aa33e-ef06-4b03-a2b2-eefc0f040acf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" podUID="df8aa33e-ef06-4b03-a2b2-eefc0f040acf" Aug 5 22:34:18.309518 containerd[1959]: time="2024-08-05T22:34:18.307371050Z" level=error msg="Failed to destroy network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.309518 containerd[1959]: time="2024-08-05T22:34:18.307974848Z" level=error msg="encountered an error cleaning up failed sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 5 22:34:18.309518 containerd[1959]: time="2024-08-05T22:34:18.308138977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqhhw,Uid:e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.309753 kubelet[3195]: E0805 22:34:18.308610 3195 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.309753 kubelet[3195]: E0805 22:34:18.308662 3195 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dqhhw" Aug 5 22:34:18.309753 kubelet[3195]: E0805 22:34:18.308701 3195 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dqhhw" Aug 5 
22:34:18.309878 kubelet[3195]: E0805 22:34:18.308754 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dqhhw_kube-system(e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dqhhw_kube-system(e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dqhhw" podUID="e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7" Aug 5 22:34:18.311564 kubelet[3195]: I0805 22:34:18.311537 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:18.312630 containerd[1959]: time="2024-08-05T22:34:18.312420195Z" level=info msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\"" Aug 5 22:34:18.316802 containerd[1959]: time="2024-08-05T22:34:18.316655919Z" level=info msg="Ensure that sandbox 452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf in task-service has been cleanup successfully" Aug 5 22:34:18.328937 containerd[1959]: time="2024-08-05T22:34:18.328902290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:34:18.329930 kubelet[3195]: I0805 22:34:18.329795 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:18.330895 containerd[1959]: time="2024-08-05T22:34:18.330760299Z" level=info msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\"" Aug 5 22:34:18.331325 containerd[1959]: 
time="2024-08-05T22:34:18.331299233Z" level=info msg="Ensure that sandbox 127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f in task-service has been cleanup successfully" Aug 5 22:34:18.398656 containerd[1959]: time="2024-08-05T22:34:18.398596343Z" level=error msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" failed" error="failed to destroy network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.399107 kubelet[3195]: E0805 22:34:18.398844 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:18.399107 kubelet[3195]: E0805 22:34:18.398904 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"} Aug 5 22:34:18.399107 kubelet[3195]: E0805 22:34:18.398979 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9e485b7a-0228-414a-bfbb-32f1bef8c0b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
Aug 5 22:34:18.399107 kubelet[3195]: E0805 22:34:18.399009 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9e485b7a-0228-414a-bfbb-32f1bef8c0b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rrt8r" podUID="9e485b7a-0228-414a-bfbb-32f1bef8c0b6" Aug 5 22:34:18.407066 containerd[1959]: time="2024-08-05T22:34:18.407017484Z" level=error msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" failed" error="failed to destroy network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:18.407362 kubelet[3195]: E0805 22:34:18.407251 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:18.407362 kubelet[3195]: E0805 22:34:18.407304 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f"} Aug 5 22:34:18.407362 kubelet[3195]: E0805 22:34:18.407351 3195 kuberuntime_manager.go:1075] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"59f1ecad-9abf-4018-81f1-db05fd12b487\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:34:18.407561 kubelet[3195]: E0805 22:34:18.407381 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"59f1ecad-9abf-4018-81f1-db05fd12b487\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hhrbp" podUID="59f1ecad-9abf-4018-81f1-db05fd12b487" Aug 5 22:34:18.536524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4-shm.mount: Deactivated successfully. Aug 5 22:34:18.539054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf-shm.mount: Deactivated successfully. 
Aug 5 22:34:19.332955 kubelet[3195]: I0805 22:34:19.332918 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:19.334349 containerd[1959]: time="2024-08-05T22:34:19.333787395Z" level=info msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\"" Aug 5 22:34:19.334349 containerd[1959]: time="2024-08-05T22:34:19.334044536Z" level=info msg="Ensure that sandbox 8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38 in task-service has been cleanup successfully" Aug 5 22:34:19.336298 kubelet[3195]: I0805 22:34:19.335881 3195 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:19.339735 containerd[1959]: time="2024-08-05T22:34:19.339623972Z" level=info msg="StopPodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" Aug 5 22:34:19.340519 containerd[1959]: time="2024-08-05T22:34:19.340255859Z" level=info msg="Ensure that sandbox 6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4 in task-service has been cleanup successfully" Aug 5 22:34:19.406164 containerd[1959]: time="2024-08-05T22:34:19.406101819Z" level=error msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" failed" error="failed to destroy network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:19.406456 kubelet[3195]: E0805 22:34:19.406359 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:19.406456 kubelet[3195]: E0805 22:34:19.406413 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"} Aug 5 22:34:19.408065 kubelet[3195]: E0805 22:34:19.406456 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:34:19.408065 kubelet[3195]: E0805 22:34:19.406505 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dqhhw" podUID="e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7" Aug 5 22:34:19.416006 containerd[1959]: time="2024-08-05T22:34:19.415951898Z" level=error msg="StopPodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" failed" error="failed to destroy network for sandbox 
\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:34:19.416277 kubelet[3195]: E0805 22:34:19.416199 3195 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:19.416277 kubelet[3195]: E0805 22:34:19.416256 3195 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4"} Aug 5 22:34:19.416431 kubelet[3195]: E0805 22:34:19.416302 3195 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"df8aa33e-ef06-4b03-a2b2-eefc0f040acf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:34:19.416431 kubelet[3195]: E0805 22:34:19.416332 3195 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"df8aa33e-ef06-4b03-a2b2-eefc0f040acf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" podUID="df8aa33e-ef06-4b03-a2b2-eefc0f040acf" Aug 5 22:34:26.238961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3470925746.mount: Deactivated successfully. Aug 5 22:34:26.290003 containerd[1959]: time="2024-08-05T22:34:26.289909172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:26.303970 containerd[1959]: time="2024-08-05T22:34:26.300966123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:34:26.319503 containerd[1959]: time="2024-08-05T22:34:26.318507828Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:26.322552 containerd[1959]: time="2024-08-05T22:34:26.322507477Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:34:26.323246 containerd[1959]: time="2024-08-05T22:34:26.323207639Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.994092099s" Aug 5 22:34:26.323341 containerd[1959]: time="2024-08-05T22:34:26.323248833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" 
Aug 5 22:34:26.410277 containerd[1959]: time="2024-08-05T22:34:26.410235579Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:34:26.458985 containerd[1959]: time="2024-08-05T22:34:26.458941016Z" level=info msg="CreateContainer within sandbox \"7582309c8dca34a62b518f0ef56576b0c101a9f91733ca1098083aa885684a8f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf\"" Aug 5 22:34:26.480399 containerd[1959]: time="2024-08-05T22:34:26.480326914Z" level=info msg="StartContainer for \"0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf\"" Aug 5 22:34:26.690750 systemd[1]: Started cri-containerd-0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf.scope - libcontainer container 0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf. Aug 5 22:34:26.790253 containerd[1959]: time="2024-08-05T22:34:26.789697296Z" level=info msg="StartContainer for \"0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf\" returns successfully" Aug 5 22:34:26.951176 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:34:26.952312 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Aug 5 22:34:28.888074 systemd[1]: run-containerd-runc-k8s.io-0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf-runc.XYkl3W.mount: Deactivated successfully. 
Aug 5 22:34:30.087052 containerd[1959]: time="2024-08-05T22:34:30.086611273Z" level=info msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\"" Aug 5 22:34:30.087052 containerd[1959]: time="2024-08-05T22:34:30.086970622Z" level=info msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\"" Aug 5 22:34:30.209230 kubelet[3195]: I0805 22:34:30.207649 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lmdrt" podStartSLOduration=6.127837087 podStartE2EDuration="22.186129549s" podCreationTimestamp="2024-08-05 22:34:08 +0000 UTC" firstStartedPulling="2024-08-05 22:34:10.271111046 +0000 UTC m=+28.353248425" lastFinishedPulling="2024-08-05 22:34:26.329403511 +0000 UTC m=+44.411540887" observedRunningTime="2024-08-05 22:34:27.442762759 +0000 UTC m=+45.524900150" watchObservedRunningTime="2024-08-05 22:34:30.186129549 +0000 UTC m=+48.268266941" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.189 [INFO][4716] k8s.go 608: Cleaning up netns ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.192 [INFO][4716] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" iface="eth0" netns="/var/run/netns/cni-7074e06b-d1c4-33e8-7d46-0645c52c8c5a" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.193 [INFO][4716] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" iface="eth0" netns="/var/run/netns/cni-7074e06b-d1c4-33e8-7d46-0645c52c8c5a" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.193 [INFO][4716] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" iface="eth0" netns="/var/run/netns/cni-7074e06b-d1c4-33e8-7d46-0645c52c8c5a" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.193 [INFO][4716] k8s.go 615: Releasing IP address(es) ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.193 [INFO][4716] utils.go 188: Calico CNI releasing IP address ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.686 [INFO][4728] ipam_plugin.go 411: Releasing address using handleID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.687 [INFO][4728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.688 [INFO][4728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.716 [WARNING][4728] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.718 [INFO][4728] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.721 [INFO][4728] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:30.727619 containerd[1959]: 2024-08-05 22:34:30.724 [INFO][4716] k8s.go 621: Teardown processing complete. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Aug 5 22:34:30.731138 containerd[1959]: time="2024-08-05T22:34:30.727785879Z" level=info msg="TearDown network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" successfully" Aug 5 22:34:30.731138 containerd[1959]: time="2024-08-05T22:34:30.727818740Z" level=info msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" returns successfully" Aug 5 22:34:30.733851 containerd[1959]: time="2024-08-05T22:34:30.733277701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqhhw,Uid:e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7,Namespace:kube-system,Attempt:1,}" Aug 5 22:34:30.735246 systemd[1]: run-netns-cni\x2d7074e06b\x2dd1c4\x2d33e8\x2d7d46\x2d0645c52c8c5a.mount: Deactivated successfully. 
Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.190 [INFO][4715] k8s.go 608: Cleaning up netns ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.190 [INFO][4715] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" iface="eth0" netns="/var/run/netns/cni-35d9cbd5-8751-6a2d-c221-a0c73ea12483" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.191 [INFO][4715] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" iface="eth0" netns="/var/run/netns/cni-35d9cbd5-8751-6a2d-c221-a0c73ea12483" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.191 [INFO][4715] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" iface="eth0" netns="/var/run/netns/cni-35d9cbd5-8751-6a2d-c221-a0c73ea12483" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.191 [INFO][4715] k8s.go 615: Releasing IP address(es) ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.191 [INFO][4715] utils.go 188: Calico CNI releasing IP address ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.697 [INFO][4727] ipam_plugin.go 411: Releasing address using handleID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.698 [INFO][4727] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.721 [INFO][4727] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.738 [WARNING][4727] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.738 [INFO][4727] ipam_plugin.go 439: Releasing address using workloadID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.742 [INFO][4727] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:30.753300 containerd[1959]: 2024-08-05 22:34:30.749 [INFO][4715] k8s.go 621: Teardown processing complete. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Aug 5 22:34:30.754207 containerd[1959]: time="2024-08-05T22:34:30.754172327Z" level=info msg="TearDown network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" successfully" Aug 5 22:34:30.754346 containerd[1959]: time="2024-08-05T22:34:30.754326420Z" level=info msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" returns successfully" Aug 5 22:34:30.757928 containerd[1959]: time="2024-08-05T22:34:30.757879311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrt8r,Uid:9e485b7a-0228-414a-bfbb-32f1bef8c0b6,Namespace:kube-system,Attempt:1,}" Aug 5 22:34:30.761273 systemd[1]: run-netns-cni\x2d35d9cbd5\x2d8751\x2d6a2d\x2dc221\x2da0c73ea12483.mount: Deactivated successfully. 
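The `run-netns-cni\x2d...` mount unit names above are systemd's escaping of the netns paths under `/var/run/netns/`: `/` maps to `-`, and a literal `-` is hex-escaped as `\x2d`. A rough sketch approximating `systemd-escape --path` (a simplification; the real tool handles a few more edge cases):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate `systemd-escape --path`: strip leading/trailing slashes,
    map '/' to '-', keep ASCII alphanumerics plus '_' and '.', and
    hex-escape everything else as \\xNN (so a literal '-' becomes '\\x2d')."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif (ch.isascii() and ch.isalnum()) or ch in "_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out)

# The netns path from the log maps to the mount unit name seen above
# (plus the ".mount" suffix systemd appends).
print(systemd_escape_path("/run/netns/cni-35d9cbd5-8751-6a2d-c221-a0c73ea12483"))
```

This is why the unit names in the `Deactivated successfully` lines look mangled: they are reversible encodings of ordinary filesystem paths, not corruption.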
Aug 5 22:34:31.108686 systemd-networkd[1807]: cali82833788d15: Link UP Aug 5 22:34:31.109098 systemd-networkd[1807]: cali82833788d15: Gained carrier Aug 5 22:34:31.113824 (udev-worker)[4803]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.858 [INFO][4763] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.878 [INFO][4763] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0 coredns-7db6d8ff4d- kube-system e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7 739 0 2024-08-05 22:33:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-20 coredns-7db6d8ff4d-dqhhw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali82833788d15 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.878 [INFO][4763] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.960 [INFO][4786] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" HandleID="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" 
Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.983 [INFO][4786] ipam_plugin.go 264: Auto assigning IP ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" HandleID="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cca0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-20", "pod":"coredns-7db6d8ff4d-dqhhw", "timestamp":"2024-08-05 22:34:30.960361556 +0000 UTC"}, Hostname:"ip-172-31-23-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.983 [INFO][4786] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.983 [INFO][4786] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.983 [INFO][4786] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-20' Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:30.986 [INFO][4786] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.016 [INFO][4786] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.032 [INFO][4786] ipam.go 489: Trying affinity for 192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.039 [INFO][4786] ipam.go 155: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.046 [INFO][4786] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.046 [INFO][4786] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.050 [INFO][4786] ipam.go 1685: Creating new handle: k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548 Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.055 [INFO][4786] ipam.go 1203: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.066 [INFO][4786] ipam.go 1216: Successfully claimed IPs: [192.168.109.193/26] block=192.168.109.192/26 
handle="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.066 [INFO][4786] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.193/26] handle="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" host="ip-172-31-23-20" Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.067 [INFO][4786] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:31.152199 containerd[1959]: 2024-08-05 22:34:31.067 [INFO][4786] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.109.193/26] IPv6=[] ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" HandleID="k8s-pod-network.4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.072 [INFO][4763] k8s.go 386: Populated endpoint ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"", Pod:"coredns-7db6d8ff4d-dqhhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82833788d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.072 [INFO][4763] k8s.go 387: Calico CNI using IPs: [192.168.109.193/32] ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.072 [INFO][4763] dataplane_linux.go 68: Setting the host side veth name to cali82833788d15 ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.109 [INFO][4763] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" 
WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.115 [INFO][4763] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7", ResourceVersion:"739", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548", Pod:"coredns-7db6d8ff4d-dqhhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82833788d15", MAC:"be:da:ce:01:e0:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:31.157399 containerd[1959]: 2024-08-05 22:34:31.143 [INFO][4763] k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dqhhw" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0" Aug 5 22:34:31.172120 (udev-worker)[4801]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:34:31.176134 systemd-networkd[1807]: cali27a0d65bfcb: Link UP Aug 5 22:34:31.178001 systemd-networkd[1807]: cali27a0d65bfcb: Gained carrier Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:30.876 [INFO][4764] utils.go 100: File /var/lib/calico/mtu does not exist Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:30.911 [INFO][4764] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0 coredns-7db6d8ff4d- kube-system 9e485b7a-0228-414a-bfbb-32f1bef8c0b6 740 0 2024-08-05 22:33:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-20 coredns-7db6d8ff4d-rrt8r eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali27a0d65bfcb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:30.911 [INFO][4764] 
k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:30.987 [INFO][4790] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" HandleID="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.026 [INFO][4790] ipam_plugin.go 264: Auto assigning IP ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" HandleID="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-20", "pod":"coredns-7db6d8ff4d-rrt8r", "timestamp":"2024-08-05 22:34:30.987025479 +0000 UTC"}, Hostname:"ip-172-31-23-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.026 [INFO][4790] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.067 [INFO][4790] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.067 [INFO][4790] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-20' Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.071 [INFO][4790] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.087 [INFO][4790] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.100 [INFO][4790] ipam.go 489: Trying affinity for 192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.109 [INFO][4790] ipam.go 155: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.117 [INFO][4790] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.118 [INFO][4790] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.120 [INFO][4790] ipam.go 1685: Creating new handle: k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.137 [INFO][4790] ipam.go 1203: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.162 [INFO][4790] ipam.go 1216: Successfully claimed IPs: [192.168.109.194/26] block=192.168.109.192/26 
handle="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.163 [INFO][4790] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.194/26] handle="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" host="ip-172-31-23-20" Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.163 [INFO][4790] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:31.211764 containerd[1959]: 2024-08-05 22:34:31.163 [INFO][4790] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.109.194/26] IPv6=[] ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" HandleID="k8s-pod-network.355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.169 [INFO][4764] k8s.go 386: Populated endpoint ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9e485b7a-0228-414a-bfbb-32f1bef8c0b6", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"", Pod:"coredns-7db6d8ff4d-rrt8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27a0d65bfcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.169 [INFO][4764] k8s.go 387: Calico CNI using IPs: [192.168.109.194/32] ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.169 [INFO][4764] dataplane_linux.go 68: Setting the host side veth name to cali27a0d65bfcb ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.182 [INFO][4764] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" 
WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.183 [INFO][4764] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9e485b7a-0228-414a-bfbb-32f1bef8c0b6", ResourceVersion:"740", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c", Pod:"coredns-7db6d8ff4d-rrt8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27a0d65bfcb", MAC:"d6:22:9c:1e:d5:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:31.213274 containerd[1959]: 2024-08-05 22:34:31.204 [INFO][4764] k8s.go 500: Wrote updated endpoint to datastore ContainerID="355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rrt8r" WorkloadEndpoint="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0" Aug 5 22:34:31.233995 containerd[1959]: time="2024-08-05T22:34:31.233693693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:34:31.233995 containerd[1959]: time="2024-08-05T22:34:31.233763580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:31.233995 containerd[1959]: time="2024-08-05T22:34:31.233788499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:34:31.233995 containerd[1959]: time="2024-08-05T22:34:31.233804097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:31.288966 systemd[1]: Started cri-containerd-4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548.scope - libcontainer container 4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548. Aug 5 22:34:31.313265 containerd[1959]: time="2024-08-05T22:34:31.312903792Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:34:31.313265 containerd[1959]: time="2024-08-05T22:34:31.312984158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:31.313265 containerd[1959]: time="2024-08-05T22:34:31.313006999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:34:31.313265 containerd[1959]: time="2024-08-05T22:34:31.313022226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:34:31.352871 systemd[1]: Started cri-containerd-355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c.scope - libcontainer container 355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c. Aug 5 22:34:31.462820 containerd[1959]: time="2024-08-05T22:34:31.462766892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dqhhw,Uid:e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7,Namespace:kube-system,Attempt:1,} returns sandbox id \"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548\"" Aug 5 22:34:31.472433 containerd[1959]: time="2024-08-05T22:34:31.472389935Z" level=info msg="CreateContainer within sandbox \"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:34:31.529820 containerd[1959]: time="2024-08-05T22:34:31.524667320Z" level=info msg="CreateContainer within sandbox \"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f6703694eca451135de7211ebafdda9b5957d661b41d3c8f84040626b3f8a77\"" Aug 5 22:34:31.551869 containerd[1959]: time="2024-08-05T22:34:31.551808990Z" level=info msg="StartContainer for \"9f6703694eca451135de7211ebafdda9b5957d661b41d3c8f84040626b3f8a77\"" Aug 5 
22:34:31.659550 containerd[1959]: time="2024-08-05T22:34:31.658547060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rrt8r,Uid:9e485b7a-0228-414a-bfbb-32f1bef8c0b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c\"" Aug 5 22:34:31.681446 containerd[1959]: time="2024-08-05T22:34:31.680322301Z" level=info msg="CreateContainer within sandbox \"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:34:31.783660 systemd[1]: Started cri-containerd-9f6703694eca451135de7211ebafdda9b5957d661b41d3c8f84040626b3f8a77.scope - libcontainer container 9f6703694eca451135de7211ebafdda9b5957d661b41d3c8f84040626b3f8a77. Aug 5 22:34:31.796155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2440065709.mount: Deactivated successfully. Aug 5 22:34:31.802579 containerd[1959]: time="2024-08-05T22:34:31.802532215Z" level=info msg="CreateContainer within sandbox \"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ab17bfec74b70d0b35d5734d93bd5f6909e053e0f305a0701374540a4357e8b\"" Aug 5 22:34:31.807535 containerd[1959]: time="2024-08-05T22:34:31.807496248Z" level=info msg="StartContainer for \"9ab17bfec74b70d0b35d5734d93bd5f6909e053e0f305a0701374540a4357e8b\"" Aug 5 22:34:31.882420 systemd[1]: Started cri-containerd-9ab17bfec74b70d0b35d5734d93bd5f6909e053e0f305a0701374540a4357e8b.scope - libcontainer container 9ab17bfec74b70d0b35d5734d93bd5f6909e053e0f305a0701374540a4357e8b. 
Aug 5 22:34:31.896342 containerd[1959]: time="2024-08-05T22:34:31.895369276Z" level=info msg="StartContainer for \"9f6703694eca451135de7211ebafdda9b5957d661b41d3c8f84040626b3f8a77\" returns successfully" Aug 5 22:34:31.952801 containerd[1959]: time="2024-08-05T22:34:31.952627068Z" level=info msg="StartContainer for \"9ab17bfec74b70d0b35d5734d93bd5f6909e053e0f305a0701374540a4357e8b\" returns successfully" Aug 5 22:34:31.995430 kubelet[3195]: I0805 22:34:31.995377 3195 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:34:32.203655 systemd-networkd[1807]: cali82833788d15: Gained IPv6LL Aug 5 22:34:32.561504 kubelet[3195]: I0805 22:34:32.558726 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rrt8r" podStartSLOduration=36.558704384 podStartE2EDuration="36.558704384s" podCreationTimestamp="2024-08-05 22:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:34:32.53522251 +0000 UTC m=+50.617359902" watchObservedRunningTime="2024-08-05 22:34:32.558704384 +0000 UTC m=+50.640841776" Aug 5 22:34:32.596295 kubelet[3195]: I0805 22:34:32.595645 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dqhhw" podStartSLOduration=36.595623894 podStartE2EDuration="36.595623894s" podCreationTimestamp="2024-08-05 22:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:34:32.594821161 +0000 UTC m=+50.676958554" watchObservedRunningTime="2024-08-05 22:34:32.595623894 +0000 UTC m=+50.677761286" Aug 5 22:34:32.652693 systemd-networkd[1807]: cali27a0d65bfcb: Gained IPv6LL Aug 5 22:34:33.087415 containerd[1959]: time="2024-08-05T22:34:33.087369700Z" level=info msg="StopPodSandbox for 
\"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.239 [INFO][5054] k8s.go 608: Cleaning up netns ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.239 [INFO][5054] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" iface="eth0" netns="/var/run/netns/cni-4bcfc006-7f2b-5eb6-846f-e6f74b251aa1" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.240 [INFO][5054] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" iface="eth0" netns="/var/run/netns/cni-4bcfc006-7f2b-5eb6-846f-e6f74b251aa1" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.242 [INFO][5054] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" iface="eth0" netns="/var/run/netns/cni-4bcfc006-7f2b-5eb6-846f-e6f74b251aa1" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.243 [INFO][5054] k8s.go 615: Releasing IP address(es) ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.243 [INFO][5054] utils.go 188: Calico CNI releasing IP address ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.285 [INFO][5078] ipam_plugin.go 411: Releasing address using handleID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.285 
[INFO][5078] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.285 [INFO][5078] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.297 [WARNING][5078] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.297 [INFO][5078] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.300 [INFO][5078] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:33.318571 containerd[1959]: 2024-08-05 22:34:33.306 [INFO][5054] k8s.go 621: Teardown processing complete. 
ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:33.320157 containerd[1959]: time="2024-08-05T22:34:33.319363702Z" level=info msg="TearDown network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" successfully" Aug 5 22:34:33.320157 containerd[1959]: time="2024-08-05T22:34:33.319405910Z" level=info msg="StopPodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" returns successfully" Aug 5 22:34:33.321122 containerd[1959]: time="2024-08-05T22:34:33.321087000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd57f8df-xq8fb,Uid:df8aa33e-ef06-4b03-a2b2-eefc0f040acf,Namespace:calico-system,Attempt:1,}" Aug 5 22:34:33.322244 systemd[1]: run-netns-cni\x2d4bcfc006\x2d7f2b\x2d5eb6\x2d846f\x2de6f74b251aa1.mount: Deactivated successfully. Aug 5 22:34:33.578424 systemd-networkd[1807]: vxlan.calico: Link UP Aug 5 22:34:33.578433 systemd-networkd[1807]: vxlan.calico: Gained carrier Aug 5 22:34:33.647158 systemd-networkd[1807]: calie2be352954a: Link UP Aug 5 22:34:33.647486 systemd-networkd[1807]: calie2be352954a: Gained carrier Aug 5 22:34:33.663123 (udev-worker)[4816]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.447 [INFO][5106] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0 calico-kube-controllers-69cd57f8df- calico-system df8aa33e-ef06-4b03-a2b2-eefc0f040acf 780 0 2024-08-05 22:34:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69cd57f8df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-20 calico-kube-controllers-69cd57f8df-xq8fb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie2be352954a [] []}} ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.447 [INFO][5106] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.513 [INFO][5114] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" HandleID="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.534 [INFO][5114] ipam_plugin.go 264: Auto assigning IP ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" HandleID="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000316ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-20", "pod":"calico-kube-controllers-69cd57f8df-xq8fb", "timestamp":"2024-08-05 22:34:33.513529009 +0000 UTC"}, Hostname:"ip-172-31-23-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.536 [INFO][5114] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.537 [INFO][5114] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.537 [INFO][5114] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-20'
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.540 [INFO][5114] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.554 [INFO][5114] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.578 [INFO][5114] ipam.go 489: Trying affinity for 192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.586 [INFO][5114] ipam.go 155: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.599 [INFO][5114] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.599 [INFO][5114] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.610 [INFO][5114] ipam.go 1685: Creating new handle: k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.621 [INFO][5114] ipam.go 1203: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.632 [INFO][5114] ipam.go 1216: Successfully claimed IPs: [192.168.109.195/26] block=192.168.109.192/26 handle="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.632 [INFO][5114] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.195/26] handle="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" host="ip-172-31-23-20"
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.633 [INFO][5114] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:33.684184 containerd[1959]: 2024-08-05 22:34:33.633 [INFO][5114] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.109.195/26] IPv6=[] ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" HandleID="k8s-pod-network.70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.639 [INFO][5106] k8s.go 386: Populated endpoint ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0", GenerateName:"calico-kube-controllers-69cd57f8df-", Namespace:"calico-system", SelfLink:"", UID:"df8aa33e-ef06-4b03-a2b2-eefc0f040acf", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd57f8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"", Pod:"calico-kube-controllers-69cd57f8df-xq8fb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2be352954a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.639 [INFO][5106] k8s.go 387: Calico CNI using IPs: [192.168.109.195/32] ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.639 [INFO][5106] dataplane_linux.go 68: Setting the host side veth name to calie2be352954a ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.642 [INFO][5106] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.643 [INFO][5106] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0", GenerateName:"calico-kube-controllers-69cd57f8df-", Namespace:"calico-system", SelfLink:"", UID:"df8aa33e-ef06-4b03-a2b2-eefc0f040acf", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd57f8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3", Pod:"calico-kube-controllers-69cd57f8df-xq8fb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2be352954a", MAC:"82:b4:e6:e7:c5:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:33.688837 containerd[1959]: 2024-08-05 22:34:33.674 [INFO][5106] k8s.go 500: Wrote updated endpoint to datastore ContainerID="70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3" Namespace="calico-system" Pod="calico-kube-controllers-69cd57f8df-xq8fb" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0"
Aug 5 22:34:33.758584 containerd[1959]: time="2024-08-05T22:34:33.758320166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:34:33.758584 containerd[1959]: time="2024-08-05T22:34:33.758415785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:33.774632 containerd[1959]: time="2024-08-05T22:34:33.758566318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:34:33.774632 containerd[1959]: time="2024-08-05T22:34:33.774117053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:33.821188 systemd[1]: run-containerd-runc-k8s.io-70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3-runc.5SZxj4.mount: Deactivated successfully.
Aug 5 22:34:33.831059 systemd[1]: Started cri-containerd-70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3.scope - libcontainer container 70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3.
Aug 5 22:34:33.939930 containerd[1959]: time="2024-08-05T22:34:33.939879556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69cd57f8df-xq8fb,Uid:df8aa33e-ef06-4b03-a2b2-eefc0f040acf,Namespace:calico-system,Attempt:1,} returns sandbox id \"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3\""
Aug 5 22:34:33.945598 containerd[1959]: time="2024-08-05T22:34:33.945183268Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\""
Aug 5 22:34:34.086693 containerd[1959]: time="2024-08-05T22:34:34.085875256Z" level=info msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\""
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.154 [INFO][5255] k8s.go 608: Cleaning up netns ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.154 [INFO][5255] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" iface="eth0" netns="/var/run/netns/cni-138954e5-0ee3-596f-8992-b86f58f6a150"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.154 [INFO][5255] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" iface="eth0" netns="/var/run/netns/cni-138954e5-0ee3-596f-8992-b86f58f6a150"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.155 [INFO][5255] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" iface="eth0" netns="/var/run/netns/cni-138954e5-0ee3-596f-8992-b86f58f6a150"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.155 [INFO][5255] k8s.go 615: Releasing IP address(es) ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.156 [INFO][5255] utils.go 188: Calico CNI releasing IP address ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.198 [INFO][5261] ipam_plugin.go 411: Releasing address using handleID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.198 [INFO][5261] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.198 [INFO][5261] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.206 [WARNING][5261] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.206 [INFO][5261] ipam_plugin.go 439: Releasing address using workloadID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.208 [INFO][5261] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:34.213505 containerd[1959]: 2024-08-05 22:34:34.211 [INFO][5255] k8s.go 621: Teardown processing complete. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f"
Aug 5 22:34:34.221077 containerd[1959]: time="2024-08-05T22:34:34.214568306Z" level=info msg="TearDown network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" successfully"
Aug 5 22:34:34.221077 containerd[1959]: time="2024-08-05T22:34:34.214722958Z" level=info msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" returns successfully"
Aug 5 22:34:34.221077 containerd[1959]: time="2024-08-05T22:34:34.219687740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhrbp,Uid:59f1ecad-9abf-4018-81f1-db05fd12b487,Namespace:calico-system,Attempt:1,}"
Aug 5 22:34:34.218977 systemd[1]: run-netns-cni\x2d138954e5\x2d0ee3\x2d596f\x2d8992\x2db86f58f6a150.mount: Deactivated successfully.
Aug 5 22:34:34.413433 systemd-networkd[1807]: cali251f8150425: Link UP
Aug 5 22:34:34.414652 systemd-networkd[1807]: cali251f8150425: Gained carrier
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.288 [INFO][5268] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0 csi-node-driver- calico-system 59f1ecad-9abf-4018-81f1-db05fd12b487 787 0 2024-08-05 22:34:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-23-20 csi-node-driver-hhrbp eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali251f8150425 [] []}} ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.289 [INFO][5268] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.344 [INFO][5279] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" HandleID="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.359 [INFO][5279] ipam_plugin.go 264: Auto assigning IP ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" HandleID="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-20", "pod":"csi-node-driver-hhrbp", "timestamp":"2024-08-05 22:34:34.34407176 +0000 UTC"}, Hostname:"ip-172-31-23-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.359 [INFO][5279] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.359 [INFO][5279] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.359 [INFO][5279] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-20'
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.361 [INFO][5279] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.367 [INFO][5279] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.373 [INFO][5279] ipam.go 489: Trying affinity for 192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.375 [INFO][5279] ipam.go 155: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.377 [INFO][5279] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.378 [INFO][5279] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.381 [INFO][5279] ipam.go 1685: Creating new handle: k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.389 [INFO][5279] ipam.go 1203: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.400 [INFO][5279] ipam.go 1216: Successfully claimed IPs: [192.168.109.196/26] block=192.168.109.192/26 handle="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.401 [INFO][5279] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.196/26] handle="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" host="ip-172-31-23-20"
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.401 [INFO][5279] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:34.443658 containerd[1959]: 2024-08-05 22:34:34.401 [INFO][5279] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.109.196/26] IPv6=[] ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" HandleID="k8s-pod-network.e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.408 [INFO][5268] k8s.go 386: Populated endpoint ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59f1ecad-9abf-4018-81f1-db05fd12b487", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"", Pod:"csi-node-driver-hhrbp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali251f8150425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.409 [INFO][5268] k8s.go 387: Calico CNI using IPs: [192.168.109.196/32] ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.409 [INFO][5268] dataplane_linux.go 68: Setting the host side veth name to cali251f8150425 ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.414 [INFO][5268] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.415 [INFO][5268] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59f1ecad-9abf-4018-81f1-db05fd12b487", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae", Pod:"csi-node-driver-hhrbp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali251f8150425", MAC:"ce:9b:59:56:98:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:34.444968 containerd[1959]: 2024-08-05 22:34:34.438 [INFO][5268] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae" Namespace="calico-system" Pod="csi-node-driver-hhrbp" WorkloadEndpoint="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0"
Aug 5 22:34:34.498864 containerd[1959]: time="2024-08-05T22:34:34.493911189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:34:34.498864 containerd[1959]: time="2024-08-05T22:34:34.493990281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:34.498864 containerd[1959]: time="2024-08-05T22:34:34.494026669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:34:34.498864 containerd[1959]: time="2024-08-05T22:34:34.494047235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:34:34.523109 systemd[1]: Started cri-containerd-e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae.scope - libcontainer container e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae.
Aug 5 22:34:34.588001 containerd[1959]: time="2024-08-05T22:34:34.587942539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hhrbp,Uid:59f1ecad-9abf-4018-81f1-db05fd12b487,Namespace:calico-system,Attempt:1,} returns sandbox id \"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae\""
Aug 5 22:34:34.763777 systemd-networkd[1807]: vxlan.calico: Gained IPv6LL
Aug 5 22:34:35.147933 systemd-networkd[1807]: calie2be352954a: Gained IPv6LL
Aug 5 22:34:36.431436 systemd-networkd[1807]: cali251f8150425: Gained IPv6LL
Aug 5 22:34:36.439993 systemd[1]: Started sshd@7-172.31.23.20:22-147.75.109.163:56634.service - OpenSSH per-connection server daemon (147.75.109.163:56634).
Aug 5 22:34:36.698735 sshd[5345]: Accepted publickey for core from 147.75.109.163 port 56634 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:34:36.704622 sshd[5345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:34:36.718288 systemd-logind[1945]: New session 8 of user core.
Aug 5 22:34:36.722907 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 5 22:34:37.424247 sshd[5345]: pam_unix(sshd:session): session closed for user core
Aug 5 22:34:37.436755 systemd[1]: sshd@7-172.31.23.20:22-147.75.109.163:56634.service: Deactivated successfully.
Aug 5 22:34:37.442958 systemd[1]: session-8.scope: Deactivated successfully.
Aug 5 22:34:37.453703 systemd-logind[1945]: Session 8 logged out. Waiting for processes to exit.
Aug 5 22:34:37.461820 systemd-logind[1945]: Removed session 8.
Aug 5 22:34:38.543614 containerd[1959]: time="2024-08-05T22:34:38.538098500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:38.548080 containerd[1959]: time="2024-08-05T22:34:38.547221765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793"
Aug 5 22:34:38.550942 containerd[1959]: time="2024-08-05T22:34:38.550734514Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:38.561521 containerd[1959]: time="2024-08-05T22:34:38.560927546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:38.565581 containerd[1959]: time="2024-08-05T22:34:38.565511668Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 4.617944017s"
Aug 5 22:34:38.565858 containerd[1959]: time="2024-08-05T22:34:38.565833722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\""
Aug 5 22:34:38.572096 containerd[1959]: time="2024-08-05T22:34:38.571821880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\""
Aug 5 22:34:38.591827 containerd[1959]: time="2024-08-05T22:34:38.591786960Z" level=info msg="CreateContainer within sandbox \"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Aug 5 22:34:38.673192 containerd[1959]: time="2024-08-05T22:34:38.672351906Z" level=info msg="CreateContainer within sandbox \"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96\""
Aug 5 22:34:38.672790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2499459241.mount: Deactivated successfully.
Aug 5 22:34:38.677621 containerd[1959]: time="2024-08-05T22:34:38.677279758Z" level=info msg="StartContainer for \"3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96\""
Aug 5 22:34:38.687058 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.109.192:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.109.192:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 8 cali82833788d15 [fe80::ecee:eeff:feee:eeee%4]:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 9 cali27a0d65bfcb [fe80::ecee:eeff:feee:eeee%5]:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 10 vxlan.calico [fe80::6434:4cff:fe25:5a0c%6]:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 11 calie2be352954a [fe80::ecee:eeff:feee:eeee%7]:123
Aug 5 22:34:38.689114 ntpd[1940]: 5 Aug 22:34:38 ntpd[1940]: Listen normally on 12 cali251f8150425 [fe80::ecee:eeff:feee:eeee%10]:123
Aug 5 22:34:38.687671 ntpd[1940]: Listen normally on 8 cali82833788d15 [fe80::ecee:eeff:feee:eeee%4]:123
Aug 5 22:34:38.688013 ntpd[1940]: Listen normally on 9 cali27a0d65bfcb [fe80::ecee:eeff:feee:eeee%5]:123
Aug 5 22:34:38.688170 ntpd[1940]: Listen normally on 10 vxlan.calico [fe80::6434:4cff:fe25:5a0c%6]:123
Aug 5 22:34:38.688279 ntpd[1940]: Listen normally on 11 calie2be352954a [fe80::ecee:eeff:feee:eeee%7]:123
Aug 5 22:34:38.688327 ntpd[1940]: Listen normally on 12 cali251f8150425 [fe80::ecee:eeff:feee:eeee%10]:123
Aug 5 22:34:38.731038 systemd[1]: Started cri-containerd-3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96.scope - libcontainer container 3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96.
Aug 5 22:34:38.884912 containerd[1959]: time="2024-08-05T22:34:38.884799296Z" level=info msg="StartContainer for \"3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96\" returns successfully"
Aug 5 22:34:40.053351 kubelet[3195]: I0805 22:34:40.053248 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69cd57f8df-xq8fb" podStartSLOduration=31.42948291 podStartE2EDuration="36.053135931s" podCreationTimestamp="2024-08-05 22:34:04 +0000 UTC" firstStartedPulling="2024-08-05 22:34:33.944449047 +0000 UTC m=+52.026586418" lastFinishedPulling="2024-08-05 22:34:38.568102049 +0000 UTC m=+56.650239439" observedRunningTime="2024-08-05 22:34:39.717631955 +0000 UTC m=+57.799769347" watchObservedRunningTime="2024-08-05 22:34:40.053135931 +0000 UTC m=+58.135273324"
Aug 5 22:34:40.263672 containerd[1959]: time="2024-08-05T22:34:40.263082632Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:40.266241 containerd[1959]: time="2024-08-05T22:34:40.266155765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062"
Aug 5 22:34:40.267701 containerd[1959]: time="2024-08-05T22:34:40.267661729Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:40.271974 containerd[1959]: time="2024-08-05T22:34:40.271920289Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:40.274538 containerd[1959]: time="2024-08-05T22:34:40.274136247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.70226827s"
Aug 5 22:34:40.274538 containerd[1959]: time="2024-08-05T22:34:40.274324031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\""
Aug 5 22:34:40.279449 containerd[1959]: time="2024-08-05T22:34:40.279407618Z" level=info msg="CreateContainer within sandbox \"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Aug 5 22:34:40.315259 containerd[1959]: time="2024-08-05T22:34:40.314883275Z" level=info msg="CreateContainer within sandbox \"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"087f87d957a97b5ad808d6cf723f897d9bff6e3a4fefa5138933f4ca1d0228d7\""
Aug 5 22:34:40.318245 containerd[1959]: time="2024-08-05T22:34:40.316963645Z" level=info msg="StartContainer for \"087f87d957a97b5ad808d6cf723f897d9bff6e3a4fefa5138933f4ca1d0228d7\""
Aug 5 22:34:40.391391 systemd[1]: Started cri-containerd-087f87d957a97b5ad808d6cf723f897d9bff6e3a4fefa5138933f4ca1d0228d7.scope - libcontainer container 087f87d957a97b5ad808d6cf723f897d9bff6e3a4fefa5138933f4ca1d0228d7.
Aug 5 22:34:40.477763 containerd[1959]: time="2024-08-05T22:34:40.477698787Z" level=info msg="StartContainer for \"087f87d957a97b5ad808d6cf723f897d9bff6e3a4fefa5138933f4ca1d0228d7\" returns successfully"
Aug 5 22:34:40.480717 containerd[1959]: time="2024-08-05T22:34:40.480682232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\""
Aug 5 22:34:42.143692 containerd[1959]: time="2024-08-05T22:34:42.143643117Z" level=info msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\""
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.296 [WARNING][5484] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9e485b7a-0228-414a-bfbb-32f1bef8c0b6", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c", Pod:"coredns-7db6d8ff4d-rrt8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27a0d65bfcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.297 [INFO][5484] k8s.go 608: Cleaning up netns ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.297 [INFO][5484] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" iface="eth0" netns=""
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.297 [INFO][5484] k8s.go 615: Releasing IP address(es) ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.297 [INFO][5484] utils.go 188: Calico CNI releasing IP address ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.384 [INFO][5490] ipam_plugin.go 411: Releasing address using handleID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.384 [INFO][5490] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.384 [INFO][5490] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.402 [WARNING][5490] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.402 [INFO][5490] ipam_plugin.go 439: Releasing address using workloadID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.406 [INFO][5490] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:42.428103 containerd[1959]: 2024-08-05 22:34:42.419 [INFO][5484] k8s.go 621: Teardown processing complete. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:42.430993 containerd[1959]: time="2024-08-05T22:34:42.427995911Z" level=info msg="TearDown network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" successfully"
Aug 5 22:34:42.430993 containerd[1959]: time="2024-08-05T22:34:42.429797931Z" level=info msg="StopPodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" returns successfully"
Aug 5 22:34:42.431423 containerd[1959]: time="2024-08-05T22:34:42.431208158Z" level=info msg="RemovePodSandbox for \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\""
Aug 5 22:34:42.431423 containerd[1959]: time="2024-08-05T22:34:42.431252110Z" level=info msg="Forcibly stopping sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\""
Aug 5 22:34:42.481084 systemd[1]: Started sshd@8-172.31.23.20:22-147.75.109.163:56638.service - OpenSSH per-connection server daemon (147.75.109.163:56638).
Aug 5 22:34:43.022859 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 56638 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:34:43.028842 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:34:43.048202 systemd-logind[1945]: New session 9 of user core.
Aug 5 22:34:43.056303 systemd[1]: Started session-9.scope - Session 9 of User core.
Aug 5 22:34:43.217512 containerd[1959]: time="2024-08-05T22:34:43.217367605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:42.946 [WARNING][5511] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9e485b7a-0228-414a-bfbb-32f1bef8c0b6", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"355ba2e5b05b9abf38f0cc1d86aef2af15715b39345d58037b9d709e52de407c", Pod:"coredns-7db6d8ff4d-rrt8r", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali27a0d65bfcb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:42.948 [INFO][5511] k8s.go 608: Cleaning up netns ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:42.948 [INFO][5511] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" iface="eth0" netns=""
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:42.948 [INFO][5511] k8s.go 615: Releasing IP address(es) ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:42.948 [INFO][5511] utils.go 188: Calico CNI releasing IP address ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.185 [INFO][5521] ipam_plugin.go 411: Releasing address using handleID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.186 [INFO][5521] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.186 [INFO][5521] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.203 [WARNING][5521] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.204 [INFO][5521] ipam_plugin.go 439: Releasing address using workloadID ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" HandleID="k8s-pod-network.452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--rrt8r-eth0"
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.208 [INFO][5521] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:43.220056 containerd[1959]: 2024-08-05 22:34:43.214 [INFO][5511] k8s.go 621: Teardown processing complete. ContainerID="452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf"
Aug 5 22:34:43.223034 containerd[1959]: time="2024-08-05T22:34:43.221343449Z" level=info msg="TearDown network for sandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" successfully"
Aug 5 22:34:43.223034 containerd[1959]: time="2024-08-05T22:34:43.222686730Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655"
Aug 5 22:34:43.233267 containerd[1959]: time="2024-08-05T22:34:43.233219078Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:43.257044 containerd[1959]: time="2024-08-05T22:34:43.256303196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:34:43.264409 containerd[1959]: time="2024-08-05T22:34:43.261398336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.780424144s"
Aug 5 22:34:43.265589 containerd[1959]: time="2024-08-05T22:34:43.264906168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\""
Aug 5 22:34:43.280778 containerd[1959]: time="2024-08-05T22:34:43.280318799Z" level=info msg="CreateContainer within sandbox \"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Aug 5 22:34:43.306296 containerd[1959]: time="2024-08-05T22:34:43.306191994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:34:43.306475 containerd[1959]: time="2024-08-05T22:34:43.306367085Z" level=info msg="RemovePodSandbox \"452193e792f20411cb8f5ced8d7f98af2724d63d6578f60e4e84f18f5c65bbbf\" returns successfully"
Aug 5 22:34:43.309658 containerd[1959]: time="2024-08-05T22:34:43.309030700Z" level=info msg="StopPodSandbox for \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\""
Aug 5 22:34:43.309658 containerd[1959]: time="2024-08-05T22:34:43.309138495Z" level=info msg="TearDown network for sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" successfully"
Aug 5 22:34:43.309658 containerd[1959]: time="2024-08-05T22:34:43.309152789Z" level=info msg="StopPodSandbox for \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" returns successfully"
Aug 5 22:34:43.309658 containerd[1959]: time="2024-08-05T22:34:43.309610005Z" level=info msg="RemovePodSandbox for \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\""
Aug 5 22:34:43.309658 containerd[1959]: time="2024-08-05T22:34:43.309639312Z" level=info msg="Forcibly stopping sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\""
Aug 5 22:34:43.310557 containerd[1959]: time="2024-08-05T22:34:43.309706516Z" level=info msg="TearDown network for sandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" successfully"
Aug 5 22:34:43.323230 containerd[1959]: time="2024-08-05T22:34:43.323179615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:34:43.325263 containerd[1959]: time="2024-08-05T22:34:43.324920729Z" level=info msg="RemovePodSandbox \"5762f3fb7313bfbca06424cd2bd3b219ae7fff41bc1537b6c76933b512b3e892\" returns successfully"
Aug 5 22:34:43.332517 containerd[1959]: time="2024-08-05T22:34:43.328497811Z" level=info msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\""
Aug 5 22:34:43.346601 containerd[1959]: time="2024-08-05T22:34:43.343985643Z" level=info msg="CreateContainer within sandbox \"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507\""
Aug 5 22:34:43.358478 containerd[1959]: time="2024-08-05T22:34:43.351878439Z" level=info msg="StartContainer for \"f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507\""
Aug 5 22:34:43.663228 systemd[1]: run-containerd-runc-k8s.io-f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507-runc.cezAOU.mount: Deactivated successfully.
Aug 5 22:34:43.682711 systemd[1]: Started cri-containerd-f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507.scope - libcontainer container f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507.
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.599 [WARNING][5550] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548", Pod:"coredns-7db6d8ff4d-dqhhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82833788d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.600 [INFO][5550] k8s.go 608: Cleaning up netns ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.600 [INFO][5550] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" iface="eth0" netns=""
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.600 [INFO][5550] k8s.go 615: Releasing IP address(es) ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.600 [INFO][5550] utils.go 188: Calico CNI releasing IP address ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.780 [INFO][5567] ipam_plugin.go 411: Releasing address using handleID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.780 [INFO][5567] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.780 [INFO][5567] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.836 [WARNING][5567] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.836 [INFO][5567] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.841 [INFO][5567] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:43.853690 containerd[1959]: 2024-08-05 22:34:43.847 [INFO][5550] k8s.go 621: Teardown processing complete. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:43.854588 containerd[1959]: time="2024-08-05T22:34:43.853752618Z" level=info msg="TearDown network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" successfully"
Aug 5 22:34:43.854588 containerd[1959]: time="2024-08-05T22:34:43.853784860Z" level=info msg="StopPodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" returns successfully"
Aug 5 22:34:43.856301 containerd[1959]: time="2024-08-05T22:34:43.854956601Z" level=info msg="RemovePodSandbox for \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\""
Aug 5 22:34:43.856301 containerd[1959]: time="2024-08-05T22:34:43.855001190Z" level=info msg="Forcibly stopping sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\""
Aug 5 22:34:44.027512 sshd[5506]: pam_unix(sshd:session): session closed for user core
Aug 5 22:34:44.044078 systemd[1]: sshd@8-172.31.23.20:22-147.75.109.163:56638.service: Deactivated successfully.
Aug 5 22:34:44.053273 systemd[1]: session-9.scope: Deactivated successfully.
Aug 5 22:34:44.057246 systemd-logind[1945]: Session 9 logged out. Waiting for processes to exit.
Aug 5 22:34:44.060029 systemd-logind[1945]: Removed session 9.
Aug 5 22:34:44.079399 containerd[1959]: time="2024-08-05T22:34:44.079179851Z" level=info msg="StartContainer for \"f9f83c254db15be00a6904e68bbe1e22ae09f6b1df0999085d54e4a95b7ad507\" returns successfully"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.005 [WARNING][5601] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e92fcae9-7ce9-48eb-b1fb-347bcc3d67f7", ResourceVersion:"771", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 33, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"4fe28727af8eefe4fa6e545b141ca36854ce2663cc30f284d8a5f0f69d4ae548", Pod:"coredns-7db6d8ff4d-dqhhw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali82833788d15", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.009 [INFO][5601] k8s.go 608: Cleaning up netns ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.009 [INFO][5601] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" iface="eth0" netns=""
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.009 [INFO][5601] k8s.go 615: Releasing IP address(es) ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.009 [INFO][5601] utils.go 188: Calico CNI releasing IP address ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.118 [INFO][5613] ipam_plugin.go 411: Releasing address using handleID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.118 [INFO][5613] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.118 [INFO][5613] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.128 [WARNING][5613] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.128 [INFO][5613] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" HandleID="k8s-pod-network.8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38" Workload="ip--172--31--23--20-k8s-coredns--7db6d8ff4d--dqhhw-eth0"
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.131 [INFO][5613] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:34:44.137092 containerd[1959]: 2024-08-05 22:34:44.134 [INFO][5601] k8s.go 621: Teardown processing complete. ContainerID="8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38"
Aug 5 22:34:44.138270 containerd[1959]: time="2024-08-05T22:34:44.138105114Z" level=info msg="TearDown network for sandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" successfully"
Aug 5 22:34:44.144028 containerd[1959]: time="2024-08-05T22:34:44.143975986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Aug 5 22:34:44.144239 containerd[1959]: time="2024-08-05T22:34:44.144072156Z" level=info msg="RemovePodSandbox \"8f1a42390ee9418d42b428895262fa06c1705c3a934e38dc007f0ce94626ae38\" returns successfully" Aug 5 22:34:44.144906 containerd[1959]: time="2024-08-05T22:34:44.144833745Z" level=info msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\"" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.225 [WARNING][5645] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59f1ecad-9abf-4018-81f1-db05fd12b487", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae", Pod:"csi-node-driver-hhrbp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.default"}, InterfaceName:"cali251f8150425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.225 [INFO][5645] k8s.go 608: Cleaning up netns ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.225 [INFO][5645] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" iface="eth0" netns="" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.225 [INFO][5645] k8s.go 615: Releasing IP address(es) ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.225 [INFO][5645] utils.go 188: Calico CNI releasing IP address ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.256 [INFO][5657] ipam_plugin.go 411: Releasing address using handleID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.256 [INFO][5657] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.256 [INFO][5657] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.264 [WARNING][5657] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.264 [INFO][5657] ipam_plugin.go 439: Releasing address using workloadID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.266 [INFO][5657] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:44.270080 containerd[1959]: 2024-08-05 22:34:44.268 [INFO][5645] k8s.go 621: Teardown processing complete. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.271327 containerd[1959]: time="2024-08-05T22:34:44.270125863Z" level=info msg="TearDown network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" successfully" Aug 5 22:34:44.271327 containerd[1959]: time="2024-08-05T22:34:44.270155189Z" level=info msg="StopPodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" returns successfully" Aug 5 22:34:44.271327 containerd[1959]: time="2024-08-05T22:34:44.270691215Z" level=info msg="RemovePodSandbox for \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\"" Aug 5 22:34:44.271327 containerd[1959]: time="2024-08-05T22:34:44.270728250Z" level=info msg="Forcibly stopping sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\"" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.330 [WARNING][5676] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"59f1ecad-9abf-4018-81f1-db05fd12b487", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"e3737a8ec91ae1fcb5bf236dc03bf9f0b3f2f61eb8b964744f2d010a076a54ae", Pod:"csi-node-driver-hhrbp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.109.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali251f8150425", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.331 [INFO][5676] k8s.go 608: Cleaning up netns ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.332 [INFO][5676] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" iface="eth0" netns="" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.332 [INFO][5676] k8s.go 615: Releasing IP address(es) ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.332 [INFO][5676] utils.go 188: Calico CNI releasing IP address ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.393 [INFO][5682] ipam_plugin.go 411: Releasing address using handleID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.394 [INFO][5682] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.394 [INFO][5682] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.404 [WARNING][5682] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.404 [INFO][5682] ipam_plugin.go 439: Releasing address using workloadID ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" HandleID="k8s-pod-network.127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Workload="ip--172--31--23--20-k8s-csi--node--driver--hhrbp-eth0" Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.406 [INFO][5682] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:44.415765 containerd[1959]: 2024-08-05 22:34:44.409 [INFO][5676] k8s.go 621: Teardown processing complete. ContainerID="127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f" Aug 5 22:34:44.415765 containerd[1959]: time="2024-08-05T22:34:44.414865501Z" level=info msg="TearDown network for sandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" successfully" Aug 5 22:34:44.420547 containerd[1959]: time="2024-08-05T22:34:44.420499542Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:34:44.421972 containerd[1959]: time="2024-08-05T22:34:44.420581710Z" level=info msg="RemovePodSandbox \"127b3cfcdfc4974ff2d6cf179946562b4fa43888ee5d7097af5d3abc03012b4f\" returns successfully" Aug 5 22:34:44.421972 containerd[1959]: time="2024-08-05T22:34:44.421254806Z" level=info msg="StopPodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" Aug 5 22:34:44.435050 kubelet[3195]: I0805 22:34:44.435003 3195 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:34:44.437104 kubelet[3195]: I0805 22:34:44.437003 3195 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.496 [WARNING][5700] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0", GenerateName:"calico-kube-controllers-69cd57f8df-", Namespace:"calico-system", SelfLink:"", UID:"df8aa33e-ef06-4b03-a2b2-eefc0f040acf", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd57f8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3", Pod:"calico-kube-controllers-69cd57f8df-xq8fb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2be352954a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.496 [INFO][5700] k8s.go 608: Cleaning up netns ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.496 [INFO][5700] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" iface="eth0" netns="" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.496 [INFO][5700] k8s.go 615: Releasing IP address(es) ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.496 [INFO][5700] utils.go 188: Calico CNI releasing IP address ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.548 [INFO][5706] ipam_plugin.go 411: Releasing address using handleID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.548 [INFO][5706] ipam_plugin.go 352: About to acquire 
host-wide IPAM lock. Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.548 [INFO][5706] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.555 [WARNING][5706] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.555 [INFO][5706] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.556 [INFO][5706] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:44.560672 containerd[1959]: 2024-08-05 22:34:44.558 [INFO][5700] k8s.go 621: Teardown processing complete. 
ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.562330 containerd[1959]: time="2024-08-05T22:34:44.560744949Z" level=info msg="TearDown network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" successfully" Aug 5 22:34:44.562330 containerd[1959]: time="2024-08-05T22:34:44.560776457Z" level=info msg="StopPodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" returns successfully" Aug 5 22:34:44.562330 containerd[1959]: time="2024-08-05T22:34:44.562086667Z" level=info msg="RemovePodSandbox for \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" Aug 5 22:34:44.562330 containerd[1959]: time="2024-08-05T22:34:44.562153373Z" level=info msg="Forcibly stopping sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\"" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.611 [WARNING][5724] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0", GenerateName:"calico-kube-controllers-69cd57f8df-", Namespace:"calico-system", SelfLink:"", UID:"df8aa33e-ef06-4b03-a2b2-eefc0f040acf", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 34, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69cd57f8df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"70584c3bf07e75f75fee56b7c2bb91d97e958abff52caefd50d22ef774a28ce3", Pod:"calico-kube-controllers-69cd57f8df-xq8fb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie2be352954a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.611 [INFO][5724] k8s.go 608: Cleaning up netns ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.611 [INFO][5724] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" iface="eth0" netns="" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.611 [INFO][5724] k8s.go 615: Releasing IP address(es) ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.611 [INFO][5724] utils.go 188: Calico CNI releasing IP address ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.637 [INFO][5730] ipam_plugin.go 411: Releasing address using handleID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.638 [INFO][5730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.638 [INFO][5730] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.645 [WARNING][5730] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.645 [INFO][5730] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" HandleID="k8s-pod-network.6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Workload="ip--172--31--23--20-k8s-calico--kube--controllers--69cd57f8df--xq8fb-eth0" Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.650 [INFO][5730] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:34:44.656067 containerd[1959]: 2024-08-05 22:34:44.653 [INFO][5724] k8s.go 621: Teardown processing complete. ContainerID="6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4" Aug 5 22:34:44.656826 containerd[1959]: time="2024-08-05T22:34:44.656112472Z" level=info msg="TearDown network for sandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" successfully" Aug 5 22:34:44.669126 containerd[1959]: time="2024-08-05T22:34:44.668893837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 5 22:34:44.669126 containerd[1959]: time="2024-08-05T22:34:44.668968912Z" level=info msg="RemovePodSandbox \"6039118fab15ee685b9222f4368ef2de97d70d15503bc2d1a146985c7520a4b4\" returns successfully" Aug 5 22:34:44.850058 kubelet[3195]: I0805 22:34:44.849978 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-hhrbp" podStartSLOduration=32.172902164 podStartE2EDuration="40.84995619s" podCreationTimestamp="2024-08-05 22:34:04 +0000 UTC" firstStartedPulling="2024-08-05 22:34:34.590813147 +0000 UTC m=+52.672950526" lastFinishedPulling="2024-08-05 22:34:43.267867173 +0000 UTC m=+61.350004552" observedRunningTime="2024-08-05 22:34:44.84929661 +0000 UTC m=+62.931434002" watchObservedRunningTime="2024-08-05 22:34:44.84995619 +0000 UTC m=+62.932093581" Aug 5 22:34:47.993286 systemd[1]: run-containerd-runc-k8s.io-3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96-runc.evwuou.mount: Deactivated successfully. Aug 5 22:34:49.059233 systemd[1]: Started sshd@9-172.31.23.20:22-147.75.109.163:60962.service - OpenSSH per-connection server daemon (147.75.109.163:60962). Aug 5 22:34:49.247793 sshd[5759]: Accepted publickey for core from 147.75.109.163 port 60962 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:34:49.250287 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:34:49.260235 systemd-logind[1945]: New session 10 of user core. Aug 5 22:34:49.268727 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:34:49.501163 sshd[5759]: pam_unix(sshd:session): session closed for user core Aug 5 22:34:49.507017 systemd[1]: sshd@9-172.31.23.20:22-147.75.109.163:60962.service: Deactivated successfully. Aug 5 22:34:49.510741 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:34:49.513006 systemd-logind[1945]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:34:49.516711 systemd-logind[1945]: Removed session 10. 
Aug 5 22:34:49.535866 systemd[1]: Started sshd@10-172.31.23.20:22-147.75.109.163:60966.service - OpenSSH per-connection server daemon (147.75.109.163:60966). Aug 5 22:34:49.710553 sshd[5776]: Accepted publickey for core from 147.75.109.163 port 60966 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:34:49.712268 sshd[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:34:49.720762 systemd-logind[1945]: New session 11 of user core. Aug 5 22:34:49.727979 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:34:50.084068 sshd[5776]: pam_unix(sshd:session): session closed for user core Aug 5 22:34:50.096098 systemd[1]: sshd@10-172.31.23.20:22-147.75.109.163:60966.service: Deactivated successfully. Aug 5 22:34:50.106599 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:34:50.110456 systemd-logind[1945]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:34:50.138888 systemd[1]: Started sshd@11-172.31.23.20:22-147.75.109.163:60972.service - OpenSSH per-connection server daemon (147.75.109.163:60972). Aug 5 22:34:50.147040 systemd-logind[1945]: Removed session 11. Aug 5 22:34:50.347529 sshd[5787]: Accepted publickey for core from 147.75.109.163 port 60972 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:34:50.349044 sshd[5787]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:34:50.363456 systemd-logind[1945]: New session 12 of user core. Aug 5 22:34:50.370759 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:34:50.618139 sshd[5787]: pam_unix(sshd:session): session closed for user core Aug 5 22:34:50.625367 systemd[1]: sshd@11-172.31.23.20:22-147.75.109.163:60972.service: Deactivated successfully. Aug 5 22:34:50.628405 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:34:50.629572 systemd-logind[1945]: Session 12 logged out. Waiting for processes to exit. 
Aug 5 22:34:50.630880 systemd-logind[1945]: Removed session 12. Aug 5 22:34:55.658833 systemd[1]: Started sshd@12-172.31.23.20:22-147.75.109.163:55240.service - OpenSSH per-connection server daemon (147.75.109.163:55240). Aug 5 22:34:55.858781 sshd[5833]: Accepted publickey for core from 147.75.109.163 port 55240 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:34:55.862387 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:34:55.871196 systemd-logind[1945]: New session 13 of user core. Aug 5 22:34:55.876755 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:34:56.105448 sshd[5833]: pam_unix(sshd:session): session closed for user core Aug 5 22:34:56.116090 systemd[1]: sshd@12-172.31.23.20:22-147.75.109.163:55240.service: Deactivated successfully. Aug 5 22:34:56.118970 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:34:56.121819 systemd-logind[1945]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:34:56.123540 systemd-logind[1945]: Removed session 13. Aug 5 22:35:01.147050 systemd[1]: Started sshd@13-172.31.23.20:22-147.75.109.163:55254.service - OpenSSH per-connection server daemon (147.75.109.163:55254). Aug 5 22:35:01.384386 sshd[5874]: Accepted publickey for core from 147.75.109.163 port 55254 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:01.396182 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:01.423061 systemd-logind[1945]: New session 14 of user core. Aug 5 22:35:01.431801 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:35:01.729619 sshd[5874]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:01.742095 systemd[1]: sshd@13-172.31.23.20:22-147.75.109.163:55254.service: Deactivated successfully. Aug 5 22:35:01.750773 systemd[1]: session-14.scope: Deactivated successfully. 
Aug 5 22:35:01.759457 systemd-logind[1945]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:35:01.789558 systemd-logind[1945]: Removed session 14. Aug 5 22:35:06.780225 systemd[1]: Started sshd@14-172.31.23.20:22-147.75.109.163:37966.service - OpenSSH per-connection server daemon (147.75.109.163:37966). Aug 5 22:35:07.015107 sshd[5893]: Accepted publickey for core from 147.75.109.163 port 37966 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:07.015978 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:07.021454 systemd-logind[1945]: New session 15 of user core. Aug 5 22:35:07.026773 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:35:07.311963 sshd[5893]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:07.317545 systemd[1]: sshd@14-172.31.23.20:22-147.75.109.163:37966.service: Deactivated successfully. Aug 5 22:35:07.323649 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:35:07.326299 systemd-logind[1945]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:35:07.327985 systemd-logind[1945]: Removed session 15. Aug 5 22:35:12.345080 systemd[1]: Started sshd@15-172.31.23.20:22-147.75.109.163:37968.service - OpenSSH per-connection server daemon (147.75.109.163:37968). Aug 5 22:35:12.674494 sshd[5906]: Accepted publickey for core from 147.75.109.163 port 37968 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:12.684546 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:12.700559 systemd-logind[1945]: New session 16 of user core. Aug 5 22:35:12.706665 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:35:13.074865 sshd[5906]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:13.079201 systemd[1]: sshd@15-172.31.23.20:22-147.75.109.163:37968.service: Deactivated successfully. 
Aug 5 22:35:13.083323 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:35:13.086844 systemd-logind[1945]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:35:13.089691 systemd-logind[1945]: Removed session 16. Aug 5 22:35:13.109901 systemd[1]: Started sshd@16-172.31.23.20:22-147.75.109.163:37978.service - OpenSSH per-connection server daemon (147.75.109.163:37978). Aug 5 22:35:13.317131 sshd[5923]: Accepted publickey for core from 147.75.109.163 port 37978 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:13.319788 sshd[5923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:13.326789 systemd-logind[1945]: New session 17 of user core. Aug 5 22:35:13.336349 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:35:14.168368 sshd[5923]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:14.175676 systemd[1]: sshd@16-172.31.23.20:22-147.75.109.163:37978.service: Deactivated successfully. Aug 5 22:35:14.178988 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:35:14.180307 systemd-logind[1945]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:35:14.181835 systemd-logind[1945]: Removed session 17. Aug 5 22:35:14.203403 systemd[1]: Started sshd@17-172.31.23.20:22-147.75.109.163:35522.service - OpenSSH per-connection server daemon (147.75.109.163:35522). Aug 5 22:35:14.389826 sshd[5940]: Accepted publickey for core from 147.75.109.163 port 35522 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:14.391818 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:14.398473 systemd-logind[1945]: New session 18 of user core. Aug 5 22:35:14.402761 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:35:17.364988 sshd[5940]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:17.372415 systemd-logind[1945]: Session 18 logged out. 
Waiting for processes to exit. Aug 5 22:35:17.374829 systemd[1]: sshd@17-172.31.23.20:22-147.75.109.163:35522.service: Deactivated successfully. Aug 5 22:35:17.381258 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:35:17.398194 systemd-logind[1945]: Removed session 18. Aug 5 22:35:17.408996 systemd[1]: Started sshd@18-172.31.23.20:22-147.75.109.163:35538.service - OpenSSH per-connection server daemon (147.75.109.163:35538). Aug 5 22:35:17.654518 sshd[5965]: Accepted publickey for core from 147.75.109.163 port 35538 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:17.660421 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:17.673543 systemd-logind[1945]: New session 19 of user core. Aug 5 22:35:17.685753 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:35:18.083039 systemd[1]: run-containerd-runc-k8s.io-3a7c24e92a5137162e75ef09fcd6a86780c334b0f98bf08748f744987efc0d96-runc.As8t7U.mount: Deactivated successfully. Aug 5 22:35:18.639242 sshd[5965]: pam_unix(sshd:session): session closed for user core Aug 5 22:35:18.644969 systemd[1]: sshd@18-172.31.23.20:22-147.75.109.163:35538.service: Deactivated successfully. Aug 5 22:35:18.648882 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:35:18.650394 systemd-logind[1945]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:35:18.654159 systemd-logind[1945]: Removed session 19. Aug 5 22:35:18.673131 systemd[1]: Started sshd@19-172.31.23.20:22-147.75.109.163:35546.service - OpenSSH per-connection server daemon (147.75.109.163:35546). Aug 5 22:35:18.850555 sshd[5998]: Accepted publickey for core from 147.75.109.163 port 35546 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw Aug 5 22:35:18.852369 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:35:18.857108 systemd-logind[1945]: New session 20 of user core. 
Aug 5 22:35:18.864682 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:35:19.133348 sshd[5998]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:19.139150 systemd[1]: sshd@19-172.31.23.20:22-147.75.109.163:35546.service: Deactivated successfully.
Aug 5 22:35:19.143116 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:35:19.147623 systemd-logind[1945]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:35:19.152257 systemd-logind[1945]: Removed session 20.
Aug 5 22:35:24.176986 systemd[1]: Started sshd@20-172.31.23.20:22-147.75.109.163:50422.service - OpenSSH per-connection server daemon (147.75.109.163:50422).
Aug 5 22:35:24.358338 sshd[6013]: Accepted publickey for core from 147.75.109.163 port 50422 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:24.359168 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:24.367349 systemd-logind[1945]: New session 21 of user core.
Aug 5 22:35:24.370707 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:35:24.581562 sshd[6013]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:24.595951 systemd-logind[1945]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:35:24.600184 systemd[1]: sshd@20-172.31.23.20:22-147.75.109.163:50422.service: Deactivated successfully.
Aug 5 22:35:24.619168 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:35:24.622000 systemd-logind[1945]: Removed session 21.
Aug 5 22:35:27.437481 systemd[1]: run-containerd-runc-k8s.io-0f051f1a655d0864fda7e15c1f7d2aa174e52516abbc9d5f6ff50b97929950bf-runc.WpV8aE.mount: Deactivated successfully.
Aug 5 22:35:29.622935 systemd[1]: Started sshd@21-172.31.23.20:22-147.75.109.163:50428.service - OpenSSH per-connection server daemon (147.75.109.163:50428).
Aug 5 22:35:29.793443 sshd[6056]: Accepted publickey for core from 147.75.109.163 port 50428 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:29.795729 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:29.801166 systemd-logind[1945]: New session 22 of user core.
Aug 5 22:35:29.806657 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:35:30.054974 sshd[6056]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:30.061542 systemd[1]: sshd@21-172.31.23.20:22-147.75.109.163:50428.service: Deactivated successfully.
Aug 5 22:35:30.065168 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:35:30.066012 systemd-logind[1945]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:35:30.067562 systemd-logind[1945]: Removed session 22.
Aug 5 22:35:33.390146 kubelet[3195]: I0805 22:35:33.385960 3195 topology_manager.go:215] "Topology Admit Handler" podUID="aa2d5d3a-316b-4f5b-ae00-47c17570f8dd" podNamespace="calico-apiserver" podName="calico-apiserver-7774f4794d-f6f8p"
Aug 5 22:35:33.466231 systemd[1]: Created slice kubepods-besteffort-podaa2d5d3a_316b_4f5b_ae00_47c17570f8dd.slice - libcontainer container kubepods-besteffort-podaa2d5d3a_316b_4f5b_ae00_47c17570f8dd.slice.
Aug 5 22:35:33.499966 kubelet[3195]: I0805 22:35:33.499907 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa2d5d3a-316b-4f5b-ae00-47c17570f8dd-calico-apiserver-certs\") pod \"calico-apiserver-7774f4794d-f6f8p\" (UID: \"aa2d5d3a-316b-4f5b-ae00-47c17570f8dd\") " pod="calico-apiserver/calico-apiserver-7774f4794d-f6f8p"
Aug 5 22:35:33.500517 kubelet[3195]: I0805 22:35:33.500281 3195 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfnrp\" (UniqueName: \"kubernetes.io/projected/aa2d5d3a-316b-4f5b-ae00-47c17570f8dd-kube-api-access-dfnrp\") pod \"calico-apiserver-7774f4794d-f6f8p\" (UID: \"aa2d5d3a-316b-4f5b-ae00-47c17570f8dd\") " pod="calico-apiserver/calico-apiserver-7774f4794d-f6f8p"
Aug 5 22:35:33.691591 kubelet[3195]: E0805 22:35:33.630461 3195 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found
Aug 5 22:35:33.723377 kubelet[3195]: E0805 22:35:33.723286 3195 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/aa2d5d3a-316b-4f5b-ae00-47c17570f8dd-calico-apiserver-certs podName:aa2d5d3a-316b-4f5b-ae00-47c17570f8dd nodeName:}" failed. No retries permitted until 2024-08-05 22:35:34.198000974 +0000 UTC m=+112.280138365 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/aa2d5d3a-316b-4f5b-ae00-47c17570f8dd-calico-apiserver-certs") pod "calico-apiserver-7774f4794d-f6f8p" (UID: "aa2d5d3a-316b-4f5b-ae00-47c17570f8dd") : secret "calico-apiserver-certs" not found
Aug 5 22:35:34.388017 containerd[1959]: time="2024-08-05T22:35:34.387966880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774f4794d-f6f8p,Uid:aa2d5d3a-316b-4f5b-ae00-47c17570f8dd,Namespace:calico-apiserver,Attempt:0,}"
Aug 5 22:35:34.849498 systemd-networkd[1807]: calic39b2eed9ee: Link UP
Aug 5 22:35:34.849789 systemd-networkd[1807]: calic39b2eed9ee: Gained carrier
Aug 5 22:35:34.857796 (udev-worker)[6093]: Network interface NamePolicy= disabled on kernel command line.
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.728 [INFO][6074] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0 calico-apiserver-7774f4794d- calico-apiserver aa2d5d3a-316b-4f5b-ae00-47c17570f8dd 1159 0 2024-08-05 22:35:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7774f4794d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-20 calico-apiserver-7774f4794d-f6f8p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic39b2eed9ee [] []}} ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.729 [INFO][6074] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.791 [INFO][6085] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" HandleID="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Workload="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.804 [INFO][6085] ipam_plugin.go 264: Auto assigning IP ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" HandleID="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Workload="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a180), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-20", "pod":"calico-apiserver-7774f4794d-f6f8p", "timestamp":"2024-08-05 22:35:34.791332298 +0000 UTC"}, Hostname:"ip-172-31-23-20", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.804 [INFO][6085] ipam_plugin.go 352: About to acquire host-wide IPAM lock.
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.804 [INFO][6085] ipam_plugin.go 367: Acquired host-wide IPAM lock.
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.804 [INFO][6085] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-20'
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.807 [INFO][6085] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.813 [INFO][6085] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.819 [INFO][6085] ipam.go 489: Trying affinity for 192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.822 [INFO][6085] ipam.go 155: Attempting to load block cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.825 [INFO][6085] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.109.192/26 host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.826 [INFO][6085] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.109.192/26 handle="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.827 [INFO][6085] ipam.go 1685: Creating new handle: k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.831 [INFO][6085] ipam.go 1203: Writing block in order to claim IPs block=192.168.109.192/26 handle="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.838 [INFO][6085] ipam.go 1216: Successfully claimed IPs: [192.168.109.197/26] block=192.168.109.192/26 handle="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.839 [INFO][6085] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.109.197/26] handle="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" host="ip-172-31-23-20"
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.839 [INFO][6085] ipam_plugin.go 373: Released host-wide IPAM lock.
Aug 5 22:35:34.950549 containerd[1959]: 2024-08-05 22:35:34.839 [INFO][6085] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.109.197/26] IPv6=[] ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" HandleID="k8s-pod-network.ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Workload="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.842 [INFO][6074] k8s.go 386: Populated endpoint ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0", GenerateName:"calico-apiserver-7774f4794d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2d5d3a-316b-4f5b-ae00-47c17570f8dd", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 35, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774f4794d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"", Pod:"calico-apiserver-7774f4794d-f6f8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic39b2eed9ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.843 [INFO][6074] k8s.go 387: Calico CNI using IPs: [192.168.109.197/32] ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.843 [INFO][6074] dataplane_linux.go 68: Setting the host side veth name to calic39b2eed9ee ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.850 [INFO][6074] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.850 [INFO][6074] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0", GenerateName:"calico-apiserver-7774f4794d-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa2d5d3a-316b-4f5b-ae00-47c17570f8dd", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 35, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7774f4794d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-20", ContainerID:"ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25", Pod:"calico-apiserver-7774f4794d-f6f8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic39b2eed9ee", MAC:"e6:4c:4b:e8:10:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Aug 5 22:35:34.956755 containerd[1959]: 2024-08-05 22:35:34.925 [INFO][6074] k8s.go 500: Wrote updated endpoint to datastore ContainerID="ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25" Namespace="calico-apiserver" Pod="calico-apiserver-7774f4794d-f6f8p" WorkloadEndpoint="ip--172--31--23--20-k8s-calico--apiserver--7774f4794d--f6f8p-eth0"
Aug 5 22:35:35.036695 containerd[1959]: time="2024-08-05T22:35:35.035583918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Aug 5 22:35:35.036695 containerd[1959]: time="2024-08-05T22:35:35.035678524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:35:35.036695 containerd[1959]: time="2024-08-05T22:35:35.035709025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Aug 5 22:35:35.036695 containerd[1959]: time="2024-08-05T22:35:35.035729129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 5 22:35:35.112761 systemd[1]: Started cri-containerd-ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25.scope - libcontainer container ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25.
Aug 5 22:35:35.141944 systemd[1]: Started sshd@22-172.31.23.20:22-147.75.109.163:35598.service - OpenSSH per-connection server daemon (147.75.109.163:35598).
Aug 5 22:35:35.310936 containerd[1959]: time="2024-08-05T22:35:35.310881033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7774f4794d-f6f8p,Uid:aa2d5d3a-316b-4f5b-ae00-47c17570f8dd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25\""
Aug 5 22:35:35.340280 containerd[1959]: time="2024-08-05T22:35:35.340164661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\""
Aug 5 22:35:35.404558 sshd[6137]: Accepted publickey for core from 147.75.109.163 port 35598 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:35.411198 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:35.436280 systemd-logind[1945]: New session 23 of user core.
Aug 5 22:35:35.447159 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:35:36.139878 systemd-networkd[1807]: calic39b2eed9ee: Gained IPv6LL
Aug 5 22:35:36.367069 sshd[6137]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:36.372526 systemd[1]: sshd@22-172.31.23.20:22-147.75.109.163:35598.service: Deactivated successfully.
Aug 5 22:35:36.376389 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:35:36.379012 systemd-logind[1945]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:35:36.381388 systemd-logind[1945]: Removed session 23.
Aug 5 22:35:38.686864 ntpd[1940]: Listen normally on 13 calic39b2eed9ee [fe80::ecee:eeff:feee:eeee%11]:123
Aug 5 22:35:38.694432 ntpd[1940]: 5 Aug 22:35:38 ntpd[1940]: Listen normally on 13 calic39b2eed9ee [fe80::ecee:eeff:feee:eeee%11]:123
Aug 5 22:35:40.324182 containerd[1959]: time="2024-08-05T22:35:40.323555123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260"
Aug 5 22:35:40.332281 containerd[1959]: time="2024-08-05T22:35:40.332230137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.991729036s"
Aug 5 22:35:40.332281 containerd[1959]: time="2024-08-05T22:35:40.332280694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\""
Aug 5 22:35:40.341658 containerd[1959]: time="2024-08-05T22:35:40.341602997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:35:40.375500 containerd[1959]: time="2024-08-05T22:35:40.375326013Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:35:40.379101 containerd[1959]: time="2024-08-05T22:35:40.378638012Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 5 22:35:40.415090 containerd[1959]: time="2024-08-05T22:35:40.414684220Z" level=info msg="CreateContainer within sandbox \"ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Aug 5 22:35:40.622315 containerd[1959]: time="2024-08-05T22:35:40.622193513Z" level=info msg="CreateContainer within sandbox \"ee16ea7d803f1816c0678119466bf9f5f802af91c848299efdebc1105471ff25\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a580f0cc3464c2fd498ac06695de365417677ec656e3950f72b37982c96ef487\""
Aug 5 22:35:40.623173 containerd[1959]: time="2024-08-05T22:35:40.623039058Z" level=info msg="StartContainer for \"a580f0cc3464c2fd498ac06695de365417677ec656e3950f72b37982c96ef487\""
Aug 5 22:35:40.748534 systemd[1]: Started cri-containerd-a580f0cc3464c2fd498ac06695de365417677ec656e3950f72b37982c96ef487.scope - libcontainer container a580f0cc3464c2fd498ac06695de365417677ec656e3950f72b37982c96ef487.
Aug 5 22:35:40.826991 containerd[1959]: time="2024-08-05T22:35:40.826823745Z" level=info msg="StartContainer for \"a580f0cc3464c2fd498ac06695de365417677ec656e3950f72b37982c96ef487\" returns successfully"
Aug 5 22:35:41.423629 systemd[1]: Started sshd@23-172.31.23.20:22-147.75.109.163:35600.service - OpenSSH per-connection server daemon (147.75.109.163:35600).
Aug 5 22:35:41.653665 sshd[6222]: Accepted publickey for core from 147.75.109.163 port 35600 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:41.657232 sshd[6222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:41.664706 systemd-logind[1945]: New session 24 of user core.
Aug 5 22:35:41.672767 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:35:43.005116 sshd[6222]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:43.024551 systemd[1]: sshd@23-172.31.23.20:22-147.75.109.163:35600.service: Deactivated successfully.
Aug 5 22:35:43.026759 systemd-logind[1945]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:35:43.038570 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:35:43.043555 systemd-logind[1945]: Removed session 24.
Aug 5 22:35:43.078004 kubelet[3195]: I0805 22:35:43.068353 3195 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7774f4794d-f6f8p" podStartSLOduration=5.002212189 podStartE2EDuration="10.046220225s" podCreationTimestamp="2024-08-05 22:35:33 +0000 UTC" firstStartedPulling="2024-08-05 22:35:35.331956961 +0000 UTC m=+113.414094344" lastFinishedPulling="2024-08-05 22:35:40.375965004 +0000 UTC m=+118.458102380" observedRunningTime="2024-08-05 22:35:41.067969346 +0000 UTC m=+119.150106738" watchObservedRunningTime="2024-08-05 22:35:43.046220225 +0000 UTC m=+121.128357618"
Aug 5 22:35:48.042945 systemd[1]: Started sshd@24-172.31.23.20:22-147.75.109.163:57054.service - OpenSSH per-connection server daemon (147.75.109.163:57054).
Aug 5 22:35:48.216683 sshd[6266]: Accepted publickey for core from 147.75.109.163 port 57054 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:48.218591 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:48.225271 systemd-logind[1945]: New session 25 of user core.
Aug 5 22:35:48.234871 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:35:48.531759 sshd[6266]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:48.539090 systemd[1]: sshd@24-172.31.23.20:22-147.75.109.163:57054.service: Deactivated successfully.
Aug 5 22:35:48.542035 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:35:48.545686 systemd-logind[1945]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:35:48.547076 systemd-logind[1945]: Removed session 25.
Aug 5 22:35:53.577735 systemd[1]: Started sshd@25-172.31.23.20:22-147.75.109.163:57056.service - OpenSSH per-connection server daemon (147.75.109.163:57056).
Aug 5 22:35:53.797621 sshd[6300]: Accepted publickey for core from 147.75.109.163 port 57056 ssh2: RSA SHA256:8mVYG1EE6TvyH1P+hHOwxp/5fDCl4ZJSIIW+VaOgwvw
Aug 5 22:35:53.802894 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:35:53.816868 systemd-logind[1945]: New session 26 of user core.
Aug 5 22:35:53.824717 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:35:54.284298 sshd[6300]: pam_unix(sshd:session): session closed for user core
Aug 5 22:35:54.292198 systemd[1]: sshd@25-172.31.23.20:22-147.75.109.163:57056.service: Deactivated successfully.
Aug 5 22:35:54.298991 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:35:54.301334 systemd-logind[1945]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:35:54.306130 systemd-logind[1945]: Removed session 26.
Aug 5 22:36:09.495902 systemd[1]: cri-containerd-5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64.scope: Deactivated successfully.
Aug 5 22:36:09.496261 systemd[1]: cri-containerd-5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64.scope: Consumed 3.399s CPU time, 22.3M memory peak, 0B memory swap peak.
Aug 5 22:36:09.579268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64-rootfs.mount: Deactivated successfully.
Aug 5 22:36:09.591747 containerd[1959]: time="2024-08-05T22:36:09.572667265Z" level=info msg="shim disconnected" id=5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64 namespace=k8s.io
Aug 5 22:36:09.592994 containerd[1959]: time="2024-08-05T22:36:09.591750192Z" level=warning msg="cleaning up after shim disconnected" id=5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64 namespace=k8s.io
Aug 5 22:36:09.592994 containerd[1959]: time="2024-08-05T22:36:09.591771173Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:36:09.937506 systemd[1]: cri-containerd-9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c.scope: Deactivated successfully.
Aug 5 22:36:09.938188 systemd[1]: cri-containerd-9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c.scope: Consumed 6.623s CPU time.
Aug 5 22:36:10.018767 containerd[1959]: time="2024-08-05T22:36:10.011248763Z" level=info msg="shim disconnected" id=9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c namespace=k8s.io
Aug 5 22:36:10.018767 containerd[1959]: time="2024-08-05T22:36:10.011319032Z" level=warning msg="cleaning up after shim disconnected" id=9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c namespace=k8s.io
Aug 5 22:36:10.018767 containerd[1959]: time="2024-08-05T22:36:10.011331423Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:36:10.027828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c-rootfs.mount: Deactivated successfully.
Aug 5 22:36:10.044942 containerd[1959]: time="2024-08-05T22:36:10.044890966Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:36:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Aug 5 22:36:10.160692 kubelet[3195]: I0805 22:36:10.160657 3195 scope.go:117] "RemoveContainer" containerID="9d799cab2cf32b682522c9c271bc3469b3057ff7e4305912b9ba4dfb601ad86c"
Aug 5 22:36:10.172492 kubelet[3195]: I0805 22:36:10.172385 3195 scope.go:117] "RemoveContainer" containerID="5f3c263610bb2d34ca0cfdbec4ce10dee32f17433336decb50e8b40f3cb47d64"
Aug 5 22:36:10.214560 containerd[1959]: time="2024-08-05T22:36:10.213991325Z" level=info msg="CreateContainer within sandbox \"112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 22:36:10.215967 containerd[1959]: time="2024-08-05T22:36:10.215806223Z" level=info msg="CreateContainer within sandbox \"79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 22:36:10.240306 containerd[1959]: time="2024-08-05T22:36:10.240260820Z" level=info msg="CreateContainer within sandbox \"112dd1c2940f77be42a91ddcbb416376387be7bd1dbef97840e4c5c2b156b1e0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"bf8e928ed49d2a16cab3b1925c5b63b47a7a5e3f8a3d0516417ad020e5441972\""
Aug 5 22:36:10.244507 containerd[1959]: time="2024-08-05T22:36:10.242698554Z" level=info msg="StartContainer for \"bf8e928ed49d2a16cab3b1925c5b63b47a7a5e3f8a3d0516417ad020e5441972\""
Aug 5 22:36:10.252046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174823047.mount: Deactivated successfully.
Aug 5 22:36:10.261528 containerd[1959]: time="2024-08-05T22:36:10.261294999Z" level=info msg="CreateContainer within sandbox \"79e5a87994da8be63fa399526e74061d6d3d99f3f8a5e7ddb2cb537baa308148\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"e68f828c0ea5c82c7832d7f17919c18c2c9fdddbf52e935809ef6012e388fedb\""
Aug 5 22:36:10.262816 containerd[1959]: time="2024-08-05T22:36:10.262782510Z" level=info msg="StartContainer for \"e68f828c0ea5c82c7832d7f17919c18c2c9fdddbf52e935809ef6012e388fedb\""
Aug 5 22:36:10.304888 systemd[1]: Started cri-containerd-bf8e928ed49d2a16cab3b1925c5b63b47a7a5e3f8a3d0516417ad020e5441972.scope - libcontainer container bf8e928ed49d2a16cab3b1925c5b63b47a7a5e3f8a3d0516417ad020e5441972.
Aug 5 22:36:10.328062 systemd[1]: Started cri-containerd-e68f828c0ea5c82c7832d7f17919c18c2c9fdddbf52e935809ef6012e388fedb.scope - libcontainer container e68f828c0ea5c82c7832d7f17919c18c2c9fdddbf52e935809ef6012e388fedb.
Aug 5 22:36:10.379053 containerd[1959]: time="2024-08-05T22:36:10.378838536Z" level=info msg="StartContainer for \"bf8e928ed49d2a16cab3b1925c5b63b47a7a5e3f8a3d0516417ad020e5441972\" returns successfully"
Aug 5 22:36:10.422567 containerd[1959]: time="2024-08-05T22:36:10.422514565Z" level=info msg="StartContainer for \"e68f828c0ea5c82c7832d7f17919c18c2c9fdddbf52e935809ef6012e388fedb\" returns successfully"
Aug 5 22:36:14.418833 systemd[1]: cri-containerd-1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731.scope: Deactivated successfully.
Aug 5 22:36:14.421560 systemd[1]: cri-containerd-1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731.scope: Consumed 1.560s CPU time, 16.5M memory peak, 0B memory swap peak.
Aug 5 22:36:14.510532 containerd[1959]: time="2024-08-05T22:36:14.509241015Z" level=info msg="shim disconnected" id=1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731 namespace=k8s.io
Aug 5 22:36:14.513041 containerd[1959]: time="2024-08-05T22:36:14.512654614Z" level=warning msg="cleaning up after shim disconnected" id=1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731 namespace=k8s.io
Aug 5 22:36:14.513041 containerd[1959]: time="2024-08-05T22:36:14.512708426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:36:14.512534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731-rootfs.mount: Deactivated successfully.
Aug 5 22:36:15.182139 kubelet[3195]: I0805 22:36:15.182101 3195 scope.go:117] "RemoveContainer" containerID="1f14e8c22fcf75e1b4b27322adc6e7e01d95ccde0a649a6436e69ba0e6444731"
Aug 5 22:36:15.193978 containerd[1959]: time="2024-08-05T22:36:15.193915415Z" level=info msg="CreateContainer within sandbox \"952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 22:36:15.238295 containerd[1959]: time="2024-08-05T22:36:15.237910654Z" level=info msg="CreateContainer within sandbox \"952a882f706851d73f0f346049df4c283864be80396d4aeaa0e144e822afb7f6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"7bbb9bc7c79db67974651ef48c19bfcb22124f271733b02374cd79c8a6f7c323\""
Aug 5 22:36:15.239421 containerd[1959]: time="2024-08-05T22:36:15.239384968Z" level=info msg="StartContainer for \"7bbb9bc7c79db67974651ef48c19bfcb22124f271733b02374cd79c8a6f7c323\""
Aug 5 22:36:15.327283 systemd[1]: Started cri-containerd-7bbb9bc7c79db67974651ef48c19bfcb22124f271733b02374cd79c8a6f7c323.scope - libcontainer container 7bbb9bc7c79db67974651ef48c19bfcb22124f271733b02374cd79c8a6f7c323.
Aug 5 22:36:15.444436 containerd[1959]: time="2024-08-05T22:36:15.444291964Z" level=info msg="StartContainer for \"7bbb9bc7c79db67974651ef48c19bfcb22124f271733b02374cd79c8a6f7c323\" returns successfully"
Aug 5 22:36:15.850734 kubelet[3195]: E0805 22:36:15.847083 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:36:25.859737 kubelet[3195]: E0805 22:36:25.859668 3195 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-20?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"