Aug 5 22:12:27.089914 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024 Aug 5 22:12:27.089959 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:12:27.089976 kernel: BIOS-provided physical RAM map: Aug 5 22:12:27.089988 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Aug 5 22:12:27.089999 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Aug 5 22:12:27.090011 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 5 22:12:27.090028 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Aug 5 22:12:27.090041 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Aug 5 22:12:27.090053 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Aug 5 22:12:27.090065 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 5 22:12:27.090077 kernel: NX (Execute Disable) protection: active Aug 5 22:12:27.090567 kernel: APIC: Static calls initialized Aug 5 22:12:27.090578 kernel: SMBIOS 2.7 present. Aug 5 22:12:27.090586 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Aug 5 22:12:27.090600 kernel: Hypervisor detected: KVM Aug 5 22:12:27.090608 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 5 22:12:27.090616 kernel: kvm-clock: using sched offset of 5989680211 cycles Aug 5 22:12:27.090625 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 5 22:12:27.090633 kernel: tsc: Detected 2499.996 MHz processor Aug 5 22:12:27.090641 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:12:27.090649 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:12:27.090659 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Aug 5 22:12:27.090667 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 5 22:12:27.090675 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:12:27.090682 kernel: Using GB pages for direct mapping Aug 5 22:12:27.090690 kernel: ACPI: Early table checksum verification disabled Aug 5 22:12:27.090698 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Aug 5 22:12:27.090705 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Aug 5 22:12:27.090713 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Aug 5 22:12:27.090721 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Aug 5 22:12:27.090732 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Aug 5 22:12:27.090740 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Aug 5 22:12:27.090747 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Aug 5 22:12:27.090860 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Aug 5 22:12:27.090877 kernel: ACPI: SLIT 
0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Aug 5 22:12:27.090892 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Aug 5 22:12:27.090906 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Aug 5 22:12:27.090920 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Aug 5 22:12:27.090942 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Aug 5 22:12:27.090956 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Aug 5 22:12:27.090977 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Aug 5 22:12:27.090992 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Aug 5 22:12:27.091006 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Aug 5 22:12:27.091022 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Aug 5 22:12:27.091040 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Aug 5 22:12:27.091054 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Aug 5 22:12:27.091069 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Aug 5 22:12:27.091084 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Aug 5 22:12:27.091099 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:12:27.093190 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:12:27.093212 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Aug 5 22:12:27.093228 kernel: NUMA: Initialized distance table, cnt=1 Aug 5 22:12:27.093243 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Aug 5 22:12:27.093266 kernel: Zone ranges: Aug 5 22:12:27.093281 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:12:27.093296 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Aug 5 22:12:27.093310 kernel: Normal empty Aug 5 22:12:27.093325 kernel: Movable zone start for each node Aug 5 22:12:27.093340 kernel: Early memory node ranges Aug 5 22:12:27.093355 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 5 22:12:27.093369 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Aug 5 22:12:27.093384 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Aug 5 22:12:27.093402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:12:27.093435 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 5 22:12:27.093449 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Aug 5 22:12:27.093464 kernel: ACPI: PM-Timer IO Port: 0xb008 Aug 5 22:12:27.093479 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 5 22:12:27.093493 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Aug 5 22:12:27.093509 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 5 22:12:27.093523 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:12:27.093538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 5 22:12:27.093556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 5 22:12:27.093571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:12:27.093585 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 5 22:12:27.093600 kernel: TSC deadline timer available Aug 5 22:12:27.093614 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:12:27.093629 kernel: kvm-guest: APIC: eoi() replaced with 
kvm_guest_apic_eoi_write() Aug 5 22:12:27.093644 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Aug 5 22:12:27.093659 kernel: Booting paravirtualized kernel on KVM Aug 5 22:12:27.093674 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:12:27.093692 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:12:27.093707 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:12:27.093722 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:12:27.093736 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:12:27.093750 kernel: kvm-guest: PV spinlocks enabled Aug 5 22:12:27.093766 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:12:27.093783 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:12:27.093798 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:12:27.093815 kernel: random: crng init done Aug 5 22:12:27.093829 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 22:12:27.093845 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:12:27.093859 kernel: Fallback order for Node 0: 0 Aug 5 22:12:27.093874 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Aug 5 22:12:27.093889 kernel: Policy zone: DMA32 Aug 5 22:12:27.093904 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:12:27.093919 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved) Aug 5 22:12:27.093934 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:12:27.093952 kernel: Kernel/User page tables isolation: enabled Aug 5 22:12:27.093967 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:12:27.093982 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:12:27.093996 kernel: Dynamic Preempt: voluntary Aug 5 22:12:27.094011 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:12:27.094027 kernel: rcu: RCU event tracing is enabled. Aug 5 22:12:27.094042 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:12:27.094058 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:12:27.094073 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:12:27.094087 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:12:27.095152 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 22:12:27.095177 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:12:27.095193 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 5 22:12:27.095208 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Aug 5 22:12:27.095223 kernel: Console: colour VGA+ 80x25 Aug 5 22:12:27.095238 kernel: printk: console [ttyS0] enabled Aug 5 22:12:27.095253 kernel: ACPI: Core revision 20230628 Aug 5 22:12:27.095268 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Aug 5 22:12:27.095282 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:12:27.095559 kernel: x2apic enabled Aug 5 22:12:27.095576 kernel: APIC: Switched APIC routing to: physical x2apic Aug 5 22:12:27.095605 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Aug 5 22:12:27.095623 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Aug 5 22:12:27.095638 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 5 22:12:27.095655 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Aug 5 22:12:27.095670 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:12:27.095686 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:12:27.095701 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:12:27.095716 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:12:27.095731 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Aug 5 22:12:27.095747 kernel: RETBleed: Vulnerable Aug 5 22:12:27.095768 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:12:27.095784 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:12:27.095799 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:12:27.095814 kernel: GDS: Unknown: Dependent on hypervisor status Aug 5 22:12:27.095830 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:12:27.095845 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:12:27.095863 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:12:27.095878 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Aug 5 22:12:27.095894 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Aug 5 22:12:27.095920 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 5 22:12:27.095935 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 5 22:12:27.095950 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 5 22:12:27.095966 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 5 22:12:27.095981 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:12:27.095997 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Aug 5 22:12:27.096012 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Aug 5 22:12:27.096028 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Aug 5 22:12:27.096047 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Aug 5 22:12:27.096062 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Aug 5 22:12:27.096077 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Aug 5 22:12:27.096092 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Aug 5 22:12:27.096228 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:12:27.096244 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:12:27.096259 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:12:27.096275 kernel: SELinux: Initializing. Aug 5 22:12:27.096291 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 5 22:12:27.096306 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 5 22:12:27.096322 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 5 22:12:27.096338 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:12:27.096358 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:12:27.096374 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:12:27.096390 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 5 22:12:27.096406 kernel: signal: max sigframe size: 3632 Aug 5 22:12:27.096422 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:12:27.096439 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:12:27.096455 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:12:27.096471 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:12:27.096487 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:12:27.096506 kernel: .... node #0, CPUs: #1 Aug 5 22:12:27.096523 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Aug 5 22:12:27.096540 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Aug 5 22:12:27.096556 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:12:27.096571 kernel: smpboot: Max logical packages: 1 Aug 5 22:12:27.096587 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Aug 5 22:12:27.096666 kernel: devtmpfs: initialized Aug 5 22:12:27.096684 kernel: x86/mm: Memory block size: 128MB Aug 5 22:12:27.096704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:12:27.096720 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:12:27.096736 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:12:27.096751 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:12:27.096767 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:12:27.096783 kernel: audit: type=2000 audit(1722895946.204:1): state=initialized audit_enabled=0 res=1 Aug 5 22:12:27.096799 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:12:27.096814 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:12:27.096830 kernel: cpuidle: using governor menu Aug 5 22:12:27.096849 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:12:27.096865 kernel: dca service started, version 1.12.1 Aug 5 22:12:27.096881 kernel: PCI: Using configuration type 1 for base access Aug 5 22:12:27.096897 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
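The mitigation status lines above (Spectre V1/V2, RETBleed, MDS, MMIO Stale Data, GDS) are also exposed at runtime through sysfs. A minimal illustrative sketch for reading them back on a booted system, assuming only the standard /sys/devices/system/cpu/vulnerabilities/ directory (not part of the original log):

```python
# Illustrative only: read back the CPU vulnerability/mitigation status the
# kernel reports during boot. The sysfs directory is standard on Linux; the
# exact set of entries depends on kernel version and CPU.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def cpu_vulnerability_report() -> dict:
    """Map each vulnerability name (e.g. 'retbleed', 'mds') to its status line."""
    report = {}
    for entry in sorted(VULN_DIR.iterdir()):
        report[entry.name] = entry.read_text().strip()
    return report

if __name__ == "__main__":
    for name, status in cpu_vulnerability_report().items():
        print(f"{name}: {status}")
```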
Aug 5 22:12:27.096913 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:12:27.096929 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:12:27.096944 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:12:27.096960 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:12:27.096975 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:12:27.096995 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:12:27.097011 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:12:27.097026 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:12:27.097042 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Aug 5 22:12:27.097058 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:12:27.097074 kernel: ACPI: Interpreter enabled Aug 5 22:12:27.097089 kernel: ACPI: PM: (supports S0 S5) Aug 5 22:12:27.098144 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:12:27.098172 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:12:27.098196 kernel: PCI: Using E820 reservations for host bridge windows Aug 5 22:12:27.098213 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Aug 5 22:12:27.098229 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 5 22:12:27.098493 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Aug 5 22:12:27.098640 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Aug 5 22:12:27.098775 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Aug 5 22:12:27.098796 kernel: acpiphp: Slot [3] registered Aug 5 22:12:27.098816 kernel: acpiphp: Slot [4] registered Aug 5 22:12:27.098833 kernel: acpiphp: Slot [5] registered Aug 5 22:12:27.098849 kernel: acpiphp: Slot [6] registered Aug 5 22:12:27.098864 kernel: acpiphp: Slot [7] registered Aug 5 22:12:27.098879 kernel: acpiphp: Slot [8] registered Aug 5 22:12:27.098896 kernel: acpiphp: Slot [9] registered Aug 5 22:12:27.098912 kernel: acpiphp: Slot [10] registered Aug 5 22:12:27.098928 kernel: acpiphp: Slot [11] registered Aug 5 22:12:27.098943 kernel: acpiphp: Slot [12] registered Aug 5 22:12:27.098959 kernel: acpiphp: Slot [13] registered Aug 5 22:12:27.098978 kernel: acpiphp: Slot [14] registered Aug 5 22:12:27.098994 kernel: acpiphp: Slot [15] registered Aug 5 22:12:27.099010 kernel: acpiphp: Slot [16] registered Aug 5 22:12:27.099025 kernel: acpiphp: Slot [17] registered Aug 5 22:12:27.099041 kernel: acpiphp: Slot [18] registered Aug 5 22:12:27.099056 kernel: acpiphp: Slot [19] registered Aug 5 22:12:27.099072 kernel: acpiphp: Slot [20] registered Aug 5 22:12:27.099088 kernel: acpiphp: Slot [21] registered Aug 5 22:12:27.099126 kernel: acpiphp: Slot [22] registered Aug 5 22:12:27.099146 kernel: acpiphp: Slot [23] registered Aug 5 22:12:27.099162 kernel: acpiphp: Slot [24] registered Aug 5 22:12:27.099178 kernel: acpiphp: Slot [25] registered Aug 5 22:12:27.099193 kernel: acpiphp: Slot [26] registered Aug 5 22:12:27.099209 kernel: acpiphp: Slot [27] registered Aug 5 22:12:27.099225 kernel: acpiphp: Slot [28] registered Aug 5 22:12:27.099240 kernel: acpiphp: Slot [29] registered Aug 5 22:12:27.099256 kernel: acpiphp: Slot [30] registered Aug 5 22:12:27.099271 kernel: acpiphp: Slot [31] registered Aug 5 22:12:27.099287 kernel: PCI host bridge to bus 0000:00 Aug 5 22:12:27.099437 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Aug 5 22:12:27.099565 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 5 22:12:27.099692 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 5 22:12:27.099813 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Aug 5 22:12:27.099935 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 5 22:12:27.100090 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Aug 5 22:12:27.104361 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Aug 5 22:12:27.104525 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Aug 5 22:12:27.104738 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Aug 5 22:12:27.104894 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Aug 5 22:12:27.105034 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Aug 5 22:12:27.105201 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Aug 5 22:12:27.105342 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Aug 5 22:12:27.105498 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Aug 5 22:12:27.105636 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Aug 5 22:12:27.105773 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Aug 5 22:12:27.105921 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Aug 5 22:12:27.106060 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Aug 5 22:12:27.108279 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Aug 5 22:12:27.108435 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 5 22:12:27.108656 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Aug 5 22:12:27.108802 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Aug 5 22:12:27.108943 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Aug 5 22:12:27.109069 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Aug 5 22:12:27.109086 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 5 22:12:27.109100 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 5 22:12:27.109136 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 5 22:12:27.109156 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 5 22:12:27.109169 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Aug 5 22:12:27.109183 kernel: iommu: Default domain type: Translated Aug 5 22:12:27.109196 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:12:27.109211 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:12:27.109224 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 5 22:12:27.109237 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Aug 5 22:12:27.109250 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Aug 5 22:12:27.110391 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Aug 5 22:12:27.110563 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Aug 5 22:12:27.110836 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 5 22:12:27.110862 kernel: vgaarb: loaded Aug 5 22:12:27.110880 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Aug 5 22:12:27.110896 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Aug 5 22:12:27.110914 kernel: clocksource: Switched to clocksource kvm-clock Aug 5 22:12:27.110930 kernel: VFS: Disk quotas dquot_6.6.0 Aug 
5 22:12:27.110946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:12:27.110968 kernel: pnp: PnP ACPI init Aug 5 22:12:27.110984 kernel: pnp: PnP ACPI: found 5 devices Aug 5 22:12:27.111000 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:12:27.111017 kernel: NET: Registered PF_INET protocol family Aug 5 22:12:27.111033 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 22:12:27.111050 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 5 22:12:27.111066 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:12:27.111084 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:12:27.111099 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Aug 5 22:12:27.111151 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 5 22:12:27.111168 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 5 22:12:27.111184 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 5 22:12:27.111201 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:12:27.111217 kernel: NET: Registered PF_XDP protocol family Aug 5 22:12:27.111356 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 5 22:12:27.111480 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 5 22:12:27.111635 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 5 22:12:27.111919 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Aug 5 22:12:27.112079 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 5 22:12:27.112100 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:12:27.113230 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:12:27.113254 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Aug 5 22:12:27.113271 kernel: clocksource: Switched to clocksource tsc Aug 5 22:12:27.113287 kernel: Initialise system trusted keyrings Aug 5 22:12:27.113303 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 5 22:12:27.113319 kernel: Key type asymmetric registered Aug 5 22:12:27.113339 kernel: Asymmetric key parser 'x509' registered Aug 5 22:12:27.113355 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:12:27.113371 kernel: io scheduler mq-deadline registered Aug 5 22:12:27.113387 kernel: io scheduler kyber registered Aug 5 22:12:27.113402 kernel: io scheduler bfq registered Aug 5 22:12:27.113426 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:12:27.113443 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:12:27.113458 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:12:27.113475 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 5 22:12:27.113494 kernel: i8042: Warning: Keylock active Aug 5 22:12:27.113509 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 5 22:12:27.113525 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 5 22:12:27.113694 kernel: rtc_cmos 00:00: RTC can wake from S4 Aug 5 22:12:27.113824 kernel: rtc_cmos 00:00: registered as rtc0 Aug 5 22:12:27.113951 kernel: rtc_cmos 00:00: setting system clock to 2024-08-05T22:12:26 UTC (1722895946) Aug 5 22:12:27.114074 kernel: rtc_cmos 
00:00: alarms up to one day, 114 bytes nvram Aug 5 22:12:27.114097 kernel: intel_pstate: CPU model not supported Aug 5 22:12:27.115201 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:12:27.115220 kernel: Segment Routing with IPv6 Aug 5 22:12:27.115236 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:12:27.115252 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:12:27.115268 kernel: Key type dns_resolver registered Aug 5 22:12:27.115284 kernel: IPI shorthand broadcast: enabled Aug 5 22:12:27.115300 kernel: sched_clock: Marking stable (575003433, 242231054)->(897455405, -80220918) Aug 5 22:12:27.115316 kernel: registered taskstats version 1 Aug 5 22:12:27.115331 kernel: Loading compiled-in X.509 certificates Aug 5 22:12:27.115352 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55' Aug 5 22:12:27.115367 kernel: Key type .fscrypt registered Aug 5 22:12:27.115382 kernel: Key type fscrypt-provisioning registered Aug 5 22:12:27.115397 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:12:27.115412 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:12:27.115429 kernel: ima: No architecture policies found Aug 5 22:12:27.115444 kernel: clk: Disabling unused clocks Aug 5 22:12:27.115460 kernel: Freeing unused kernel image (initmem) memory: 49328K Aug 5 22:12:27.115479 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:12:27.115495 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:12:27.115510 kernel: Run /init as init process Aug 5 22:12:27.115526 kernel: with arguments: Aug 5 22:12:27.115542 kernel: /init Aug 5 22:12:27.115558 kernel: with environment: Aug 5 22:12:27.115573 kernel: HOME=/ Aug 5 22:12:27.115588 kernel: TERM=linux Aug 5 22:12:27.115603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:12:27.115622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:12:27.115645 systemd[1]: Detected virtualization amazon. Aug 5 22:12:27.115680 systemd[1]: Detected architecture x86-64. Aug 5 22:12:27.115697 systemd[1]: Running in initrd. Aug 5 22:12:27.115713 systemd[1]: No hostname configured, using default hostname. Aug 5 22:12:27.115733 systemd[1]: Hostname set to . Aug 5 22:12:27.115751 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:12:27.115768 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:12:27.115785 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:12:27.115803 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:12:27.115821 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:12:27.115838 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:12:27.115855 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:12:27.115876 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Aug 5 22:12:27.115896 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:12:27.115913 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:12:27.115930 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:12:27.115948 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:12:27.115964 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:12:27.115982 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:12:27.116002 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:12:27.116019 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:12:27.116036 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:12:27.116054 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:12:27.116071 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:12:27.116088 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:12:27.116116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:12:27.117305 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:12:27.117324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:12:27.117348 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:12:27.117365 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:12:27.117383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:12:27.117401 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:12:27.117425 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:12:27.117442 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:12:27.117460 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 5 22:12:27.117482 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:12:27.117499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:27.117517 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:12:27.117534 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:12:27.117552 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:12:27.117574 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:12:27.117631 systemd-journald[178]: Collecting audit messages is disabled. Aug 5 22:12:27.117673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:12:27.117691 systemd-journald[178]: Journal started Aug 5 22:12:27.117726 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2b78db73f440e95c83aa827f1326b8) is 4.8M, max 38.6M, 33.7M free. Aug 5 22:12:27.076801 systemd-modules-load[179]: Inserted module 'overlay' Aug 5 22:12:27.121333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:12:27.127333 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:12:27.138183 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Aug 5 22:12:27.146636 kernel: Bridge firewalling registered Aug 5 22:12:27.145919 systemd-modules-load[179]: Inserted module 'br_netfilter' Aug 5 22:12:27.156748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:12:27.268580 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:12:27.269999 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:27.272755 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:12:27.290389 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:27.304630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:12:27.305208 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:12:27.322699 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:12:27.330401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:12:27.337067 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:27.355379 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:12:27.391814 dracut-cmdline[214]: dracut-dracut-053 Aug 5 22:12:27.396509 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:12:27.417504 systemd-resolved[209]: Positive Trust Anchors: Aug 5 22:12:27.417530 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:12:27.417581 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:12:27.441727 systemd-resolved[209]: Defaulting to hostname 'linux'. Aug 5 22:12:27.444870 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:12:27.447578 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:12:27.518145 kernel: SCSI subsystem initialized Aug 5 22:12:27.530135 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:12:27.545238 kernel: iscsi: registered transport (tcp) Aug 5 22:12:27.572139 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:12:27.572218 kernel: QLogic iSCSI HBA Driver Aug 5 22:12:27.636403 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:12:27.643314 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
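The dracut-cmdline hook above re-reads the same kernel command line that appears at the top of the log (root=LABEL=ROOT, verity.usrhash=..., and so on). A minimal sketch of splitting /proc/cmdline into key/value parameters for lookup, shown here purely as an illustration and not taken from the log:

```python
# Illustrative only: parse a kernel command line like the one logged by
# dracut-cmdline into a dict. Duplicate keys (e.g. rootflags=rw appearing
# twice) keep the last value; bare flags without '=' map to None.
from pathlib import Path

def parse_cmdline(text=None):
    if text is None:
        text = Path("/proc/cmdline").read_text()
    params = {}
    for token in text.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else None
    return params

if __name__ == "__main__":
    params = parse_cmdline()
    print("root:", params.get("root"))
    print("verity.usrhash:", params.get("verity.usrhash"))
```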
Aug 5 22:12:27.703272 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:12:27.703351 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:12:27.703373 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:12:27.752164 kernel: raid6: avx512x4 gen() 17260 MB/s Aug 5 22:12:27.769157 kernel: raid6: avx512x2 gen() 17430 MB/s Aug 5 22:12:27.786158 kernel: raid6: avx512x1 gen() 17250 MB/s Aug 5 22:12:27.803154 kernel: raid6: avx2x4 gen() 15628 MB/s Aug 5 22:12:27.820164 kernel: raid6: avx2x2 gen() 14859 MB/s Aug 5 22:12:27.837158 kernel: raid6: avx2x1 gen() 12457 MB/s Aug 5 22:12:27.837253 kernel: raid6: using algorithm avx512x2 gen() 17430 MB/s Aug 5 22:12:27.854132 kernel: raid6: .... xor() 23256 MB/s, rmw enabled Aug 5 22:12:27.854246 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:12:27.885146 kernel: xor: automatically using best checksumming function avx Aug 5 22:12:28.113177 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:12:28.127427 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:12:28.135382 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:12:28.165479 systemd-udevd[397]: Using default interface naming scheme 'v255'. Aug 5 22:12:28.171276 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:12:28.181424 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:12:28.211639 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Aug 5 22:12:28.255662 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:12:28.268513 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:12:28.341805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:12:28.351346 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:12:28.400339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:12:28.409098 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:12:28.413937 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:12:28.417399 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:12:28.427312 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:12:28.461692 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:12:28.514129 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:12:28.519363 kernel: ena 0000:00:05.0: ENA device version: 0.10 Aug 5 22:12:28.542242 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Aug 5 22:12:28.542655 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Aug 5 22:12:28.542828 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:a1:63:d6:e1:6f Aug 5 22:12:28.538636 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:12:28.538793 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:28.540855 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:28.543288 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 22:12:28.543838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:28.545534 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:28.551550 (udev-worker)[447]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:12:28.574705 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:12:28.555903 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:28.590150 kernel: AES CTR mode by8 optimization enabled Aug 5 22:12:28.620414 kernel: nvme nvme0: pci function 0000:00:04.0 Aug 5 22:12:28.620689 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Aug 5 22:12:28.638135 kernel: nvme nvme0: 2/0/0 default/read/poll queues Aug 5 22:12:28.641143 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 5 22:12:28.641202 kernel: GPT:9289727 != 16777215 Aug 5 22:12:28.641221 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 5 22:12:28.641248 kernel: GPT:9289727 != 16777215 Aug 5 22:12:28.641263 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 5 22:12:28.641280 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:28.733480 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:28.744422 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:12:28.752269 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (457) Aug 5 22:12:28.796138 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/nvme0n1p3 scanned by (udev-worker) (460) Aug 5 22:12:28.809170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:28.854405 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Aug 5 22:12:28.868070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 22:12:28.890552 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Aug 5 22:12:28.898119 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Aug 5 22:12:28.899632 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Aug 5 22:12:28.907357 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:12:28.924872 disk-uuid[628]: Primary Header is updated. Aug 5 22:12:28.924872 disk-uuid[628]: Secondary Entries is updated. Aug 5 22:12:28.924872 disk-uuid[628]: Secondary Header is updated. Aug 5 22:12:28.929164 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:28.935211 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:28.941187 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:29.943045 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Aug 5 22:12:29.943135 disk-uuid[629]: The operation has completed successfully. Aug 5 22:12:30.145521 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:12:30.145643 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:12:30.227357 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Aug 5 22:12:30.246257 sh[970]: Success Aug 5 22:12:30.275281 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:12:30.376422 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:12:30.384520 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:12:30.391461 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:12:30.422245 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12 Aug 5 22:12:30.422310 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:30.422330 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:12:30.423508 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:12:30.423531 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:12:30.511155 kernel: BTRFS info (device dm-0): enabling ssd optimizations Aug 5 22:12:30.522246 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:12:30.523053 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:12:30.532369 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:12:30.544408 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:12:30.585730 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:30.585808 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:30.585830 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:30.590137 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:30.609133 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:12:30.618440 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:30.633020 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:12:30.640426 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:12:30.741896 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:12:30.760504 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:12:30.833584 systemd-networkd[1163]: lo: Link UP Aug 5 22:12:30.833595 systemd-networkd[1163]: lo: Gained carrier Aug 5 22:12:30.836436 systemd-networkd[1163]: Enumeration completed Aug 5 22:12:30.836805 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:12:30.838522 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:12:30.838527 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:12:30.838592 systemd[1]: Reached target network.target - Network. Aug 5 22:12:30.849649 systemd-networkd[1163]: eth0: Link UP Aug 5 22:12:30.849657 systemd-networkd[1163]: eth0: Gained carrier Aug 5 22:12:30.849671 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 22:12:30.871248 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.23.76/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 22:12:31.120212 ignition[1092]: Ignition 2.18.0 Aug 5 22:12:31.120226 ignition[1092]: Stage: fetch-offline Aug 5 22:12:31.120600 ignition[1092]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:31.120613 ignition[1092]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:31.128593 ignition[1092]: Ignition finished successfully Aug 5 22:12:31.130321 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:12:31.136561 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 22:12:31.165473 ignition[1172]: Ignition 2.18.0 Aug 5 22:12:31.165500 ignition[1172]: Stage: fetch Aug 5 22:12:31.166592 ignition[1172]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:31.166607 ignition[1172]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:31.168121 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:31.176426 ignition[1172]: PUT result: OK Aug 5 22:12:31.185062 ignition[1172]: parsed url from cmdline: "" Aug 5 22:12:31.185076 ignition[1172]: no config URL provided Aug 5 22:12:31.185087 ignition[1172]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:12:31.185122 ignition[1172]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:12:31.185149 ignition[1172]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:31.187795 ignition[1172]: PUT result: OK Aug 5 22:12:31.189048 ignition[1172]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Aug 5 22:12:31.191809 ignition[1172]: GET result: OK Aug 5 22:12:31.191991 ignition[1172]: parsing config with SHA512: 81fcffd65f02ea6a580ab37218a5c814e4ea79086f472df5752d5173ef75beaaca76b5c5568a5d5c560efd1395594e3f577507513ab96cbcd9f558cab78dd2e1 Aug 5 22:12:31.201031 unknown[1172]: fetched base config from "system" Aug 5 22:12:31.202417 ignition[1172]: fetch: fetch complete Aug 5 22:12:31.201056 unknown[1172]: fetched base config from "system" Aug 5 22:12:31.202431 ignition[1172]: fetch: fetch passed Aug 5 22:12:31.201069 unknown[1172]: fetched user config from "aws" Aug 5 22:12:31.202526 ignition[1172]: Ignition finished successfully Aug 5 22:12:31.205782 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:12:31.219777 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:12:31.245235 ignition[1179]: Ignition 2.18.0 Aug 5 22:12:31.245249 ignition[1179]: Stage: kargs Aug 5 22:12:31.245956 ignition[1179]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:31.245970 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:31.246120 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:31.248797 ignition[1179]: PUT result: OK Aug 5 22:12:31.259508 ignition[1179]: kargs: kargs passed Aug 5 22:12:31.259639 ignition[1179]: Ignition finished successfully Aug 5 22:12:31.264277 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:12:31.271293 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
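The Ignition fetch stage above records an IMDSv2-style exchange: a PUT to obtain a session token, a GET for the instance user-data, and the SHA-512 of the parsed config. The sketch below reproduces that exchange using the URLs from the log; it only runs from inside an EC2 instance, and the token headers are the standard IMDSv2 ones, which the log itself does not print:

```python
# Illustrative only: fetch EC2 user-data via IMDSv2 and print its SHA-512,
# which can be compared against the digest Ignition logs while parsing.
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data(ttl_seconds=60):
    # PUT /latest/api/token -> session token (standard IMDSv2 header below)
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    with urllib.request.urlopen(token_req) as resp:
        token = resp.read().decode()

    # GET the user-data path that appears in the Ignition log above
    data_req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(data_req) as resp:
        return resp.read()

if __name__ == "__main__":
    user_data = fetch_user_data()
    print("SHA512:", hashlib.sha512(user_data).hexdigest())
```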
Aug 5 22:12:31.308295 ignition[1186]: Ignition 2.18.0 Aug 5 22:12:31.308310 ignition[1186]: Stage: disks Aug 5 22:12:31.308861 ignition[1186]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:31.308874 ignition[1186]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:31.309174 ignition[1186]: PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:31.312376 ignition[1186]: PUT result: OK Aug 5 22:12:31.317891 ignition[1186]: disks: disks passed Aug 5 22:12:31.317994 ignition[1186]: Ignition finished successfully Aug 5 22:12:31.320985 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:12:31.322008 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:12:31.325882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:12:31.327382 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:12:31.329844 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:12:31.331192 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:12:31.341214 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:12:31.382553 systemd-fsck[1195]: ROOT: clean, 14/553520 files, 52654/553472 blocks Aug 5 22:12:31.390089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:12:31.396220 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:12:31.545134 kernel: EXT4-fs (nvme0n1p9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none. Aug 5 22:12:31.545552 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:12:31.546513 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:12:31.570656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:12:31.577455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:12:31.581222 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 5 22:12:31.581353 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:12:31.581398 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:12:31.593123 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1214) Aug 5 22:12:31.595260 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:31.595316 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:31.597144 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:31.601678 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:31.604344 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:12:31.604897 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:12:31.618471 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 5 22:12:32.031372 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:12:32.038085 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:12:32.046206 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:12:32.053226 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:12:32.094001 systemd-networkd[1163]: eth0: Gained IPv6LL Aug 5 22:12:32.338935 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:12:32.358271 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:12:32.385705 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:12:32.403223 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:12:32.404468 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:32.440173 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:12:32.465622 ignition[1327]: INFO : Ignition 2.18.0 Aug 5 22:12:32.465622 ignition[1327]: INFO : Stage: mount Aug 5 22:12:32.477139 ignition[1327]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:32.477139 ignition[1327]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:32.477139 ignition[1327]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:32.482741 ignition[1327]: INFO : PUT result: OK Aug 5 22:12:32.489092 ignition[1327]: INFO : mount: mount passed Aug 5 22:12:32.490469 ignition[1327]: INFO : Ignition finished successfully Aug 5 22:12:32.493631 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:12:32.500216 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:12:32.561256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:12:32.594126 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1339) Aug 5 22:12:32.598413 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:12:32.598580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:12:32.598604 kernel: BTRFS info (device nvme0n1p6): using free space tree Aug 5 22:12:32.608164 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Aug 5 22:12:32.611002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
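The Ignition files stage recorded below (helm tarball, kubernetes sysext link, SSH keys for the core user, the prepare-helm.service unit and its preset) is driven by the user-data config fetched earlier. A rough, hypothetical sketch of a spec-3.x style config fragment that would request a few of those operations; the SSH key and unit body are placeholders, not the real config from this boot:

```python
# Hypothetical example only: build and print an Ignition (spec 3.x) config
# fragment resembling what the files stage below acts on. Field names follow
# the Ignition config spec; values marked "placeholder" are invented.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "mode": 420,  # 0644
                "contents": {
                    "source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"
                },
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                # Placeholder unit body, not the unit written during this boot.
                "contents": "[Unit]\nDescription=placeholder\n[Service]\nType=oneshot\nExecStart=/usr/bin/true\n[Install]\nWantedBy=multi-user.target\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))
```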
Aug 5 22:12:32.649069 ignition[1356]: INFO : Ignition 2.18.0 Aug 5 22:12:32.650821 ignition[1356]: INFO : Stage: files Aug 5 22:12:32.652224 ignition[1356]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:32.652224 ignition[1356]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:32.657669 ignition[1356]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:32.673958 ignition[1356]: INFO : PUT result: OK Aug 5 22:12:32.679043 ignition[1356]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:12:32.681506 ignition[1356]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:12:32.681506 ignition[1356]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:12:32.724615 ignition[1356]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:12:32.726546 ignition[1356]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:12:32.728423 unknown[1356]: wrote ssh authorized keys file for user: core Aug 5 22:12:32.729710 ignition[1356]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:12:32.732686 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:12:32.735769 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:12:32.801640 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:12:32.965986 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:12:32.965986 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:12:32.970682 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:12:32.970682 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:12:32.975909 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:12:32.991255 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:12:32.991255 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
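The files-stage operations above (creating user "core", installing SSH keys, fetching the Helm archive, writing update.conf and the sample manifests) are driven by an Ignition config supplied to the instance. The instance's actual config is not part of this log; the fragment below is a hypothetical sketch of what such a config roughly looks like, built as a Python dict and printed as JSON, with key names following the Ignition spec-3.x layout and placeholder contents.

```python
# Hypothetical Ignition-style config fragment corresponding to the
# files-stage operations logged above. Contents and the SSH key are
# placeholders, not the instance's real configuration.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [
            {"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}
        ]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            },
            {
                "path": "/etc/flatcar/update.conf",
                "contents": {"source": "data:,GROUP%3Dstable%0A"},  # placeholder
            },
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            }
        ],
    },
    "systemd": {"units": [{"name": "prepare-helm.service", "enabled": True}]},
}

print(json.dumps(config, indent=2))
```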
"/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Aug 5 22:12:32.991255 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Aug 5 22:12:32.991255 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Aug 5 22:12:32.991255 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Aug 5 22:12:33.501137 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:12:34.469218 ignition[1356]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Aug 5 22:12:34.469218 ignition[1356]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:12:34.475950 ignition[1356]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:12:34.478346 ignition[1356]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:12:34.478346 ignition[1356]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:12:34.478346 ignition[1356]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:12:34.483516 ignition[1356]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:12:34.483516 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:12:34.483516 ignition[1356]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:12:34.483516 ignition[1356]: INFO : files: files passed Aug 5 22:12:34.483516 ignition[1356]: INFO : Ignition finished successfully Aug 5 22:12:34.493293 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:12:34.500343 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:12:34.504685 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:12:34.506895 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:12:34.507132 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:12:34.555052 initrd-setup-root-after-ignition[1385]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:34.555052 initrd-setup-root-after-ignition[1385]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:34.559184 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:12:34.563930 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:12:34.568172 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:12:34.580335 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:12:34.620962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Aug 5 22:12:34.621096 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:12:34.623558 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:12:34.625609 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:12:34.627620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:12:34.638316 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:12:34.652150 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:12:34.659319 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:12:34.673689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:12:34.676325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:12:34.679926 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:12:34.682008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:12:34.682157 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:12:34.685943 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:12:34.688351 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:12:34.689679 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:12:34.693461 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:12:34.693765 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:12:34.698401 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:12:34.700439 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:12:34.704300 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:12:34.707345 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:12:34.711855 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:12:34.715039 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:12:34.715367 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:12:34.728616 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:12:34.732412 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:12:34.734710 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:12:34.735667 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:12:34.737093 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:12:34.737273 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:12:34.740826 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:12:34.741016 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:12:34.746440 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:12:34.746622 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:12:34.768546 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:12:34.772958 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:12:34.775355 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 5 22:12:34.777218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:12:34.780625 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:12:34.781987 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:12:34.793239 ignition[1409]: INFO : Ignition 2.18.0 Aug 5 22:12:34.794388 ignition[1409]: INFO : Stage: umount Aug 5 22:12:34.795457 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:12:34.795591 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:12:34.801481 ignition[1409]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:12:34.802751 ignition[1409]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Aug 5 22:12:34.804452 ignition[1409]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Aug 5 22:12:34.807055 ignition[1409]: INFO : PUT result: OK Aug 5 22:12:34.812129 ignition[1409]: INFO : umount: umount passed Aug 5 22:12:34.813165 ignition[1409]: INFO : Ignition finished successfully Aug 5 22:12:34.813697 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:12:34.813873 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:12:34.815876 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:12:34.815985 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:12:34.817275 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:12:34.817351 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:12:34.819298 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:12:34.819341 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:12:34.827601 systemd[1]: Stopped target network.target - Network. Aug 5 22:12:34.828716 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:12:34.828789 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:12:34.831634 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:12:34.833790 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:12:34.837209 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:12:34.838502 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:12:34.841157 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:12:34.843291 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:12:34.843354 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:12:34.847347 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:12:34.847410 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:12:34.849965 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:12:34.850046 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:12:34.852774 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:12:34.852846 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:12:34.863145 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:12:34.872475 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:12:34.879779 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:12:34.881400 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Aug 5 22:12:34.882322 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:12:34.884890 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:12:34.885120 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:12:34.886212 systemd-networkd[1163]: eth0: DHCPv6 lease lost Aug 5 22:12:34.890194 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:12:34.890301 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:12:34.894368 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:12:34.894494 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:12:34.904479 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:12:34.904530 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:12:34.913291 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:12:34.915296 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:12:34.916417 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:12:34.918367 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:12:34.918443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:12:34.919546 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:12:34.919607 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:12:34.921994 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:12:34.922057 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:12:34.926416 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:12:34.943943 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:12:34.944158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:12:34.946824 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:12:34.946938 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:12:34.950362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:12:34.950433 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:12:34.953180 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:12:34.953260 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:12:34.962227 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:12:34.962475 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:12:34.963166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:12:34.963225 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:12:34.963742 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:12:34.963787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:12:34.978446 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:12:34.995197 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:12:34.995318 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:12:35.007146 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Aug 5 22:12:35.007619 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:12:35.010832 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:12:35.011016 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:12:35.012936 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:12:35.012993 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:12:35.016341 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:12:35.016432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:12:35.019631 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:12:35.048324 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:12:35.083254 systemd[1]: Switching root. Aug 5 22:12:35.130528 systemd-journald[178]: Journal stopped Aug 5 22:12:37.675491 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Aug 5 22:12:37.675589 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:12:37.675614 kernel: SELinux: policy capability open_perms=1 Aug 5 22:12:37.675634 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:12:37.675654 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:12:37.675771 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:12:37.675797 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:12:37.675815 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:12:37.675833 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:12:37.675856 kernel: audit: type=1403 audit(1722895956.169:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:12:37.675882 systemd[1]: Successfully loaded SELinux policy in 48.865ms. Aug 5 22:12:37.675909 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.799ms. Aug 5 22:12:37.675933 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:12:37.675956 systemd[1]: Detected virtualization amazon. Aug 5 22:12:37.675979 systemd[1]: Detected architecture x86-64. Aug 5 22:12:37.679527 systemd[1]: Detected first boot. Aug 5 22:12:37.679581 systemd[1]: Initializing machine ID from VM UUID. Aug 5 22:12:37.679604 zram_generator::config[1451]: No configuration found. Aug 5 22:12:37.679632 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:12:37.679659 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:12:37.679679 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:12:37.679699 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:12:37.679718 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:12:37.679737 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:12:37.679756 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:12:37.679776 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Aug 5 22:12:37.679799 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:12:37.679822 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:12:37.679842 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:12:37.679861 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:12:37.679881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:12:37.679901 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:12:37.679919 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:12:37.679940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:12:37.679961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:12:37.679983 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:12:37.680003 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 22:12:37.680022 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:12:37.680041 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:12:37.680058 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:12:37.680076 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:12:37.680095 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:12:37.680160 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:12:37.680179 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:12:37.680197 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:12:37.680215 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:12:37.680236 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:12:37.680255 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:12:37.680280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:12:37.680300 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:12:37.680319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:12:37.680339 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:12:37.680363 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:12:37.680383 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:12:37.680403 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:12:37.680426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:37.680448 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:12:37.680469 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:12:37.680492 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Aug 5 22:12:37.680516 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:12:37.680542 systemd[1]: Reached target machines.target - Containers. Aug 5 22:12:37.680561 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:12:37.680583 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:12:37.680607 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:12:37.680627 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:12:37.680648 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:12:37.680669 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:12:37.680691 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:12:37.680712 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:12:37.680734 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:12:37.680755 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:12:37.680772 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:12:37.680789 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:12:37.680809 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:12:37.680828 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 22:12:37.680848 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:12:37.680866 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:12:37.680886 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:12:37.680908 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:12:37.680927 kernel: fuse: init (API version 7.39) Aug 5 22:12:37.680948 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:12:37.680966 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:12:37.680984 systemd[1]: Stopped verity-setup.service. Aug 5 22:12:37.682328 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:37.682365 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:12:37.682388 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:12:37.682417 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:12:37.682442 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:12:37.682462 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:12:37.682481 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:12:37.682499 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:12:37.682523 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:12:37.682541 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
Aug 5 22:12:37.682559 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:12:37.682587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:12:37.682611 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:12:37.682633 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:12:37.682655 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:12:37.682674 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:12:37.682696 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:12:37.682715 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:12:37.682734 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:12:37.682753 kernel: loop: module loaded Aug 5 22:12:37.682773 kernel: ACPI: bus type drm_connector registered Aug 5 22:12:37.682791 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:12:37.682814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:12:37.682833 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:12:37.682852 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:12:37.682871 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:12:37.682888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:12:37.682908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:12:37.682927 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:12:37.682946 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:12:37.682967 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:12:37.682990 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:12:37.683009 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:12:37.683029 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:12:37.683048 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:12:37.685329 systemd-journald[1525]: Collecting audit messages is disabled. Aug 5 22:12:37.690726 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:12:37.690773 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:12:37.690794 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:12:37.690814 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:12:37.690836 systemd-journald[1525]: Journal started Aug 5 22:12:37.691294 systemd-journald[1525]: Runtime Journal (/run/log/journal/ec2b78db73f440e95c83aa827f1326b8) is 4.8M, max 38.6M, 33.7M free. Aug 5 22:12:37.098594 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:12:37.145137 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. 
Aug 5 22:12:37.145613 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 22:12:37.720426 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:12:37.734667 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:12:37.744381 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:12:37.776168 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:12:37.779132 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:12:37.783074 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:12:37.785405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:12:37.800193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:12:37.841157 kernel: loop0: detected capacity change from 0 to 80568 Aug 5 22:12:37.828550 systemd-tmpfiles[1546]: ACLs are not supported, ignoring. Aug 5 22:12:37.875374 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:12:37.828580 systemd-tmpfiles[1546]: ACLs are not supported, ignoring. Aug 5 22:12:37.835651 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:12:37.874625 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:12:37.888548 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:12:37.890914 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:12:37.894605 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:12:37.896848 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:12:37.916517 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:12:37.920770 systemd-journald[1525]: Time spent on flushing to /var/log/journal/ec2b78db73f440e95c83aa827f1326b8 is 104.451ms for 971 entries. Aug 5 22:12:37.920770 systemd-journald[1525]: System Journal (/var/log/journal/ec2b78db73f440e95c83aa827f1326b8) is 8.0M, max 195.6M, 187.6M free. Aug 5 22:12:38.076313 systemd-journald[1525]: Received client request to flush runtime journal. Aug 5 22:12:38.076374 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:12:38.076399 kernel: loop1: detected capacity change from 0 to 211296 Aug 5 22:12:37.928212 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:12:37.991352 udevadm[1590]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 22:12:38.065445 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:12:38.071226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:12:38.086631 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:12:38.089609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:12:38.090503 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:12:38.151494 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. Aug 5 22:12:38.151525 systemd-tmpfiles[1595]: ACLs are not supported, ignoring. 
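Once systemd-journal-flush.service has copied the runtime journal into /var/log/journal (the "Received client request to flush runtime journal" entry above), entries like the ones quoted throughout this log can be read back as structured records. A small sketch using journalctl's JSON output; filtering on ignition-files.service is only an example of a match expression.

```python
# Sketch: read journal entries for one unit from the current boot as JSON
# records, one object per line of `journalctl -o json` output.
import json
import subprocess

def journal_entries(unit: str = "ignition-files.service"):
    proc = subprocess.run(
        ["journalctl", "-b", "-u", unit, "-o", "json", "--no-pager"],
        capture_output=True, text=True, check=True,
    )
    for line in proc.stdout.splitlines():
        yield json.loads(line)

if __name__ == "__main__":
    for entry in journal_entries():
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))
```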
Aug 5 22:12:38.164136 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:12:38.181281 kernel: loop2: detected capacity change from 0 to 139904 Aug 5 22:12:38.316126 kernel: loop3: detected capacity change from 0 to 60984 Aug 5 22:12:38.417135 kernel: loop4: detected capacity change from 0 to 80568 Aug 5 22:12:38.445131 kernel: loop5: detected capacity change from 0 to 211296 Aug 5 22:12:38.482181 kernel: loop6: detected capacity change from 0 to 139904 Aug 5 22:12:38.518169 kernel: loop7: detected capacity change from 0 to 60984 Aug 5 22:12:38.542410 (sd-merge)[1604]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Aug 5 22:12:38.543097 (sd-merge)[1604]: Merged extensions into '/usr'. Aug 5 22:12:38.556475 systemd[1]: Reloading requested from client PID 1560 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:12:38.556655 systemd[1]: Reloading... Aug 5 22:12:38.689437 zram_generator::config[1626]: No configuration found. Aug 5 22:12:39.012570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:12:39.124990 systemd[1]: Reloading finished in 565 ms. Aug 5 22:12:39.177018 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:12:39.193389 systemd[1]: Starting ensure-sysext.service... Aug 5 22:12:39.206367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:12:39.229164 systemd[1]: Reloading requested from client PID 1677 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:12:39.229326 systemd[1]: Reloading... Aug 5 22:12:39.259283 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:12:39.259955 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:12:39.262973 systemd-tmpfiles[1678]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:12:39.268780 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Aug 5 22:12:39.268870 systemd-tmpfiles[1678]: ACLs are not supported, ignoring. Aug 5 22:12:39.274981 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:12:39.274997 systemd-tmpfiles[1678]: Skipping /boot Aug 5 22:12:39.297261 systemd-tmpfiles[1678]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:12:39.297276 systemd-tmpfiles[1678]: Skipping /boot Aug 5 22:12:39.420686 zram_generator::config[1704]: No configuration found. Aug 5 22:12:39.577386 ldconfig[1554]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:12:39.636843 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:12:39.723483 systemd[1]: Reloading finished in 493 ms. Aug 5 22:12:39.746165 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:12:39.748046 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:12:39.758166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. 
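The `(sd-merge)` lines above show systemd-sysext overlaying the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami') onto /usr; the loop-device capacity changes preceding them are those images being attached. A small sketch that enumerates images the way the merge step discovers them, following symlinks such as the /etc/extensions/kubernetes.raw link written by Ignition earlier; the directory list covers common sysext search locations and is not an exact copy of systemd-sysext's internal logic.

```python
# Sketch: list sysext images (raw files or directory trees) under common
# extension search directories and resolve any symlinks to their targets.
from pathlib import Path

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    for directory in map(Path, SEARCH_DIRS):
        if not directory.is_dir():
            continue
        for entry in sorted(directory.iterdir()):
            if entry.suffix == ".raw" or entry.is_dir():
                yield entry.name, entry.resolve()

if __name__ == "__main__":
    for name, target in list_extension_images():
        print(f"{name} -> {target}")
```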
Aug 5 22:12:39.773552 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:39.783523 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:12:39.790376 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:12:39.806402 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:12:39.818415 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:12:39.823285 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:12:39.840525 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:12:39.845194 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:39.845489 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:12:39.863693 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:12:39.886262 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:12:39.913020 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:12:39.915214 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:12:39.915424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:39.928829 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:39.929695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:12:39.930142 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:12:39.930365 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:39.942045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:12:39.945012 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:12:39.945171 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:12:39.958685 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:12:39.959605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:12:39.972440 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:39.972918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:12:39.987092 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:12:40.002445 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:12:40.004845 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:12:40.005168 systemd[1]: Reached target time-set.target - System Time Set. 
Aug 5 22:12:40.006842 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:12:40.016130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:12:40.016343 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:12:40.045707 systemd[1]: Finished ensure-sysext.service. Aug 5 22:12:40.049880 systemd-udevd[1764]: Using default interface naming scheme 'v255'. Aug 5 22:12:40.052055 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:12:40.056370 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:12:40.056668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:12:40.059419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:12:40.066986 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:12:40.067346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:12:40.069814 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:12:40.077313 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:12:40.084733 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:12:40.096527 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:12:40.098577 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:12:40.125867 augenrules[1797]: No rules Aug 5 22:12:40.125734 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:40.134952 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:12:40.136771 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:12:40.152870 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:12:40.227332 systemd-resolved[1761]: Positive Trust Anchors: Aug 5 22:12:40.228175 systemd-resolved[1761]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:12:40.228543 systemd-resolved[1761]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:12:40.241083 systemd-resolved[1761]: Defaulting to hostname 'linux'. Aug 5 22:12:40.261565 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:12:40.264729 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 5 22:12:40.311177 systemd-networkd[1809]: lo: Link UP Aug 5 22:12:40.311616 systemd-networkd[1809]: lo: Gained carrier Aug 5 22:12:40.313939 systemd-networkd[1809]: Enumeration completed Aug 5 22:12:40.316707 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 22:12:40.317343 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:12:40.318830 systemd[1]: Reached target network.target - Network. Aug 5 22:12:40.325379 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:12:40.338145 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1815) Aug 5 22:12:40.348121 (udev-worker)[1812]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:12:40.413889 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:12:40.413905 systemd-networkd[1809]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:12:40.420204 systemd-networkd[1809]: eth0: Link UP Aug 5 22:12:40.420442 systemd-networkd[1809]: eth0: Gained carrier Aug 5 22:12:40.420470 systemd-networkd[1809]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:12:40.431191 systemd-networkd[1809]: eth0: DHCPv4 address 172.31.23.76/20, gateway 172.31.16.1 acquired from 172.31.16.1 Aug 5 22:12:40.458508 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Aug 5 22:12:40.470153 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Aug 5 22:12:40.478850 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1814) Aug 5 22:12:40.478979 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 5 22:12:40.488155 kernel: ACPI: button: Power Button [PWRF] Aug 5 22:12:40.492135 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Aug 5 22:12:40.514154 kernel: ACPI: button: Sleep Button [SLPF] Aug 5 22:12:40.607820 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:12:40.611149 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:12:40.697801 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Aug 5 22:12:40.698322 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:12:40.704438 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:12:40.708496 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:12:40.741710 lvm[1920]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:12:40.744290 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:12:40.778964 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:12:40.782622 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:12:40.938363 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:12:40.940170 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
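A quick check of the DHCPv4 lease reported above: eth0 received 172.31.23.76/20 with gateway 172.31.16.1, acquired from 172.31.16.1. The standard-library ipaddress module confirms the subnet this implies and that the gateway sits inside it.

```python
# Worked example for the lease in the log: 172.31.23.76/20, gateway 172.31.16.1.
import ipaddress

iface = ipaddress.ip_interface("172.31.23.76/20")
gateway = ipaddress.ip_address("172.31.16.1")

print(iface.network)                 # 172.31.16.0/20
print(iface.network.num_addresses)   # 4096 addresses in a /20
print(gateway in iface.network)      # True
```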
Aug 5 22:12:40.942745 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:12:40.945113 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:12:40.946589 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:12:40.948096 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:12:40.949615 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:12:40.950868 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:12:40.952038 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:12:40.952066 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:12:40.952954 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:12:40.955478 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:12:40.957326 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:12:40.958738 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:12:40.977414 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:12:40.985790 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:12:40.987321 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:12:40.989974 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:12:40.992334 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:12:40.992381 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:12:41.001586 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:12:41.009062 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 22:12:41.012317 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:12:41.015475 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:12:41.021305 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:12:41.022780 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:12:41.032641 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:12:41.078419 systemd[1]: Started ntpd.service - Network Time Service. Aug 5 22:12:41.100316 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:12:41.108331 systemd[1]: Starting setup-oem.service - Setup OEM... Aug 5 22:12:41.117341 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:12:41.125420 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:12:41.139816 jq[1934]: false Aug 5 22:12:41.142571 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:12:41.144795 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:12:41.145483 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Aug 5 22:12:41.158433 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:12:41.162593 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:12:41.168089 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:12:41.176772 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:12:41.177032 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: ---------------------------------------------------- Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: ntp-4 is maintained by Network Time Foundation, Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: corporation. Support and training for ntp-4 are Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: available at https://www.nwtime.org/support Aug 5 22:12:41.217191 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: ---------------------------------------------------- Aug 5 22:12:41.214512 ntpd[1937]: ntpd 4.2.8p17@1.4004-o Mon Aug 5 19:55:28 UTC 2024 (1): Starting Aug 5 22:12:41.214540 ntpd[1937]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Aug 5 22:12:41.214551 ntpd[1937]: ---------------------------------------------------- Aug 5 22:12:41.214561 ntpd[1937]: ntp-4 is maintained by Network Time Foundation, Aug 5 22:12:41.214570 ntpd[1937]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Aug 5 22:12:41.214581 ntpd[1937]: corporation. Support and training for ntp-4 are Aug 5 22:12:41.214593 ntpd[1937]: available at https://www.nwtime.org/support Aug 5 22:12:41.214604 ntpd[1937]: ---------------------------------------------------- Aug 5 22:12:41.223439 ntpd[1937]: proto: precision = 0.079 usec (-24) Aug 5 22:12:41.235562 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: proto: precision = 0.079 usec (-24) Aug 5 22:12:41.235562 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: basedate set to 2024-07-24 Aug 5 22:12:41.235562 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: gps base set to 2024-07-28 (week 2325) Aug 5 22:12:41.235707 extend-filesystems[1935]: Found loop4 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found loop5 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found loop6 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found loop7 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p1 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p2 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p3 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found usr Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p4 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p6 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p7 Aug 5 22:12:41.235707 extend-filesystems[1935]: Found nvme0n1p9 Aug 5 22:12:41.235707 extend-filesystems[1935]: Checking size of /dev/nvme0n1p9 Aug 5 22:12:41.231430 ntpd[1937]: basedate set to 2024-07-24 Aug 5 22:12:41.242505 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Aug 5 22:12:41.231453 ntpd[1937]: gps base set to 2024-07-28 (week 2325) Aug 5 22:12:41.242759 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:12:41.262804 dbus-daemon[1933]: [system] SELinux support is enabled Aug 5 22:12:41.264475 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:12:41.279908 update_engine[1949]: I0805 22:12:41.278491 1949 main.cc:92] Flatcar Update Engine starting Aug 5 22:12:41.280616 dbus-daemon[1933]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1809 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 5 22:12:41.280736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:12:41.280782 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:12:41.283723 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 5 22:12:41.284202 jq[1954]: true Aug 5 22:12:41.285854 update_engine[1949]: I0805 22:12:41.285801 1949 update_check_scheduler.cc:74] Next update check in 7m18s Aug 5 22:12:41.287411 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listen normally on 3 eth0 172.31.23.76:123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listen normally on 4 lo [::1]:123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: bind(21) AF_INET6 fe80::4a1:63ff:fed6:e16f%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: unable to create socket on eth0 (5) for fe80::4a1:63ff:fed6:e16f%2#123 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: failed to init interface for address fe80::4a1:63ff:fed6:e16f%2 Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: Listening on routing socket on fd #21 for interface updates Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:12:41.291589 ntpd[1937]: 5 Aug 22:12:41 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:12:41.287419 ntpd[1937]: Listen and drop on 0 v6wildcard [::]:123 Aug 5 22:12:41.287451 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Aug 5 22:12:41.287471 ntpd[1937]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Aug 5 22:12:41.287634 ntpd[1937]: Listen normally on 2 lo 127.0.0.1:123 Aug 5 22:12:41.287663 ntpd[1937]: Listen normally on 3 eth0 172.31.23.76:123 Aug 5 22:12:41.287751 ntpd[1937]: Listen normally on 4 lo [::1]:123 Aug 5 22:12:41.287795 ntpd[1937]: bind(21) AF_INET6 fe80::4a1:63ff:fed6:e16f%2#123 flags 0x11 failed: Cannot assign requested address Aug 5 22:12:41.287812 ntpd[1937]: unable to create socket on eth0 (5) for fe80::4a1:63ff:fed6:e16f%2#123 Aug 5 22:12:41.287825 ntpd[1937]: failed to init interface for address fe80::4a1:63ff:fed6:e16f%2 Aug 5 22:12:41.287851 ntpd[1937]: Listening on routing socket on fd #21 for interface updates Aug 5 22:12:41.290944 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:12:41.290981 ntpd[1937]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Aug 5 22:12:41.300829 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:12:41.301307 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:12:41.330567 (ntainerd)[1968]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:12:41.365140 jq[1971]: true Aug 5 22:12:41.365336 tar[1956]: linux-amd64/helm Aug 5 22:12:41.354605 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:12:41.367367 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 5 22:12:41.377142 extend-filesystems[1935]: Resized partition /dev/nvme0n1p9 Aug 5 22:12:41.389238 extend-filesystems[1985]: resize2fs 1.47.0 (5-Feb-2023) Aug 5 22:12:41.407200 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Aug 5 22:12:41.393370 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:12:41.538139 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Aug 5 22:12:41.592179 extend-filesystems[1985]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Aug 5 22:12:41.592179 extend-filesystems[1985]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 5 22:12:41.592179 extend-filesystems[1985]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Aug 5 22:12:41.600866 extend-filesystems[1935]: Resized filesystem in /dev/nvme0n1p9 Aug 5 22:12:41.615507 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:12:41.615731 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:12:41.631533 coreos-metadata[1932]: Aug 05 22:12:41.627 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 22:12:41.636427 systemd[1]: Finished setup-oem.service - Setup OEM. Aug 5 22:12:41.642151 coreos-metadata[1932]: Aug 05 22:12:41.640 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Aug 5 22:12:41.642264 bash[2009]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:12:41.646131 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (1816) Aug 5 22:12:41.652767 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
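The extend-filesystems messages above record an on-line ext4 grow of /dev/nvme0n1p9 from 553472 to 1489915 4k blocks while it stays mounted on /. A minimal sketch of the equivalent manual steps, shown only as an illustration of what the service automates, not as the literal commands it ran:

    # Annotation (not log output): on-line grow of the root ext4 filesystem.
    lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/nvme0n1   # confirm the partition layout
    resize2fs /dev/nvme0n1p9                            # grows the filesystem while mounted
    df -h /                                             # verify the new size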
Aug 5 22:12:41.658352 coreos-metadata[1932]: Aug 05 22:12:41.652 INFO Fetch successful Aug 5 22:12:41.658352 coreos-metadata[1932]: Aug 05 22:12:41.658 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Aug 5 22:12:41.663627 systemd-logind[1945]: Watching system buttons on /dev/input/event2 (Power Button) Aug 5 22:12:41.664192 systemd-logind[1945]: Watching system buttons on /dev/input/event3 (Sleep Button) Aug 5 22:12:41.667236 coreos-metadata[1932]: Aug 05 22:12:41.664 INFO Fetch successful Aug 5 22:12:41.667236 coreos-metadata[1932]: Aug 05 22:12:41.664 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Aug 5 22:12:41.664220 systemd-logind[1945]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:12:41.665003 systemd-logind[1945]: New seat seat0. Aug 5 22:12:41.674277 systemd[1]: Starting sshkeys.service... Aug 5 22:12:41.674788 coreos-metadata[1932]: Aug 05 22:12:41.674 INFO Fetch successful Aug 5 22:12:41.674975 coreos-metadata[1932]: Aug 05 22:12:41.674 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Aug 5 22:12:41.676759 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 22:12:41.678018 coreos-metadata[1932]: Aug 05 22:12:41.677 INFO Fetch successful Aug 5 22:12:41.678018 coreos-metadata[1932]: Aug 05 22:12:41.677 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Aug 5 22:12:41.688060 coreos-metadata[1932]: Aug 05 22:12:41.688 INFO Fetch failed with 404: resource not found Aug 5 22:12:41.688060 coreos-metadata[1932]: Aug 05 22:12:41.688 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Aug 5 22:12:41.691383 coreos-metadata[1932]: Aug 05 22:12:41.691 INFO Fetch successful Aug 5 22:12:41.691383 coreos-metadata[1932]: Aug 05 22:12:41.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Aug 5 22:12:41.695000 coreos-metadata[1932]: Aug 05 22:12:41.694 INFO Fetch successful Aug 5 22:12:41.695000 coreos-metadata[1932]: Aug 05 22:12:41.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Aug 5 22:12:41.695751 coreos-metadata[1932]: Aug 05 22:12:41.695 INFO Fetch successful Aug 5 22:12:41.695990 coreos-metadata[1932]: Aug 05 22:12:41.695 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Aug 5 22:12:41.699410 coreos-metadata[1932]: Aug 05 22:12:41.699 INFO Fetch successful Aug 5 22:12:41.699410 coreos-metadata[1932]: Aug 05 22:12:41.699 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Aug 5 22:12:41.701387 coreos-metadata[1932]: Aug 05 22:12:41.701 INFO Fetch successful Aug 5 22:12:41.766961 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 5 22:12:41.783565 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 5 22:12:41.878405 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 5 22:12:41.880862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:12:41.884567 systemd-networkd[1809]: eth0: Gained IPv6LL Aug 5 22:12:41.954662 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
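coreos-metadata walks the EC2 instance metadata service with a session token first (the PUT to /latest/api/token above), i.e. the IMDSv2 flow, before fetching the individual meta-data paths. A rough curl equivalent for illustration only; the log itself uses the 2021-01-03 API paths rather than /latest/:

    # Annotation (not log output): IMDSv2 token + metadata fetch, as coreos-metadata does.
    TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
      -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/instance-id
    curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/latest/meta-data/placement/availability-zone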
Aug 5 22:12:41.960244 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:12:41.971400 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Aug 5 22:12:41.988473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:41.997955 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:12:42.107219 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 5 22:12:42.107286 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:12:42.112187 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 5 22:12:42.120086 dbus-daemon[1933]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1983 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 5 22:12:42.146546 systemd[1]: Starting polkit.service - Authorization Manager... Aug 5 22:12:42.174135 coreos-metadata[2035]: Aug 05 22:12:42.170 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Aug 5 22:12:42.182158 coreos-metadata[2035]: Aug 05 22:12:42.177 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Aug 5 22:12:42.182158 coreos-metadata[2035]: Aug 05 22:12:42.182 INFO Fetch successful Aug 5 22:12:42.182158 coreos-metadata[2035]: Aug 05 22:12:42.182 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 5 22:12:42.190931 coreos-metadata[2035]: Aug 05 22:12:42.189 INFO Fetch successful Aug 5 22:12:42.192224 locksmithd[1986]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:12:42.192902 unknown[2035]: wrote ssh authorized keys file for user: core Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: Initializing new seelog logger Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: New Seelog Logger Creation Complete Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 processing appconfig overrides Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 processing appconfig overrides Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO Proxy environment variables: Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.203868 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 processing appconfig overrides Aug 5 22:12:42.220137 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Aug 5 22:12:42.220137 amazon-ssm-agent[2065]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Aug 5 22:12:42.220137 amazon-ssm-agent[2065]: 2024/08/05 22:12:42 processing appconfig overrides Aug 5 22:12:42.243702 polkitd[2098]: Started polkitd version 121 Aug 5 22:12:42.281347 update-ssh-keys[2121]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:12:42.281724 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 5 22:12:42.289606 polkitd[2098]: Loading rules from directory /etc/polkit-1/rules.d Aug 5 22:12:42.292823 systemd[1]: Finished sshkeys.service. Aug 5 22:12:42.289727 polkitd[2098]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 5 22:12:42.294534 polkitd[2098]: Finished loading, compiling and executing 2 rules Aug 5 22:12:42.301658 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO https_proxy: Aug 5 22:12:42.306624 dbus-daemon[1933]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 5 22:12:42.310053 systemd[1]: Started polkit.service - Authorization Manager. Aug 5 22:12:42.353596 polkitd[2098]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 5 22:12:42.414333 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO http_proxy: Aug 5 22:12:42.468201 systemd-resolved[1761]: System hostname changed to 'ip-172-31-23-76'. Aug 5 22:12:42.468997 sshd_keygen[1976]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:12:42.475550 systemd-hostnamed[1983]: Hostname set to (transient) Aug 5 22:12:42.584798 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO no_proxy: Aug 5 22:12:42.695269 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO Checking if agent identity type OnPrem can be assumed Aug 5 22:12:42.715769 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:12:42.734584 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:12:42.788073 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:12:42.790755 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO Checking if agent identity type EC2 can be assumed Aug 5 22:12:42.790370 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:12:42.807458 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:12:42.854190 containerd[1968]: time="2024-08-05T22:12:42.853504067Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Aug 5 22:12:42.854286 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:12:42.871067 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:12:42.881313 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:12:42.884389 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:12:42.904214 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO Agent will take identity from EC2 Aug 5 22:12:42.991999 containerd[1968]: time="2024-08-05T22:12:42.991883167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:12:42.992323 containerd[1968]: time="2024-08-05T22:12:42.992295558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.995165 containerd[1968]: time="2024-08-05T22:12:42.995093054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:12:42.995321 containerd[1968]: time="2024-08-05T22:12:42.995302535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.995800 containerd[1968]: time="2024-08-05T22:12:42.995766909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:12:42.996170 containerd[1968]: time="2024-08-05T22:12:42.996148295Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:12:42.996371 containerd[1968]: time="2024-08-05T22:12:42.996352419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.996574 containerd[1968]: time="2024-08-05T22:12:42.996549167Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:12:42.996665 containerd[1968]: time="2024-08-05T22:12:42.996649329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.996813 containerd[1968]: time="2024-08-05T22:12:42.996797850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.999716 containerd[1968]: time="2024-08-05T22:12:42.999630643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:42.999871 containerd[1968]: time="2024-08-05T22:12:42.999852087Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:12:42.999955 containerd[1968]: time="2024-08-05T22:12:42.999941262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:12:43.000917 containerd[1968]: time="2024-08-05T22:12:43.000864281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:12:43.001129 containerd[1968]: time="2024-08-05T22:12:43.001081590Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:12:43.001523 containerd[1968]: time="2024-08-05T22:12:43.001498241Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:12:43.001854 containerd[1968]: time="2024-08-05T22:12:43.001612364Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:12:43.003354 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.010870276Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011172146Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011209218Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011260127Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011288225Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011470345Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011509233Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011758903Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011959413Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.011988010Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:12:43.013161 containerd[1968]: time="2024-08-05T22:12:43.012014975Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016258452Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016328699Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016419398Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016452016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016898170Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016936821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016966099Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.016992048Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:12:43.018138 containerd[1968]: time="2024-08-05T22:12:43.017245847Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Aug 5 22:12:43.023644 containerd[1968]: time="2024-08-05T22:12:43.023600706Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:12:43.023847 containerd[1968]: time="2024-08-05T22:12:43.023824770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.023945 containerd[1968]: time="2024-08-05T22:12:43.023927563Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:12:43.024054 containerd[1968]: time="2024-08-05T22:12:43.024035631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025254208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025302835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025324998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025344721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025365112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025384868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025403755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025422280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025446679Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025631068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025662989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025681846Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025700645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025720211Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.026126 containerd[1968]: time="2024-08-05T22:12:43.025740287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.027070 containerd[1968]: time="2024-08-05T22:12:43.025760169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Aug 5 22:12:43.027070 containerd[1968]: time="2024-08-05T22:12:43.025865073Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 22:12:43.030276 containerd[1968]: time="2024-08-05T22:12:43.028753213Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:12:43.030276 containerd[1968]: time="2024-08-05T22:12:43.028866662Z" level=info msg="Connect containerd service" Aug 5 22:12:43.030276 containerd[1968]: time="2024-08-05T22:12:43.028937528Z" level=info msg="using legacy CRI server" Aug 5 22:12:43.030276 containerd[1968]: time="2024-08-05T22:12:43.028951250Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:12:43.030276 containerd[1968]: time="2024-08-05T22:12:43.029097356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.034791333Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.034886761Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.034960768Z" level=info msg="Start subscribing containerd event" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.035016027Z" level=info msg="Start recovering state" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.035100991Z" level=info msg="Start event monitor" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.035150539Z" level=info msg="Start snapshots syncer" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.035164748Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:12:43.035286 containerd[1968]: time="2024-08-05T22:12:43.035178129Z" level=info msg="Start streaming server" Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.037167709Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.037440347Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.037501545Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.037830125Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.037947228Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:12:43.038477 containerd[1968]: time="2024-08-05T22:12:43.038299188Z" level=info msg="containerd successfully booted in 0.191441s" Aug 5 22:12:43.038165 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:12:43.105347 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:43.205776 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] using named pipe channel for IPC Aug 5 22:12:43.239758 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Aug 5 22:12:43.239758 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Aug 5 22:12:43.239758 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] Starting Core Agent Aug 5 22:12:43.239758 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [amazon-ssm-agent] registrar detected. Attempting registration Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [Registrar] Starting registrar module Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:42 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:43 INFO [EC2Identity] EC2 registration was successful. 
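containerd comes up above with the runc runtime set to SystemdCgroup:true, and the CNI warning ("no network config found in /etc/cni/net.d") is expected at this stage, since no network plugin has installed a config yet. A small sketch for verifying the runtime from the host; crictl is assumed to be available and is not something the log itself shows:

    # Annotation (not log output): basic checks against the freshly started containerd.
    ls /etc/cni/net.d                                    # empty until a CNI plugin drops a config
    ctr --address /run/containerd/containerd.sock version
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info | head -n 20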
Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:43 INFO [CredentialRefresher] credentialRefresher has started Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:43 INFO [CredentialRefresher] Starting credentials refresher loop Aug 5 22:12:43.240169 amazon-ssm-agent[2065]: 2024-08-05 22:12:43 INFO EC2RoleProvider Successfully connected with instance profile role credentials Aug 5 22:12:43.304814 amazon-ssm-agent[2065]: 2024-08-05 22:12:43 INFO [CredentialRefresher] Next credential rotation will be in 32.433326370433335 minutes Aug 5 22:12:43.463529 tar[1956]: linux-amd64/LICENSE Aug 5 22:12:43.463529 tar[1956]: linux-amd64/README.md Aug 5 22:12:43.485437 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:12:43.891457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:43.893680 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:12:43.896003 systemd[1]: Startup finished in 722ms (kernel) + 9.452s (initrd) + 7.772s (userspace) = 17.947s. Aug 5 22:12:44.050741 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:44.215289 ntpd[1937]: Listen normally on 6 eth0 [fe80::4a1:63ff:fed6:e16f%2]:123 Aug 5 22:12:44.215715 ntpd[1937]: 5 Aug 22:12:44 ntpd[1937]: Listen normally on 6 eth0 [fe80::4a1:63ff:fed6:e16f%2]:123 Aug 5 22:12:44.255208 amazon-ssm-agent[2065]: 2024-08-05 22:12:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Aug 5 22:12:44.356537 amazon-ssm-agent[2065]: 2024-08-05 22:12:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2191) started Aug 5 22:12:44.456216 amazon-ssm-agent[2065]: 2024-08-05 22:12:44 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Aug 5 22:12:44.933803 kubelet[2181]: E0805 22:12:44.933719 2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:44.937157 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:44.938149 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:44.939306 systemd[1]: kubelet.service: Consumed 1.087s CPU time. Aug 5 22:12:48.744527 systemd-resolved[1761]: Clock change detected. Flushing caches. Aug 5 22:12:51.451882 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:12:51.459469 systemd[1]: Started sshd@0-172.31.23.76:22-139.178.89.65:33244.service - OpenSSH per-connection server daemon (139.178.89.65:33244). Aug 5 22:12:51.659175 sshd[2205]: Accepted publickey for core from 139.178.89.65 port 33244 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:51.661317 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:51.676758 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:12:51.685812 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:12:51.689134 systemd-logind[1945]: New session 1 of user core. 
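The kubelet exit above is the usual pre-bootstrap state: the unit starts at boot, finds no /var/lib/kubelet/config.yaml, and exits, because that file is only written later by kubeadm init or kubeadm join. A short sketch of how the failure can be confirmed on the node; these are standard commands added as an annotation, not log content:

    # Annotation (not log output): confirming the kubelet crash cause.
    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager
    ls -l /var/lib/kubelet/config.yaml   # missing until kubeadm init/join creates it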
Aug 5 22:12:51.724637 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:12:51.732212 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:12:51.742606 (systemd)[2209]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:51.867790 systemd[2209]: Queued start job for default target default.target. Aug 5 22:12:51.876557 systemd[2209]: Created slice app.slice - User Application Slice. Aug 5 22:12:51.876599 systemd[2209]: Reached target paths.target - Paths. Aug 5 22:12:51.876619 systemd[2209]: Reached target timers.target - Timers. Aug 5 22:12:51.879800 systemd[2209]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:12:51.899217 systemd[2209]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:12:51.899372 systemd[2209]: Reached target sockets.target - Sockets. Aug 5 22:12:51.899392 systemd[2209]: Reached target basic.target - Basic System. Aug 5 22:12:51.899443 systemd[2209]: Reached target default.target - Main User Target. Aug 5 22:12:51.899481 systemd[2209]: Startup finished in 149ms. Aug 5 22:12:51.899781 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:12:51.910419 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:12:52.064436 systemd[1]: Started sshd@1-172.31.23.76:22-139.178.89.65:33246.service - OpenSSH per-connection server daemon (139.178.89.65:33246). Aug 5 22:12:52.233452 sshd[2220]: Accepted publickey for core from 139.178.89.65 port 33246 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:52.235257 sshd[2220]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:52.240136 systemd-logind[1945]: New session 2 of user core. Aug 5 22:12:52.247330 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 22:12:52.372786 sshd[2220]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:52.376578 systemd[1]: sshd@1-172.31.23.76:22-139.178.89.65:33246.service: Deactivated successfully. Aug 5 22:12:52.378525 systemd[1]: session-2.scope: Deactivated successfully. Aug 5 22:12:52.379950 systemd-logind[1945]: Session 2 logged out. Waiting for processes to exit. Aug 5 22:12:52.381208 systemd-logind[1945]: Removed session 2. Aug 5 22:12:52.403273 systemd[1]: Started sshd@2-172.31.23.76:22-139.178.89.65:33252.service - OpenSSH per-connection server daemon (139.178.89.65:33252). Aug 5 22:12:52.569729 sshd[2227]: Accepted publickey for core from 139.178.89.65 port 33252 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:52.571385 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:52.577137 systemd-logind[1945]: New session 3 of user core. Aug 5 22:12:52.587386 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:12:52.700131 sshd[2227]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:52.707986 systemd[1]: sshd@2-172.31.23.76:22-139.178.89.65:33252.service: Deactivated successfully. Aug 5 22:12:52.712395 systemd[1]: session-3.scope: Deactivated successfully. Aug 5 22:12:52.714013 systemd-logind[1945]: Session 3 logged out. Waiting for processes to exit. Aug 5 22:12:52.715666 systemd-logind[1945]: Removed session 3. Aug 5 22:12:52.743520 systemd[1]: Started sshd@3-172.31.23.76:22-139.178.89.65:33262.service - OpenSSH per-connection server daemon (139.178.89.65:33262). 
Aug 5 22:12:52.906292 sshd[2234]: Accepted publickey for core from 139.178.89.65 port 33262 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:52.908293 sshd[2234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:52.916801 systemd-logind[1945]: New session 4 of user core. Aug 5 22:12:52.925349 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:12:53.057011 sshd[2234]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:53.062750 systemd[1]: sshd@3-172.31.23.76:22-139.178.89.65:33262.service: Deactivated successfully. Aug 5 22:12:53.068964 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:12:53.070618 systemd-logind[1945]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:12:53.096492 systemd-logind[1945]: Removed session 4. Aug 5 22:12:53.108778 systemd[1]: Started sshd@4-172.31.23.76:22-139.178.89.65:33264.service - OpenSSH per-connection server daemon (139.178.89.65:33264). Aug 5 22:12:53.266232 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 33264 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:53.267741 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:53.273064 systemd-logind[1945]: New session 5 of user core. Aug 5 22:12:53.279296 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:12:53.406173 sudo[2244]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:12:53.406545 sudo[2244]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:53.423676 sudo[2244]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:53.445982 sshd[2241]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:53.450632 systemd[1]: sshd@4-172.31.23.76:22-139.178.89.65:33264.service: Deactivated successfully. Aug 5 22:12:53.452433 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:12:53.453205 systemd-logind[1945]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:12:53.454565 systemd-logind[1945]: Removed session 5. Aug 5 22:12:53.484989 systemd[1]: Started sshd@5-172.31.23.76:22-139.178.89.65:33272.service - OpenSSH per-connection server daemon (139.178.89.65:33272). Aug 5 22:12:53.639438 sshd[2249]: Accepted publickey for core from 139.178.89.65 port 33272 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:53.640940 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:53.645733 systemd-logind[1945]: New session 6 of user core. Aug 5 22:12:53.654311 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:12:53.753384 sudo[2253]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:12:53.753742 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:53.758299 sudo[2253]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:53.763945 sudo[2252]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:12:53.764439 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:53.790605 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:53.795471 auditctl[2256]: No rules Aug 5 22:12:53.795891 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 5 22:12:53.796129 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:53.799237 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:53.866005 augenrules[2274]: No rules Aug 5 22:12:53.867785 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:53.869478 sudo[2252]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:53.892977 sshd[2249]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:53.896656 systemd[1]: sshd@5-172.31.23.76:22-139.178.89.65:33272.service: Deactivated successfully. Aug 5 22:12:53.898772 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:12:53.900999 systemd-logind[1945]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:12:53.902751 systemd-logind[1945]: Removed session 6. Aug 5 22:12:53.928525 systemd[1]: Started sshd@6-172.31.23.76:22-139.178.89.65:33280.service - OpenSSH per-connection server daemon (139.178.89.65:33280). Aug 5 22:12:54.094522 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 33280 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:12:54.096189 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:54.102975 systemd-logind[1945]: New session 7 of user core. Aug 5 22:12:54.110365 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:12:54.209884 sudo[2285]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:12:54.210301 sudo[2285]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:54.405574 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:12:54.409113 (dockerd)[2294]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:12:54.989404 dockerd[2294]: time="2024-08-05T22:12:54.989340986Z" level=info msg="Starting up" Aug 5 22:12:55.020617 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3353192021-merged.mount: Deactivated successfully. Aug 5 22:12:55.058797 systemd[1]: var-lib-docker-metacopy\x2dcheck1234827490-merged.mount: Deactivated successfully. Aug 5 22:12:55.119008 dockerd[2294]: time="2024-08-05T22:12:55.118647783Z" level=info msg="Loading containers: start." Aug 5 22:12:55.342106 kernel: Initializing XFRM netlink socket Aug 5 22:12:55.396764 (udev-worker)[2306]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:12:55.480698 systemd-networkd[1809]: docker0: Link UP Aug 5 22:12:55.500572 dockerd[2294]: time="2024-08-05T22:12:55.500527117Z" level=info msg="Loading containers: done." Aug 5 22:12:55.584576 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:12:55.590542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
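The "Scheduled restart job, restart counter is at 1" line shows systemd's restart policy re-queuing kubelet after each failed start. The effective policy can be read back from the unit itself rather than guessed at; a sketch, again not part of the log:

    # Annotation (not log output): inspect the restart policy behind the retry loop.
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts
    systemctl cat kubelet.service | grep -i restart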
Aug 5 22:12:55.867288 dockerd[2294]: time="2024-08-05T22:12:55.867233586Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:12:55.867511 dockerd[2294]: time="2024-08-05T22:12:55.867491306Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:12:55.867654 dockerd[2294]: time="2024-08-05T22:12:55.867617634Z" level=info msg="Daemon has completed initialization" Aug 5 22:12:55.977501 dockerd[2294]: time="2024-08-05T22:12:55.977161599Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:12:55.977270 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:12:56.014715 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3795168300-merged.mount: Deactivated successfully. Aug 5 22:12:56.342465 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:56.355508 (kubelet)[2426]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:56.468956 kubelet[2426]: E0805 22:12:56.468849 2426 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:56.474048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:56.474530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:57.435965 containerd[1968]: time="2024-08-05T22:12:57.435636027Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\"" Aug 5 22:12:58.166001 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1154389656.mount: Deactivated successfully. 
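dockerd 24.0.9 is now serving on /run/docker.sock with the overlay2 graph driver, and it warns that native diff is disabled because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. A quick way to confirm the daemon from the same host, shown as an annotation rather than log content:

    # Annotation (not log output): confirm the Docker daemon and storage driver.
    docker version --format '{{.Server.Version}}'   # expect 24.0.9 per the log
    docker info --format '{{.Driver}}'              # expect overlay2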
Aug 5 22:13:01.561464 containerd[1968]: time="2024-08-05T22:13:01.561402495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:01.564641 containerd[1968]: time="2024-08-05T22:13:01.564093357Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=35232396" Aug 5 22:13:01.567206 containerd[1968]: time="2024-08-05T22:13:01.567115254Z" level=info msg="ImageCreate event name:\"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:01.574235 containerd[1968]: time="2024-08-05T22:13:01.574156711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:01.577751 containerd[1968]: time="2024-08-05T22:13:01.577232408Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"35229196\" in 4.141553847s" Aug 5 22:13:01.577751 containerd[1968]: time="2024-08-05T22:13:01.577363946Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:a2e0d7fa8464a06b07519d78f53fef101bb1bcf716a85f2ac8b397f1a0025bea\"" Aug 5 22:13:01.667825 containerd[1968]: time="2024-08-05T22:13:01.667770911Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\"" Aug 5 22:13:05.318934 containerd[1968]: time="2024-08-05T22:13:05.318872992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:05.320682 containerd[1968]: time="2024-08-05T22:13:05.320449032Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=32204824" Aug 5 22:13:05.323777 containerd[1968]: time="2024-08-05T22:13:05.322130982Z" level=info msg="ImageCreate event name:\"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:05.326300 containerd[1968]: time="2024-08-05T22:13:05.326258766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:05.327750 containerd[1968]: time="2024-08-05T22:13:05.327704375Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"33754770\" in 3.659879824s" Aug 5 22:13:05.327981 containerd[1968]: time="2024-08-05T22:13:05.327956472Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:32fe966e5c2b2a05d6b6a56a63a60e09d4c227ec1742d68f921c0b72e23537f8\"" Aug 5 22:13:05.359937 containerd[1968]: 
time="2024-08-05T22:13:05.359892344Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\"" Aug 5 22:13:06.590279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:13:06.598123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:07.483428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:07.492099 (kubelet)[2522]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:13:07.610612 kubelet[2522]: E0805 22:13:07.610478 2522 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:13:07.617667 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:13:07.617862 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:13:07.800045 containerd[1968]: time="2024-08-05T22:13:07.799594168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:07.813578 containerd[1968]: time="2024-08-05T22:13:07.813177834Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=17320803" Aug 5 22:13:07.824055 containerd[1968]: time="2024-08-05T22:13:07.822498037Z" level=info msg="ImageCreate event name:\"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:07.835147 containerd[1968]: time="2024-08-05T22:13:07.835089806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:07.836649 containerd[1968]: time="2024-08-05T22:13:07.836600031Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"18870767\" in 2.476658159s" Aug 5 22:13:07.836845 containerd[1968]: time="2024-08-05T22:13:07.836653433Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:9cffb486021b39220589cbd71b6537e6f9cafdede1eba315b4b0dc83e2f4fc8e\"" Aug 5 22:13:07.896254 containerd[1968]: time="2024-08-05T22:13:07.896209597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\"" Aug 5 22:13:09.339648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712702951.mount: Deactivated successfully. 
Aug 5 22:13:10.350020 containerd[1968]: time="2024-08-05T22:13:10.349939122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:10.356461 containerd[1968]: time="2024-08-05T22:13:10.356400816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=28600088" Aug 5 22:13:10.362691 containerd[1968]: time="2024-08-05T22:13:10.362616661Z" level=info msg="ImageCreate event name:\"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:10.377729 containerd[1968]: time="2024-08-05T22:13:10.376587447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:10.378085 containerd[1968]: time="2024-08-05T22:13:10.378030637Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"28599107\" in 2.481773584s" Aug 5 22:13:10.378203 containerd[1968]: time="2024-08-05T22:13:10.378180028Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:cc8c46cf9d741d1e8a357e5899f298d2f4ac4d890a2d248026b57e130e91cd07\"" Aug 5 22:13:10.409204 containerd[1968]: time="2024-08-05T22:13:10.409162719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Aug 5 22:13:11.128700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount511645702.mount: Deactivated successfully. 
Aug 5 22:13:12.538426 containerd[1968]: time="2024-08-05T22:13:12.538368111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:12.541205 containerd[1968]: time="2024-08-05T22:13:12.541141383Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Aug 5 22:13:12.544800 containerd[1968]: time="2024-08-05T22:13:12.544743371Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:12.550141 containerd[1968]: time="2024-08-05T22:13:12.549930005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:12.551515 containerd[1968]: time="2024-08-05T22:13:12.551321190Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.142106515s" Aug 5 22:13:12.551515 containerd[1968]: time="2024-08-05T22:13:12.551367637Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Aug 5 22:13:12.585217 containerd[1968]: time="2024-08-05T22:13:12.585179078Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:13:13.039023 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 5 22:13:13.166700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3260177589.mount: Deactivated successfully. 
Aug 5 22:13:13.177797 containerd[1968]: time="2024-08-05T22:13:13.177740542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:13.180387 containerd[1968]: time="2024-08-05T22:13:13.179354897Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Aug 5 22:13:13.182145 containerd[1968]: time="2024-08-05T22:13:13.181989725Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:13.186287 containerd[1968]: time="2024-08-05T22:13:13.186242748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:13.188591 containerd[1968]: time="2024-08-05T22:13:13.188445250Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.221836ms" Aug 5 22:13:13.188591 containerd[1968]: time="2024-08-05T22:13:13.188492070Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 5 22:13:13.225542 containerd[1968]: time="2024-08-05T22:13:13.225492715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 22:13:13.857018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2238764968.mount: Deactivated successfully. Aug 5 22:13:17.019985 containerd[1968]: time="2024-08-05T22:13:17.019923857Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.022707 containerd[1968]: time="2024-08-05T22:13:17.022643061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Aug 5 22:13:17.025732 containerd[1968]: time="2024-08-05T22:13:17.024828421Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.028905 containerd[1968]: time="2024-08-05T22:13:17.028862385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.031014 containerd[1968]: time="2024-08-05T22:13:17.030965778Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.805430153s" Aug 5 22:13:17.031158 containerd[1968]: time="2024-08-05T22:13:17.031024278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 5 22:13:17.657067 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
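The pulls above (kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.29.7, coredns v1.11.1, pause 3.9, etcd 3.5.10-0) match the standard kubeadm control-plane image set, presumably fetched by whatever the earlier install.sh run kicked off. A sketch of how the same set can be listed or pre-pulled by hand; not part of the log, and crictl availability is assumed:

    # Annotation (not log output): list or pre-pull the control-plane images.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    kubeadm config images pull --kubernetes-version v1.29.7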
Aug 5 22:13:17.662486 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:18.319452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:18.328541 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:13:18.422152 kubelet[2723]: E0805 22:13:18.422064 2723 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:13:18.427178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:13:18.427385 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:13:22.235065 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:22.246692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:22.292313 systemd[1]: Reloading requested from client PID 2737 ('systemctl') (unit session-7.scope)... Aug 5 22:13:22.292332 systemd[1]: Reloading... Aug 5 22:13:22.435300 zram_generator::config[2775]: No configuration found. Aug 5 22:13:22.605032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:13:22.735393 systemd[1]: Reloading finished in 442 ms. Aug 5 22:13:22.781800 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:13:22.781915 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:13:22.782214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:22.789725 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:23.237049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:23.249101 (kubelet)[2832]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:13:23.326523 kubelet[2832]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:23.326523 kubelet[2832]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:13:23.326523 kubelet[2832]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 22:13:23.329188 kubelet[2832]: I0805 22:13:23.329115 2832 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:13:23.662602 kubelet[2832]: I0805 22:13:23.662562 2832 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 22:13:23.662602 kubelet[2832]: I0805 22:13:23.662593 2832 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:13:23.663156 kubelet[2832]: I0805 22:13:23.663131 2832 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 22:13:23.708728 kubelet[2832]: E0805 22:13:23.708656 2832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.709343 kubelet[2832]: I0805 22:13:23.709054 2832 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:13:23.732620 kubelet[2832]: I0805 22:13:23.732582 2832 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:13:23.737410 kubelet[2832]: I0805 22:13:23.737364 2832 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:13:23.738740 kubelet[2832]: I0805 22:13:23.738694 2832 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:13:23.739386 kubelet[2832]: I0805 22:13:23.739356 2832 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:13:23.739453 kubelet[2832]: I0805 22:13:23.739395 2832 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:13:23.739690 kubelet[2832]: I0805 22:13:23.739665 2832 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:23.739823 kubelet[2832]: I0805 22:13:23.739807 2832 kubelet.go:396] "Attempting to sync node with API server" Aug 5 22:13:23.739884 kubelet[2832]: I0805 
22:13:23.739832 2832 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:13:23.739884 kubelet[2832]: I0805 22:13:23.739875 2832 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:13:23.739954 kubelet[2832]: I0805 22:13:23.739894 2832 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:13:23.745381 kubelet[2832]: W0805 22:13:23.744827 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-76&limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.745381 kubelet[2832]: E0805 22:13:23.744907 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-76&limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.746349 kubelet[2832]: I0805 22:13:23.746326 2832 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:13:23.753866 kubelet[2832]: I0805 22:13:23.753830 2832 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:13:23.756574 kubelet[2832]: W0805 22:13:23.756320 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.757860 kubelet[2832]: E0805 22:13:23.756998 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.757860 kubelet[2832]: W0805 22:13:23.756022 2832 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Aug 5 22:13:23.757860 kubelet[2832]: I0805 22:13:23.757709 2832 server.go:1256] "Started kubelet" Aug 5 22:13:23.757860 kubelet[2832]: I0805 22:13:23.757799 2832 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:13:23.759359 kubelet[2832]: I0805 22:13:23.758870 2832 server.go:461] "Adding debug handlers to kubelet server" Aug 5 22:13:23.762813 kubelet[2832]: I0805 22:13:23.762662 2832 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:13:23.765309 kubelet[2832]: I0805 22:13:23.765285 2832 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:13:23.768112 kubelet[2832]: I0805 22:13:23.765677 2832 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:13:23.771707 kubelet[2832]: E0805 22:13:23.769736 2832 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.76:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.76:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-76.17e8f4c980eab6e2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-76,UID:ip-172-31-23-76,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-76,},FirstTimestamp:2024-08-05 22:13:23.75768445 +0000 UTC m=+0.501396073,LastTimestamp:2024-08-05 22:13:23.75768445 +0000 UTC m=+0.501396073,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-76,}" Aug 5 22:13:23.779142 kubelet[2832]: I0805 22:13:23.778018 2832 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:13:23.780767 kubelet[2832]: I0805 22:13:23.780738 2832 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:13:23.780873 kubelet[2832]: I0805 22:13:23.780821 2832 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:13:23.782210 kubelet[2832]: E0805 22:13:23.782188 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": dial tcp 172.31.23.76:6443: connect: connection refused" interval="200ms" Aug 5 22:13:23.782335 kubelet[2832]: I0805 22:13:23.782320 2832 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:13:23.782439 kubelet[2832]: I0805 22:13:23.782425 2832 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:13:23.785120 kubelet[2832]: W0805 22:13:23.785044 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.785207 kubelet[2832]: E0805 22:13:23.785132 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.785581 kubelet[2832]: E0805 22:13:23.785562 2832 
kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:13:23.786587 kubelet[2832]: I0805 22:13:23.786561 2832 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:13:23.801326 kubelet[2832]: I0805 22:13:23.801224 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:13:23.802943 kubelet[2832]: I0805 22:13:23.802909 2832 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:13:23.803054 kubelet[2832]: I0805 22:13:23.802955 2832 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:13:23.803054 kubelet[2832]: I0805 22:13:23.802978 2832 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 22:13:23.803054 kubelet[2832]: E0805 22:13:23.803036 2832 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:13:23.819683 kubelet[2832]: W0805 22:13:23.817842 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.819683 kubelet[2832]: E0805 22:13:23.817914 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:23.829785 kubelet[2832]: I0805 22:13:23.829751 2832 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:13:23.829785 kubelet[2832]: I0805 22:13:23.829774 2832 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:13:23.829785 kubelet[2832]: I0805 22:13:23.829794 2832 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:23.833753 kubelet[2832]: I0805 22:13:23.833718 2832 policy_none.go:49] "None policy: Start" Aug 5 22:13:23.834482 kubelet[2832]: I0805 22:13:23.834458 2832 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:13:23.834482 kubelet[2832]: I0805 22:13:23.834486 2832 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:13:23.846653 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:13:23.860397 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:13:23.864545 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Aug 5 22:13:23.877256 kubelet[2832]: I0805 22:13:23.877217 2832 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:13:23.877546 kubelet[2832]: I0805 22:13:23.877524 2832 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:13:23.882038 kubelet[2832]: E0805 22:13:23.882017 2832 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-76\" not found" Aug 5 22:13:23.882472 kubelet[2832]: I0805 22:13:23.882426 2832 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:23.882933 kubelet[2832]: E0805 22:13:23.882909 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.76:6443/api/v1/nodes\": dial tcp 172.31.23.76:6443: connect: connection refused" node="ip-172-31-23-76" Aug 5 22:13:23.903861 kubelet[2832]: I0805 22:13:23.903822 2832 topology_manager.go:215] "Topology Admit Handler" podUID="cffec846eb8a63c905909af7b90e3433" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-76" Aug 5 22:13:23.914369 kubelet[2832]: I0805 22:13:23.914250 2832 topology_manager.go:215] "Topology Admit Handler" podUID="2007eebafeeddb4baffca141ce9bda75" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-76" Aug 5 22:13:23.917574 kubelet[2832]: I0805 22:13:23.917541 2832 topology_manager.go:215] "Topology Admit Handler" podUID="5fe95be357996ce2302d9c9f91aa8f13" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.946414 systemd[1]: Created slice kubepods-burstable-podcffec846eb8a63c905909af7b90e3433.slice - libcontainer container kubepods-burstable-podcffec846eb8a63c905909af7b90e3433.slice. Aug 5 22:13:23.964145 systemd[1]: Created slice kubepods-burstable-pod2007eebafeeddb4baffca141ce9bda75.slice - libcontainer container kubepods-burstable-pod2007eebafeeddb4baffca141ce9bda75.slice. Aug 5 22:13:23.974713 systemd[1]: Created slice kubepods-burstable-pod5fe95be357996ce2302d9c9f91aa8f13.slice - libcontainer container kubepods-burstable-pod5fe95be357996ce2302d9c9f91aa8f13.slice. 
Aug 5 22:13:23.982065 kubelet[2832]: I0805 22:13:23.982020 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.982065 kubelet[2832]: I0805 22:13:23.982304 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.982065 kubelet[2832]: I0805 22:13:23.982373 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-ca-certs\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:23.982065 kubelet[2832]: I0805 22:13:23.982420 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:23.982065 kubelet[2832]: I0805 22:13:23.982458 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:23.982800 kubelet[2832]: I0805 22:13:23.982486 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.982800 kubelet[2832]: I0805 22:13:23.982538 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.982800 kubelet[2832]: I0805 22:13:23.982572 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:23.982800 kubelet[2832]: I0805 22:13:23.982599 2832 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/cffec846eb8a63c905909af7b90e3433-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-76\" (UID: \"cffec846eb8a63c905909af7b90e3433\") " pod="kube-system/kube-scheduler-ip-172-31-23-76" Aug 5 22:13:23.983062 kubelet[2832]: E0805 22:13:23.982828 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": dial tcp 172.31.23.76:6443: connect: connection refused" interval="400ms" Aug 5 22:13:24.086512 kubelet[2832]: I0805 22:13:24.086488 2832 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:24.087100 kubelet[2832]: E0805 22:13:24.087056 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.76:6443/api/v1/nodes\": dial tcp 172.31.23.76:6443: connect: connection refused" node="ip-172-31-23-76" Aug 5 22:13:24.260209 containerd[1968]: time="2024-08-05T22:13:24.260059386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-76,Uid:cffec846eb8a63c905909af7b90e3433,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:24.274193 containerd[1968]: time="2024-08-05T22:13:24.274106885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-76,Uid:2007eebafeeddb4baffca141ce9bda75,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:24.279038 containerd[1968]: time="2024-08-05T22:13:24.278993934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-76,Uid:5fe95be357996ce2302d9c9f91aa8f13,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:24.383288 kubelet[2832]: E0805 22:13:24.383252 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": dial tcp 172.31.23.76:6443: connect: connection refused" interval="800ms" Aug 5 22:13:24.492353 kubelet[2832]: I0805 22:13:24.492324 2832 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:24.492784 kubelet[2832]: E0805 22:13:24.492758 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.76:6443/api/v1/nodes\": dial tcp 172.31.23.76:6443: connect: connection refused" node="ip-172-31-23-76" Aug 5 22:13:24.663067 kubelet[2832]: W0805 22:13:24.663025 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.23.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:24.663067 kubelet[2832]: E0805 22:13:24.663088 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.23.76:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:24.809833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4032453136.mount: Deactivated successfully. 
Aug 5 22:13:24.827391 containerd[1968]: time="2024-08-05T22:13:24.827294614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:24.830433 containerd[1968]: time="2024-08-05T22:13:24.830254769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:13:24.832563 containerd[1968]: time="2024-08-05T22:13:24.832522187Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:24.840873 containerd[1968]: time="2024-08-05T22:13:24.835593944Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:24.840992 kubelet[2832]: W0805 22:13:24.836032 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.23.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-76&limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:24.840992 kubelet[2832]: E0805 22:13:24.836128 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.23.76:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-76&limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:24.845378 containerd[1968]: time="2024-08-05T22:13:24.845328067Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:24.850841 containerd[1968]: time="2024-08-05T22:13:24.850775741Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:13:24.852206 containerd[1968]: time="2024-08-05T22:13:24.852130153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Aug 5 22:13:24.856681 containerd[1968]: time="2024-08-05T22:13:24.856567248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:13:24.863641 containerd[1968]: time="2024-08-05T22:13:24.860575610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 594.835787ms" Aug 5 22:13:24.867462 containerd[1968]: time="2024-08-05T22:13:24.867314492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 588.128999ms" Aug 5 22:13:24.877183 containerd[1968]: 
time="2024-08-05T22:13:24.876238692Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 601.79265ms" Aug 5 22:13:25.021309 kubelet[2832]: W0805 22:13:25.021094 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.23.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:25.021309 kubelet[2832]: E0805 22:13:25.021147 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.23.76:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:25.073490 kubelet[2832]: W0805 22:13:25.073367 2832 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.23.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:25.073490 kubelet[2832]: E0805 22:13:25.073421 2832 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.23.76:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:25.186486 kubelet[2832]: E0805 22:13:25.186456 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": dial tcp 172.31.23.76:6443: connect: connection refused" interval="1.6s" Aug 5 22:13:25.211285 containerd[1968]: time="2024-08-05T22:13:25.211186880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:25.211710 containerd[1968]: time="2024-08-05T22:13:25.211258349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.211710 containerd[1968]: time="2024-08-05T22:13:25.211285964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:25.211710 containerd[1968]: time="2024-08-05T22:13:25.211301341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.212476 containerd[1968]: time="2024-08-05T22:13:25.212303909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:25.212476 containerd[1968]: time="2024-08-05T22:13:25.212441941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.212802 containerd[1968]: time="2024-08-05T22:13:25.212475004Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:25.212802 containerd[1968]: time="2024-08-05T22:13:25.212502816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.220027 containerd[1968]: time="2024-08-05T22:13:25.219913831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:25.220641 containerd[1968]: time="2024-08-05T22:13:25.220302116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.221062 containerd[1968]: time="2024-08-05T22:13:25.220620873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:25.221062 containerd[1968]: time="2024-08-05T22:13:25.220941827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:25.294314 systemd[1]: Started cri-containerd-02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de.scope - libcontainer container 02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de. Aug 5 22:13:25.296152 kubelet[2832]: I0805 22:13:25.294499 2832 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:25.296152 kubelet[2832]: E0805 22:13:25.295049 2832 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.23.76:6443/api/v1/nodes\": dial tcp 172.31.23.76:6443: connect: connection refused" node="ip-172-31-23-76" Aug 5 22:13:25.297658 systemd[1]: Started cri-containerd-06c2b8ce5091a19afb858250c0d9ee1fb3635101172c8476ee38a41fdef685f8.scope - libcontainer container 06c2b8ce5091a19afb858250c0d9ee1fb3635101172c8476ee38a41fdef685f8. Aug 5 22:13:25.300682 systemd[1]: Started cri-containerd-ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503.scope - libcontainer container ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503. 
Aug 5 22:13:25.469457 containerd[1968]: time="2024-08-05T22:13:25.469410067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-76,Uid:5fe95be357996ce2302d9c9f91aa8f13,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de\"" Aug 5 22:13:25.477143 containerd[1968]: time="2024-08-05T22:13:25.477087272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-76,Uid:2007eebafeeddb4baffca141ce9bda75,Namespace:kube-system,Attempt:0,} returns sandbox id \"06c2b8ce5091a19afb858250c0d9ee1fb3635101172c8476ee38a41fdef685f8\"" Aug 5 22:13:25.485780 containerd[1968]: time="2024-08-05T22:13:25.485731315Z" level=info msg="CreateContainer within sandbox \"02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:13:25.486420 containerd[1968]: time="2024-08-05T22:13:25.486069686Z" level=info msg="CreateContainer within sandbox \"06c2b8ce5091a19afb858250c0d9ee1fb3635101172c8476ee38a41fdef685f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:13:25.496533 containerd[1968]: time="2024-08-05T22:13:25.496485612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-76,Uid:cffec846eb8a63c905909af7b90e3433,Namespace:kube-system,Attempt:0,} returns sandbox id \"ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503\"" Aug 5 22:13:25.500345 containerd[1968]: time="2024-08-05T22:13:25.500291752Z" level=info msg="CreateContainer within sandbox \"ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:13:25.523294 containerd[1968]: time="2024-08-05T22:13:25.523243381Z" level=info msg="CreateContainer within sandbox \"02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5\"" Aug 5 22:13:25.525172 containerd[1968]: time="2024-08-05T22:13:25.524142637Z" level=info msg="StartContainer for \"295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5\"" Aug 5 22:13:25.559819 containerd[1968]: time="2024-08-05T22:13:25.558942431Z" level=info msg="CreateContainer within sandbox \"06c2b8ce5091a19afb858250c0d9ee1fb3635101172c8476ee38a41fdef685f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"58726c4202343dd676c0dcff7daa55e5bb5e5896efeebd76c71e4bc2cca00ab2\"" Aug 5 22:13:25.560590 containerd[1968]: time="2024-08-05T22:13:25.560555631Z" level=info msg="StartContainer for \"58726c4202343dd676c0dcff7daa55e5bb5e5896efeebd76c71e4bc2cca00ab2\"" Aug 5 22:13:25.561991 containerd[1968]: time="2024-08-05T22:13:25.561939179Z" level=info msg="CreateContainer within sandbox \"ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502\"" Aug 5 22:13:25.563046 containerd[1968]: time="2024-08-05T22:13:25.563014953Z" level=info msg="StartContainer for \"91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502\"" Aug 5 22:13:25.580318 systemd[1]: Started cri-containerd-295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5.scope - libcontainer container 
295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5. Aug 5 22:13:25.624354 systemd[1]: Started cri-containerd-91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502.scope - libcontainer container 91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502. Aug 5 22:13:25.662408 systemd[1]: Started cri-containerd-58726c4202343dd676c0dcff7daa55e5bb5e5896efeebd76c71e4bc2cca00ab2.scope - libcontainer container 58726c4202343dd676c0dcff7daa55e5bb5e5896efeebd76c71e4bc2cca00ab2. Aug 5 22:13:25.702973 containerd[1968]: time="2024-08-05T22:13:25.702304251Z" level=info msg="StartContainer for \"295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5\" returns successfully" Aug 5 22:13:25.745647 containerd[1968]: time="2024-08-05T22:13:25.743542230Z" level=info msg="StartContainer for \"91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502\" returns successfully" Aug 5 22:13:25.825603 containerd[1968]: time="2024-08-05T22:13:25.824891565Z" level=info msg="StartContainer for \"58726c4202343dd676c0dcff7daa55e5bb5e5896efeebd76c71e4bc2cca00ab2\" returns successfully" Aug 5 22:13:25.874136 kubelet[2832]: E0805 22:13:25.873318 2832 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.23.76:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.23.76:6443: connect: connection refused Aug 5 22:13:26.897522 kubelet[2832]: I0805 22:13:26.896993 2832 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:27.478217 update_engine[1949]: I0805 22:13:27.478173 1949 update_attempter.cc:509] Updating boot flags... Aug 5 22:13:27.588384 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3113) Aug 5 22:13:27.929191 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 36 scanned by (udev-worker) (3114) Aug 5 22:13:29.156818 kubelet[2832]: I0805 22:13:29.156774 2832 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-76" Aug 5 22:13:29.252872 kubelet[2832]: E0805 22:13:29.252772 2832 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Aug 5 22:13:29.747506 kubelet[2832]: I0805 22:13:29.747434 2832 apiserver.go:52] "Watching apiserver" Aug 5 22:13:29.781293 kubelet[2832]: I0805 22:13:29.781178 2832 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:13:32.417196 systemd[1]: Reloading requested from client PID 3282 ('systemctl') (unit session-7.scope)... Aug 5 22:13:32.417220 systemd[1]: Reloading... Aug 5 22:13:32.630165 zram_generator::config[3323]: No configuration found. Aug 5 22:13:32.844069 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:13:32.995369 systemd[1]: Reloading finished in 577 ms. Aug 5 22:13:33.091260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:33.091655 kubelet[2832]: I0805 22:13:33.091270 2832 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:13:33.106368 systemd[1]: kubelet.service: Deactivated successfully. 
Aug 5 22:13:33.106726 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:33.114331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:13:33.554361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:13:33.563393 (kubelet)[3377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:13:33.688982 kubelet[3377]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:33.688982 kubelet[3377]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:13:33.688982 kubelet[3377]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:13:33.688982 kubelet[3377]: I0805 22:13:33.688527 3377 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:13:33.699855 kubelet[3377]: I0805 22:13:33.699802 3377 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 22:13:33.699855 kubelet[3377]: I0805 22:13:33.699834 3377 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:13:33.700306 kubelet[3377]: I0805 22:13:33.700281 3377 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 22:13:33.710982 kubelet[3377]: I0805 22:13:33.709578 3377 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:13:33.722683 kubelet[3377]: I0805 22:13:33.722546 3377 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:13:33.731744 kubelet[3377]: I0805 22:13:33.731708 3377 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:13:33.732051 kubelet[3377]: I0805 22:13:33.732027 3377 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:13:33.732325 kubelet[3377]: I0805 22:13:33.732299 3377 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:13:33.732477 kubelet[3377]: I0805 22:13:33.732335 3377 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:13:33.732477 kubelet[3377]: I0805 22:13:33.732350 3377 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:13:33.732477 kubelet[3377]: I0805 22:13:33.732392 3377 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:33.732608 kubelet[3377]: I0805 22:13:33.732557 3377 kubelet.go:396] "Attempting to sync node with API server" Aug 5 22:13:33.732608 kubelet[3377]: I0805 22:13:33.732576 3377 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:13:33.733930 kubelet[3377]: I0805 22:13:33.733778 3377 kubelet.go:312] "Adding apiserver pod source" Aug 5 22:13:33.733930 kubelet[3377]: I0805 22:13:33.733814 3377 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:13:33.742111 kubelet[3377]: I0805 22:13:33.739925 3377 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:13:33.742407 kubelet[3377]: I0805 22:13:33.742388 3377 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 22:13:33.744404 kubelet[3377]: I0805 22:13:33.743559 3377 server.go:1256] "Started kubelet" Aug 5 22:13:33.751486 kubelet[3377]: I0805 22:13:33.750581 3377 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:13:33.767062 kubelet[3377]: I0805 22:13:33.767018 3377 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:13:33.769007 kubelet[3377]: I0805 22:13:33.768970 3377 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 22:13:33.769356 kubelet[3377]: I0805 22:13:33.769322 3377 server.go:233] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:13:33.775691 kubelet[3377]: I0805 22:13:33.775153 3377 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:13:33.779600 kubelet[3377]: I0805 22:13:33.779573 3377 server.go:461] "Adding debug handlers to kubelet server" Aug 5 22:13:33.789113 kubelet[3377]: I0805 22:13:33.781541 3377 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:13:33.789113 kubelet[3377]: I0805 22:13:33.781712 3377 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:13:33.798662 kubelet[3377]: I0805 22:13:33.798626 3377 factory.go:221] Registration of the containerd container factory successfully Aug 5 22:13:33.798841 kubelet[3377]: I0805 22:13:33.798830 3377 factory.go:221] Registration of the systemd container factory successfully Aug 5 22:13:33.799417 kubelet[3377]: I0805 22:13:33.799112 3377 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 22:13:33.825909 kubelet[3377]: I0805 22:13:33.825804 3377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:13:33.828128 kubelet[3377]: I0805 22:13:33.828045 3377 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:13:33.828304 kubelet[3377]: I0805 22:13:33.828291 3377 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:13:33.828419 kubelet[3377]: I0805 22:13:33.828408 3377 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 22:13:33.828593 kubelet[3377]: E0805 22:13:33.828579 3377 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:13:33.895397 kubelet[3377]: I0805 22:13:33.895370 3377 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-23-76" Aug 5 22:13:33.913193 kubelet[3377]: I0805 22:13:33.912187 3377 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-23-76" Aug 5 22:13:33.913193 kubelet[3377]: I0805 22:13:33.912278 3377 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-23-76" Aug 5 22:13:33.929200 kubelet[3377]: E0805 22:13:33.928890 3377 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955211 3377 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955240 3377 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955261 3377 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955450 3377 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955479 3377 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:13:33.955835 kubelet[3377]: I0805 22:13:33.955488 3377 policy_none.go:49] "None policy: Start" Aug 5 22:13:33.956503 kubelet[3377]: I0805 22:13:33.956484 3377 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 22:13:33.956602 kubelet[3377]: I0805 22:13:33.956512 3377 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:13:33.956710 kubelet[3377]: I0805 22:13:33.956689 3377 state_mem.go:75] "Updated machine memory state" 
Aug 5 22:13:33.962415 kubelet[3377]: I0805 22:13:33.961993 3377 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:13:33.966219 kubelet[3377]: I0805 22:13:33.964748 3377 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:13:34.130090 kubelet[3377]: I0805 22:13:34.129406 3377 topology_manager.go:215] "Topology Admit Handler" podUID="cffec846eb8a63c905909af7b90e3433" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-23-76" Aug 5 22:13:34.130090 kubelet[3377]: I0805 22:13:34.129541 3377 topology_manager.go:215] "Topology Admit Handler" podUID="2007eebafeeddb4baffca141ce9bda75" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-23-76" Aug 5 22:13:34.130090 kubelet[3377]: I0805 22:13:34.129592 3377 topology_manager.go:215] "Topology Admit Handler" podUID="5fe95be357996ce2302d9c9f91aa8f13" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.143883 kubelet[3377]: E0805 22:13:34.143829 3377 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-23-76\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.144918 kubelet[3377]: E0805 22:13:34.144850 3377 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-23-76\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:34.195556 kubelet[3377]: I0805 22:13:34.194751 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.195556 kubelet[3377]: I0805 22:13:34.194818 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cffec846eb8a63c905909af7b90e3433-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-76\" (UID: \"cffec846eb8a63c905909af7b90e3433\") " pod="kube-system/kube-scheduler-ip-172-31-23-76" Aug 5 22:13:34.195556 kubelet[3377]: I0805 22:13:34.194856 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:34.195556 kubelet[3377]: I0805 22:13:34.194900 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:34.195556 kubelet[3377]: I0805 22:13:34.194940 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.195835 kubelet[3377]: I0805 
22:13:34.195024 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.195835 kubelet[3377]: I0805 22:13:34.195064 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.195835 kubelet[3377]: I0805 22:13:34.195126 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5fe95be357996ce2302d9c9f91aa8f13-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-76\" (UID: \"5fe95be357996ce2302d9c9f91aa8f13\") " pod="kube-system/kube-controller-manager-ip-172-31-23-76" Aug 5 22:13:34.195835 kubelet[3377]: I0805 22:13:34.195176 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2007eebafeeddb4baffca141ce9bda75-ca-certs\") pod \"kube-apiserver-ip-172-31-23-76\" (UID: \"2007eebafeeddb4baffca141ce9bda75\") " pod="kube-system/kube-apiserver-ip-172-31-23-76" Aug 5 22:13:34.742368 kubelet[3377]: I0805 22:13:34.740380 3377 apiserver.go:52] "Watching apiserver" Aug 5 22:13:34.788332 kubelet[3377]: I0805 22:13:34.788261 3377 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:13:35.082382 kubelet[3377]: I0805 22:13:35.082266 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-76" podStartSLOduration=5.082153551 podStartE2EDuration="5.082153551s" podCreationTimestamp="2024-08-05 22:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:35.031610091 +0000 UTC m=+1.451303365" watchObservedRunningTime="2024-08-05 22:13:35.082153551 +0000 UTC m=+1.501846827" Aug 5 22:13:35.117116 kubelet[3377]: I0805 22:13:35.117056 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-76" podStartSLOduration=1.117001172 podStartE2EDuration="1.117001172s" podCreationTimestamp="2024-08-05 22:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:35.083523587 +0000 UTC m=+1.503216862" watchObservedRunningTime="2024-08-05 22:13:35.117001172 +0000 UTC m=+1.536694445" Aug 5 22:13:35.152578 kubelet[3377]: I0805 22:13:35.152545 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-76" podStartSLOduration=5.152494442 podStartE2EDuration="5.152494442s" podCreationTimestamp="2024-08-05 22:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:35.118329944 +0000 UTC m=+1.538023219" watchObservedRunningTime="2024-08-05 22:13:35.152494442 +0000 
UTC m=+1.572187716" Aug 5 22:13:38.411323 sudo[2285]: pam_unix(sudo:session): session closed for user root Aug 5 22:13:38.438706 sshd[2282]: pam_unix(sshd:session): session closed for user core Aug 5 22:13:38.447855 systemd-logind[1945]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:13:38.448263 systemd[1]: sshd@6-172.31.23.76:22-139.178.89.65:33280.service: Deactivated successfully. Aug 5 22:13:38.451988 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:13:38.452364 systemd[1]: session-7.scope: Consumed 6.006s CPU time, 136.2M memory peak, 0B memory swap peak. Aug 5 22:13:38.453656 systemd-logind[1945]: Removed session 7. Aug 5 22:13:45.224412 kubelet[3377]: I0805 22:13:45.224365 3377 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:13:45.225579 containerd[1968]: time="2024-08-05T22:13:45.225538976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:13:45.225986 kubelet[3377]: I0805 22:13:45.225829 3377 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:13:45.767549 kubelet[3377]: I0805 22:13:45.767494 3377 topology_manager.go:215] "Topology Admit Handler" podUID="d279b4b9-d361-453d-af85-6c793f707ea5" podNamespace="kube-system" podName="kube-proxy-9jnfb" Aug 5 22:13:45.782961 systemd[1]: Created slice kubepods-besteffort-podd279b4b9_d361_453d_af85_6c793f707ea5.slice - libcontainer container kubepods-besteffort-podd279b4b9_d361_453d_af85_6c793f707ea5.slice. Aug 5 22:13:45.900955 kubelet[3377]: I0805 22:13:45.900913 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d279b4b9-d361-453d-af85-6c793f707ea5-kube-proxy\") pod \"kube-proxy-9jnfb\" (UID: \"d279b4b9-d361-453d-af85-6c793f707ea5\") " pod="kube-system/kube-proxy-9jnfb" Aug 5 22:13:45.901130 kubelet[3377]: I0805 22:13:45.900971 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d279b4b9-d361-453d-af85-6c793f707ea5-xtables-lock\") pod \"kube-proxy-9jnfb\" (UID: \"d279b4b9-d361-453d-af85-6c793f707ea5\") " pod="kube-system/kube-proxy-9jnfb" Aug 5 22:13:45.901130 kubelet[3377]: I0805 22:13:45.901006 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d279b4b9-d361-453d-af85-6c793f707ea5-lib-modules\") pod \"kube-proxy-9jnfb\" (UID: \"d279b4b9-d361-453d-af85-6c793f707ea5\") " pod="kube-system/kube-proxy-9jnfb" Aug 5 22:13:45.901130 kubelet[3377]: I0805 22:13:45.901059 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfgl8\" (UniqueName: \"kubernetes.io/projected/d279b4b9-d361-453d-af85-6c793f707ea5-kube-api-access-pfgl8\") pod \"kube-proxy-9jnfb\" (UID: \"d279b4b9-d361-453d-af85-6c793f707ea5\") " pod="kube-system/kube-proxy-9jnfb" Aug 5 22:13:45.963170 kubelet[3377]: I0805 22:13:45.962356 3377 topology_manager.go:215] "Topology Admit Handler" podUID="651e47be-8cd0-432e-a635-fed3dfdad108" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-wqx8f" Aug 5 22:13:45.983742 systemd[1]: Created slice kubepods-besteffort-pod651e47be_8cd0_432e_a635_fed3dfdad108.slice - libcontainer container kubepods-besteffort-pod651e47be_8cd0_432e_a635_fed3dfdad108.slice. 
Aug 5 22:13:46.091996 containerd[1968]: time="2024-08-05T22:13:46.091947699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jnfb,Uid:d279b4b9-d361-453d-af85-6c793f707ea5,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:46.103300 kubelet[3377]: I0805 22:13:46.103141 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/651e47be-8cd0-432e-a635-fed3dfdad108-var-lib-calico\") pod \"tigera-operator-76c4974c85-wqx8f\" (UID: \"651e47be-8cd0-432e-a635-fed3dfdad108\") " pod="tigera-operator/tigera-operator-76c4974c85-wqx8f" Aug 5 22:13:46.103300 kubelet[3377]: I0805 22:13:46.103194 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvcdh\" (UniqueName: \"kubernetes.io/projected/651e47be-8cd0-432e-a635-fed3dfdad108-kube-api-access-dvcdh\") pod \"tigera-operator-76c4974c85-wqx8f\" (UID: \"651e47be-8cd0-432e-a635-fed3dfdad108\") " pod="tigera-operator/tigera-operator-76c4974c85-wqx8f" Aug 5 22:13:46.149104 containerd[1968]: time="2024-08-05T22:13:46.148632243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:46.149104 containerd[1968]: time="2024-08-05T22:13:46.148692715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:46.149104 containerd[1968]: time="2024-08-05T22:13:46.148731279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:46.149104 containerd[1968]: time="2024-08-05T22:13:46.148758956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:46.189293 systemd[1]: Started cri-containerd-054d29ac98a03fed76a124b85e348d873b7ad859e1d3b95090da3a281ee35566.scope - libcontainer container 054d29ac98a03fed76a124b85e348d873b7ad859e1d3b95090da3a281ee35566. 
Aug 5 22:13:46.258580 containerd[1968]: time="2024-08-05T22:13:46.258538181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9jnfb,Uid:d279b4b9-d361-453d-af85-6c793f707ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"054d29ac98a03fed76a124b85e348d873b7ad859e1d3b95090da3a281ee35566\"" Aug 5 22:13:46.262533 containerd[1968]: time="2024-08-05T22:13:46.262492100Z" level=info msg="CreateContainer within sandbox \"054d29ac98a03fed76a124b85e348d873b7ad859e1d3b95090da3a281ee35566\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:13:46.290537 containerd[1968]: time="2024-08-05T22:13:46.290490943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wqx8f,Uid:651e47be-8cd0-432e-a635-fed3dfdad108,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:13:46.304192 containerd[1968]: time="2024-08-05T22:13:46.303234018Z" level=info msg="CreateContainer within sandbox \"054d29ac98a03fed76a124b85e348d873b7ad859e1d3b95090da3a281ee35566\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"904b69a07e4bbc1e6f9e8e19d0ba6e7fb5ec52271c07316f23427ca1887cbd17\"" Aug 5 22:13:46.307031 containerd[1968]: time="2024-08-05T22:13:46.306790549Z" level=info msg="StartContainer for \"904b69a07e4bbc1e6f9e8e19d0ba6e7fb5ec52271c07316f23427ca1887cbd17\"" Aug 5 22:13:46.351008 containerd[1968]: time="2024-08-05T22:13:46.350174461Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:46.351008 containerd[1968]: time="2024-08-05T22:13:46.350258848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:46.351008 containerd[1968]: time="2024-08-05T22:13:46.350289532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:46.351008 containerd[1968]: time="2024-08-05T22:13:46.350310222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:46.360569 systemd[1]: Started cri-containerd-904b69a07e4bbc1e6f9e8e19d0ba6e7fb5ec52271c07316f23427ca1887cbd17.scope - libcontainer container 904b69a07e4bbc1e6f9e8e19d0ba6e7fb5ec52271c07316f23427ca1887cbd17. Aug 5 22:13:46.399431 systemd[1]: Started cri-containerd-2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431.scope - libcontainer container 2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431. 
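The containerd timestamps above let you measure the gap between the kube-proxy sandbox returning from RunPodSandbox (22:13:46.258538181Z) and the StartContainer request (22:13:46.306790549Z). A small parsing sketch, using only the two timestamps copied from those entries:

```python
# Sketch: pull the RFC 3339 timestamps out of the containerd entries above and
# measure how long passed between sandbox readiness and the StartContainer call.
from datetime import datetime, timezone

def parse_containerd_ts(ts: str) -> datetime:
    """Parse containerd's nanosecond RFC 3339 timestamps (truncated to microseconds)."""
    date_part, frac = ts.rstrip("Z").split(".")
    return datetime.strptime(f"{date_part}.{frac[:6]}", "%Y-%m-%dT%H:%M:%S.%f").replace(
        tzinfo=timezone.utc
    )

sandbox_ready = parse_containerd_ts("2024-08-05T22:13:46.258538181Z")  # RunPodSandbox returns
start_request = parse_containerd_ts("2024-08-05T22:13:46.306790549Z")  # StartContainer issued
print(f"{(start_request - sandbox_ready).total_seconds() * 1000:.1f} ms")  # ~48.3 ms
```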
Aug 5 22:13:46.428517 containerd[1968]: time="2024-08-05T22:13:46.428295415Z" level=info msg="StartContainer for \"904b69a07e4bbc1e6f9e8e19d0ba6e7fb5ec52271c07316f23427ca1887cbd17\" returns successfully" Aug 5 22:13:46.464216 containerd[1968]: time="2024-08-05T22:13:46.464160341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wqx8f,Uid:651e47be-8cd0-432e-a635-fed3dfdad108,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431\"" Aug 5 22:13:46.467016 containerd[1968]: time="2024-08-05T22:13:46.466666009Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:13:46.954728 kubelet[3377]: I0805 22:13:46.954601 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9jnfb" podStartSLOduration=1.9545493139999999 podStartE2EDuration="1.954549314s" podCreationTimestamp="2024-08-05 22:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:46.954008004 +0000 UTC m=+13.373701286" watchObservedRunningTime="2024-08-05 22:13:46.954549314 +0000 UTC m=+13.374242590" Aug 5 22:13:47.936693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794521967.mount: Deactivated successfully. Aug 5 22:13:48.968709 containerd[1968]: time="2024-08-05T22:13:48.968657784Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.972502 containerd[1968]: time="2024-08-05T22:13:48.972418859Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076104" Aug 5 22:13:48.976527 containerd[1968]: time="2024-08-05T22:13:48.975668858Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.993238 containerd[1968]: time="2024-08-05T22:13:48.993189030Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:48.994951 containerd[1968]: time="2024-08-05T22:13:48.994905952Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.528183993s" Aug 5 22:13:48.995096 containerd[1968]: time="2024-08-05T22:13:48.994954838Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 5 22:13:49.003230 containerd[1968]: time="2024-08-05T22:13:49.003168955Z" level=info msg="CreateContainer within sandbox \"2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:13:49.052455 containerd[1968]: time="2024-08-05T22:13:49.052409967Z" level=info msg="CreateContainer within sandbox \"2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c\"" Aug 5 22:13:49.053811 containerd[1968]: time="2024-08-05T22:13:49.053575287Z" level=info msg="StartContainer for \"4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c\"" Aug 5 22:13:49.103650 systemd[1]: run-containerd-runc-k8s.io-4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c-runc.HkOBka.mount: Deactivated successfully. Aug 5 22:13:49.117565 systemd[1]: Started cri-containerd-4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c.scope - libcontainer container 4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c. Aug 5 22:13:49.187436 containerd[1968]: time="2024-08-05T22:13:49.186479231Z" level=info msg="StartContainer for \"4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c\" returns successfully" Aug 5 22:13:52.555505 kubelet[3377]: I0805 22:13:52.553997 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-wqx8f" podStartSLOduration=5.024318385 podStartE2EDuration="7.553934524s" podCreationTimestamp="2024-08-05 22:13:45 +0000 UTC" firstStartedPulling="2024-08-05 22:13:46.465766896 +0000 UTC m=+12.885460161" lastFinishedPulling="2024-08-05 22:13:48.995383035 +0000 UTC m=+15.415076300" observedRunningTime="2024-08-05 22:13:49.985017008 +0000 UTC m=+16.404710284" watchObservedRunningTime="2024-08-05 22:13:52.553934524 +0000 UTC m=+18.973627841" Aug 5 22:13:52.555505 kubelet[3377]: I0805 22:13:52.554217 3377 topology_manager.go:215] "Topology Admit Handler" podUID="e12e5cbc-5944-4111-99e5-142aa858e836" podNamespace="calico-system" podName="calico-typha-7c8dbfdf78-8m8gj" Aug 5 22:13:52.583043 systemd[1]: Created slice kubepods-besteffort-pode12e5cbc_5944_4111_99e5_142aa858e836.slice - libcontainer container kubepods-besteffort-pode12e5cbc_5944_4111_99e5_142aa858e836.slice. 
Aug 5 22:13:52.662738 kubelet[3377]: I0805 22:13:52.662699 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e12e5cbc-5944-4111-99e5-142aa858e836-tigera-ca-bundle\") pod \"calico-typha-7c8dbfdf78-8m8gj\" (UID: \"e12e5cbc-5944-4111-99e5-142aa858e836\") " pod="calico-system/calico-typha-7c8dbfdf78-8m8gj" Aug 5 22:13:52.663139 kubelet[3377]: I0805 22:13:52.662838 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e12e5cbc-5944-4111-99e5-142aa858e836-typha-certs\") pod \"calico-typha-7c8dbfdf78-8m8gj\" (UID: \"e12e5cbc-5944-4111-99e5-142aa858e836\") " pod="calico-system/calico-typha-7c8dbfdf78-8m8gj" Aug 5 22:13:52.663139 kubelet[3377]: I0805 22:13:52.663047 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67cl5\" (UniqueName: \"kubernetes.io/projected/e12e5cbc-5944-4111-99e5-142aa858e836-kube-api-access-67cl5\") pod \"calico-typha-7c8dbfdf78-8m8gj\" (UID: \"e12e5cbc-5944-4111-99e5-142aa858e836\") " pod="calico-system/calico-typha-7c8dbfdf78-8m8gj" Aug 5 22:13:52.691381 kubelet[3377]: I0805 22:13:52.691341 3377 topology_manager.go:215] "Topology Admit Handler" podUID="cbdcb9cf-5147-484b-9bd0-d3e7889be219" podNamespace="calico-system" podName="calico-node-7fwpf" Aug 5 22:13:52.702786 systemd[1]: Created slice kubepods-besteffort-podcbdcb9cf_5147_484b_9bd0_d3e7889be219.slice - libcontainer container kubepods-besteffort-podcbdcb9cf_5147_484b_9bd0_d3e7889be219.slice. Aug 5 22:13:52.833685 kubelet[3377]: I0805 22:13:52.832762 3377 topology_manager.go:215] "Topology Admit Handler" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" podNamespace="calico-system" podName="csi-node-driver-wxt6z" Aug 5 22:13:52.833685 kubelet[3377]: E0805 22:13:52.833156 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:13:52.864046 kubelet[3377]: I0805 22:13:52.864007 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-flexvol-driver-host\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.864585 kubelet[3377]: I0805 22:13:52.864564 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-cni-log-dir\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.864827 kubelet[3377]: I0805 22:13:52.864766 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-xtables-lock\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.865393 kubelet[3377]: I0805 22:13:52.865362 3377 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4flc2\" (UniqueName: \"kubernetes.io/projected/cbdcb9cf-5147-484b-9bd0-d3e7889be219-kube-api-access-4flc2\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.865991 kubelet[3377]: I0805 22:13:52.865859 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-var-run-calico\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867537 kubelet[3377]: I0805 22:13:52.866244 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-cni-net-dir\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867537 kubelet[3377]: I0805 22:13:52.866310 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-policysync\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867537 kubelet[3377]: I0805 22:13:52.866341 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbdcb9cf-5147-484b-9bd0-d3e7889be219-tigera-ca-bundle\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867537 kubelet[3377]: I0805 22:13:52.866427 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-lib-modules\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867537 kubelet[3377]: I0805 22:13:52.866457 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/cbdcb9cf-5147-484b-9bd0-d3e7889be219-node-certs\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867856 kubelet[3377]: I0805 22:13:52.866502 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-var-lib-calico\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.867856 kubelet[3377]: I0805 22:13:52.866766 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/cbdcb9cf-5147-484b-9bd0-d3e7889be219-cni-bin-dir\") pod \"calico-node-7fwpf\" (UID: \"cbdcb9cf-5147-484b-9bd0-d3e7889be219\") " pod="calico-system/calico-node-7fwpf" Aug 5 22:13:52.898372 containerd[1968]: time="2024-08-05T22:13:52.898134106Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-7c8dbfdf78-8m8gj,Uid:e12e5cbc-5944-4111-99e5-142aa858e836,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:52.968431 kubelet[3377]: I0805 22:13:52.967923 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3970d43f-5b24-4d33-8e29-668cdfe6b128-kubelet-dir\") pod \"csi-node-driver-wxt6z\" (UID: \"3970d43f-5b24-4d33-8e29-668cdfe6b128\") " pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:13:52.968431 kubelet[3377]: I0805 22:13:52.967989 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2hkj\" (UniqueName: \"kubernetes.io/projected/3970d43f-5b24-4d33-8e29-668cdfe6b128-kube-api-access-b2hkj\") pod \"csi-node-driver-wxt6z\" (UID: \"3970d43f-5b24-4d33-8e29-668cdfe6b128\") " pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:13:52.968431 kubelet[3377]: I0805 22:13:52.968121 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3970d43f-5b24-4d33-8e29-668cdfe6b128-socket-dir\") pod \"csi-node-driver-wxt6z\" (UID: \"3970d43f-5b24-4d33-8e29-668cdfe6b128\") " pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:13:52.968431 kubelet[3377]: I0805 22:13:52.968156 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3970d43f-5b24-4d33-8e29-668cdfe6b128-registration-dir\") pod \"csi-node-driver-wxt6z\" (UID: \"3970d43f-5b24-4d33-8e29-668cdfe6b128\") " pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:13:52.968431 kubelet[3377]: I0805 22:13:52.968234 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3970d43f-5b24-4d33-8e29-668cdfe6b128-varrun\") pod \"csi-node-driver-wxt6z\" (UID: \"3970d43f-5b24-4d33-8e29-668cdfe6b128\") " pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:13:52.982099 kubelet[3377]: E0805 22:13:52.980910 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.982099 kubelet[3377]: W0805 22:13:52.980954 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.982099 kubelet[3377]: E0805 22:13:52.980988 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.990326 kubelet[3377]: E0805 22:13:52.987443 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.990326 kubelet[3377]: W0805 22:13:52.987473 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.990326 kubelet[3377]: E0805 22:13:52.987520 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:52.990326 kubelet[3377]: E0805 22:13:52.988560 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.990326 kubelet[3377]: W0805 22:13:52.988578 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.990326 kubelet[3377]: E0805 22:13:52.988774 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.990326 kubelet[3377]: E0805 22:13:52.990213 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.990326 kubelet[3377]: W0805 22:13:52.990230 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.990990 kubelet[3377]: E0805 22:13:52.990796 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.991336 kubelet[3377]: E0805 22:13:52.991262 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.991651 kubelet[3377]: W0805 22:13:52.991587 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.992778 kubelet[3377]: E0805 22:13:52.992546 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.993993 kubelet[3377]: E0805 22:13:52.993631 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.993993 kubelet[3377]: W0805 22:13:52.993651 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.993993 kubelet[3377]: E0805 22:13:52.993674 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.996725 kubelet[3377]: E0805 22:13:52.996568 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.996725 kubelet[3377]: W0805 22:13:52.996587 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.996725 kubelet[3377]: E0805 22:13:52.996610 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:52.998172 kubelet[3377]: E0805 22:13:52.997635 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.998172 kubelet[3377]: W0805 22:13:52.997651 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.998172 kubelet[3377]: E0805 22:13:52.997674 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:52.998759 kubelet[3377]: E0805 22:13:52.998458 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:52.998759 kubelet[3377]: W0805 22:13:52.998470 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:52.998759 kubelet[3377]: E0805 22:13:52.998488 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.000537 kubelet[3377]: E0805 22:13:53.000392 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.000537 kubelet[3377]: W0805 22:13:53.000409 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.002021 kubelet[3377]: E0805 22:13:53.000428 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.003034 kubelet[3377]: E0805 22:13:53.002634 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.003034 kubelet[3377]: W0805 22:13:53.002648 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.003034 kubelet[3377]: E0805 22:13:53.002671 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.031233 kubelet[3377]: E0805 22:13:53.031200 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.032497 kubelet[3377]: W0805 22:13:53.032247 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.036456 kubelet[3377]: E0805 22:13:53.032286 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.037175 containerd[1968]: time="2024-08-05T22:13:53.034024425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:53.037175 containerd[1968]: time="2024-08-05T22:13:53.034136332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.037175 containerd[1968]: time="2024-08-05T22:13:53.034166575Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:53.037175 containerd[1968]: time="2024-08-05T22:13:53.034185839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.071240 kubelet[3377]: E0805 22:13:53.070966 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.071240 kubelet[3377]: W0805 22:13:53.070992 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.071240 kubelet[3377]: E0805 22:13:53.071020 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.071589 kubelet[3377]: E0805 22:13:53.071367 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.071589 kubelet[3377]: W0805 22:13:53.071380 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.071589 kubelet[3377]: E0805 22:13:53.071398 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.072153 kubelet[3377]: E0805 22:13:53.071605 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.072153 kubelet[3377]: W0805 22:13:53.071615 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.072153 kubelet[3377]: E0805 22:13:53.071645 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.072189 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.073449 kubelet[3377]: W0805 22:13:53.072255 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.072291 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.072694 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.073449 kubelet[3377]: W0805 22:13:53.072711 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.072753 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.073069 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.073449 kubelet[3377]: W0805 22:13:53.073092 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.073110 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.073449 kubelet[3377]: E0805 22:13:53.073342 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.075183 kubelet[3377]: W0805 22:13:53.073351 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.073399 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.074001 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.075183 kubelet[3377]: W0805 22:13:53.074013 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.074040 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.074329 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.075183 kubelet[3377]: W0805 22:13:53.074338 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.074364 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.075183 kubelet[3377]: E0805 22:13:53.074576 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.075183 kubelet[3377]: W0805 22:13:53.074586 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.074603 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.074837 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079214 kubelet[3377]: W0805 22:13:53.074847 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.074875 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.075230 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079214 kubelet[3377]: W0805 22:13:53.075393 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.075536 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.075780 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079214 kubelet[3377]: W0805 22:13:53.075789 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079214 kubelet[3377]: E0805 22:13:53.075890 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.076258 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079675 kubelet[3377]: W0805 22:13:53.076271 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.076364 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.076645 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079675 kubelet[3377]: W0805 22:13:53.076656 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.076862 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.077182 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.079675 kubelet[3377]: W0805 22:13:53.077194 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.077323 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.079675 kubelet[3377]: E0805 22:13:53.077576 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.082749 kubelet[3377]: W0805 22:13:53.077586 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.077693 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.077889 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.082749 kubelet[3377]: W0805 22:13:53.077899 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.078012 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.078330 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.082749 kubelet[3377]: W0805 22:13:53.078340 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.078984 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.082749 kubelet[3377]: E0805 22:13:53.079395 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.082749 kubelet[3377]: W0805 22:13:53.079408 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.079438 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.079727 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.083059 kubelet[3377]: W0805 22:13:53.079738 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.079778 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.080020 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.083059 kubelet[3377]: W0805 22:13:53.080030 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.080059 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.080489 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.083059 kubelet[3377]: W0805 22:13:53.080500 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.083059 kubelet[3377]: E0805 22:13:53.080596 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.086277 kubelet[3377]: E0805 22:13:53.081027 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.086277 kubelet[3377]: W0805 22:13:53.081041 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.086277 kubelet[3377]: E0805 22:13:53.081056 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:53.086277 kubelet[3377]: E0805 22:13:53.081456 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.086277 kubelet[3377]: W0805 22:13:53.081467 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.086277 kubelet[3377]: E0805 22:13:53.081484 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.092431 systemd[1]: Started cri-containerd-fc8e71987f33f6b04334455ae56aeb57038e991455521e36dcf6a202bdeee73f.scope - libcontainer container fc8e71987f33f6b04334455ae56aeb57038e991455521e36dcf6a202bdeee73f. Aug 5 22:13:53.104693 kubelet[3377]: E0805 22:13:53.104609 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:53.104693 kubelet[3377]: W0805 22:13:53.104631 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:53.104693 kubelet[3377]: E0805 22:13:53.104656 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:53.192727 containerd[1968]: time="2024-08-05T22:13:53.192553847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8dbfdf78-8m8gj,Uid:e12e5cbc-5944-4111-99e5-142aa858e836,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc8e71987f33f6b04334455ae56aeb57038e991455521e36dcf6a202bdeee73f\"" Aug 5 22:13:53.196300 containerd[1968]: time="2024-08-05T22:13:53.195740628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:13:53.307201 containerd[1968]: time="2024-08-05T22:13:53.307155518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fwpf,Uid:cbdcb9cf-5147-484b-9bd0-d3e7889be219,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:53.364392 containerd[1968]: time="2024-08-05T22:13:53.364192504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:53.365384 containerd[1968]: time="2024-08-05T22:13:53.364820135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.365384 containerd[1968]: time="2024-08-05T22:13:53.365311003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:53.365766 containerd[1968]: time="2024-08-05T22:13:53.365473277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:53.408582 systemd[1]: Started cri-containerd-6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5.scope - libcontainer container 6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5. 
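The repeated driver-call failures above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the `init` argument before the driver binary exists on the host (the `flexvol-driver-host` host path mounted into calico-node above is where it is typically installed by Calico's flexvol-driver init container). A FlexVolume driver is just an executable that answers each verb with a JSON status object, so an absent binary yields "unexpected end of JSON input". A minimal stand-in sketch, purely to illustrate the expected reply, not Calico's actual `uds` driver:

```python
#!/usr/bin/env python3
# Illustration of the FlexVolume driver contract the kubelet is probing above:
# the executable is invoked with a verb ("init", "mount", ...) and must print a
# JSON status object. This is NOT Calico's real uds binary, just the shape of
# the reply whose absence causes the "unexpected end of JSON input" errors.
import json
import sys

def main() -> int:
    verb = sys.argv[1] if len(sys.argv) > 1 else ""
    if verb == "init":
        # "attach": False tells the kubelet this driver has no attach/detach phase.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    print(json.dumps({"status": "Not supported", "message": f"verb {verb!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```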
Aug 5 22:13:53.480292 containerd[1968]: time="2024-08-05T22:13:53.479916675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fwpf,Uid:cbdcb9cf-5147-484b-9bd0-d3e7889be219,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\"" Aug 5 22:13:54.830100 kubelet[3377]: E0805 22:13:54.829573 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:13:56.285974 containerd[1968]: time="2024-08-05T22:13:56.284997918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.287740 containerd[1968]: time="2024-08-05T22:13:56.287693455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:13:56.290184 containerd[1968]: time="2024-08-05T22:13:56.290140275Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.295330 containerd[1968]: time="2024-08-05T22:13:56.295284010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.296327 containerd[1968]: time="2024-08-05T22:13:56.296294309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.100512721s" Aug 5 22:13:56.296445 containerd[1968]: time="2024-08-05T22:13:56.296432240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:13:56.302230 containerd[1968]: time="2024-08-05T22:13:56.301866401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:13:56.325631 containerd[1968]: time="2024-08-05T22:13:56.325574618Z" level=info msg="CreateContainer within sandbox \"fc8e71987f33f6b04334455ae56aeb57038e991455521e36dcf6a202bdeee73f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:13:56.381595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3367250846.mount: Deactivated successfully. 
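For scale, the calico/typha pull logged below reports 29458030 bytes read from ghcr.io and a pull completing in roughly 3.1 s, which works out to about 9.5 MB/s. A rough back-of-the-envelope check, assuming the "bytes read" figure is the registry transfer (it differs from the unpacked image size also printed in that entry):

```python
# Rough throughput estimate for the ghcr.io/flatcar/calico/typha:v3.28.0 pull.
bytes_read = 29_458_030     # "stop pulling image ... bytes read" figure
duration_s = 3.100512721    # "Pulled image ... in 3.100512721s"

mb_per_s = bytes_read / duration_s / 1e6
print(f"{mb_per_s:.2f} MB/s (~{mb_per_s * 8:.0f} Mbit/s)")   # ≈ 9.50 MB/s (~76 Mbit/s)
```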
Aug 5 22:13:56.389509 containerd[1968]: time="2024-08-05T22:13:56.389158606Z" level=info msg="CreateContainer within sandbox \"fc8e71987f33f6b04334455ae56aeb57038e991455521e36dcf6a202bdeee73f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"66b94a39327528b9e29f2d39c88aa2e4bf7f02c393d2fff1cebeaefb7a05b4b3\"" Aug 5 22:13:56.392529 containerd[1968]: time="2024-08-05T22:13:56.392490330Z" level=info msg="StartContainer for \"66b94a39327528b9e29f2d39c88aa2e4bf7f02c393d2fff1cebeaefb7a05b4b3\"" Aug 5 22:13:56.498917 systemd[1]: Started cri-containerd-66b94a39327528b9e29f2d39c88aa2e4bf7f02c393d2fff1cebeaefb7a05b4b3.scope - libcontainer container 66b94a39327528b9e29f2d39c88aa2e4bf7f02c393d2fff1cebeaefb7a05b4b3. Aug 5 22:13:56.593593 containerd[1968]: time="2024-08-05T22:13:56.593450371Z" level=info msg="StartContainer for \"66b94a39327528b9e29f2d39c88aa2e4bf7f02c393d2fff1cebeaefb7a05b4b3\" returns successfully" Aug 5 22:13:56.830896 kubelet[3377]: E0805 22:13:56.830503 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.014840 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.020555 kubelet[3377]: W0805 22:13:57.014873 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.015138 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.015709 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.020555 kubelet[3377]: W0805 22:13:57.019981 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.020027 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.020404 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.020555 kubelet[3377]: W0805 22:13:57.020419 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.020555 kubelet[3377]: E0805 22:13:57.020440 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.020698 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.026945 kubelet[3377]: W0805 22:13:57.020709 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.020726 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.021144 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.026945 kubelet[3377]: W0805 22:13:57.021156 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.021172 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.021963 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.026945 kubelet[3377]: W0805 22:13:57.021974 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.021991 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.026945 kubelet[3377]: E0805 22:13:57.022260 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.027845 kubelet[3377]: W0805 22:13:57.022270 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.022286 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.022508 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.027845 kubelet[3377]: W0805 22:13:57.022518 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.022534 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.022935 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.027845 kubelet[3377]: W0805 22:13:57.022946 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.022962 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.027845 kubelet[3377]: E0805 22:13:57.023212 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.027845 kubelet[3377]: W0805 22:13:57.023222 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.032043 kubelet[3377]: E0805 22:13:57.025178 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.032043 kubelet[3377]: E0805 22:13:57.029150 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.032043 kubelet[3377]: W0805 22:13:57.029169 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.032043 kubelet[3377]: E0805 22:13:57.029198 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.032043 kubelet[3377]: E0805 22:13:57.030449 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.032043 kubelet[3377]: W0805 22:13:57.030464 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.032043 kubelet[3377]: E0805 22:13:57.031112 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.032135 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034056 kubelet[3377]: W0805 22:13:57.032147 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.032265 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.033298 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034056 kubelet[3377]: W0805 22:13:57.033311 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.033329 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.033708 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034056 kubelet[3377]: W0805 22:13:57.033721 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034056 kubelet[3377]: E0805 22:13:57.033738 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.034486 kubelet[3377]: E0805 22:13:57.034142 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034486 kubelet[3377]: W0805 22:13:57.034156 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034486 kubelet[3377]: E0805 22:13:57.034173 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.034486 kubelet[3377]: E0805 22:13:57.034386 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034486 kubelet[3377]: W0805 22:13:57.034396 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034486 kubelet[3377]: E0805 22:13:57.034411 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.034832 kubelet[3377]: E0805 22:13:57.034619 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034832 kubelet[3377]: W0805 22:13:57.034628 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.034832 kubelet[3377]: E0805 22:13:57.034644 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.034974 kubelet[3377]: E0805 22:13:57.034961 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.034974 kubelet[3377]: W0805 22:13:57.034972 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.035051 kubelet[3377]: E0805 22:13:57.034988 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.035285 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038511 kubelet[3377]: W0805 22:13:57.035295 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.035312 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.036043 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038511 kubelet[3377]: W0805 22:13:57.036056 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.036096 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.036335 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038511 kubelet[3377]: W0805 22:13:57.036345 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.036359 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.038511 kubelet[3377]: E0805 22:13:57.037501 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038967 kubelet[3377]: W0805 22:13:57.037516 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038967 kubelet[3377]: E0805 22:13:57.037533 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.038967 kubelet[3377]: E0805 22:13:57.038290 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038967 kubelet[3377]: W0805 22:13:57.038302 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038967 kubelet[3377]: E0805 22:13:57.038319 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.038967 kubelet[3377]: E0805 22:13:57.038510 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.038967 kubelet[3377]: W0805 22:13:57.038520 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.038967 kubelet[3377]: E0805 22:13:57.038534 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.040737 kubelet[3377]: E0805 22:13:57.039580 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.040737 kubelet[3377]: W0805 22:13:57.039592 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.040737 kubelet[3377]: E0805 22:13:57.039658 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.041538 kubelet[3377]: E0805 22:13:57.041056 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.041538 kubelet[3377]: W0805 22:13:57.041068 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.041538 kubelet[3377]: E0805 22:13:57.041110 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.042134 kubelet[3377]: E0805 22:13:57.041901 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.042134 kubelet[3377]: W0805 22:13:57.041917 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.042134 kubelet[3377]: E0805 22:13:57.041933 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043137 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.043805 kubelet[3377]: W0805 22:13:57.043152 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043170 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043441 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.043805 kubelet[3377]: W0805 22:13:57.043451 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043558 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043767 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.043805 kubelet[3377]: W0805 22:13:57.043777 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.043805 kubelet[3377]: E0805 22:13:57.043792 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.044794 kubelet[3377]: E0805 22:13:57.044172 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.044794 kubelet[3377]: W0805 22:13:57.044184 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.044794 kubelet[3377]: E0805 22:13:57.044201 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:57.045229 kubelet[3377]: E0805 22:13:57.045041 3377 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:57.045229 kubelet[3377]: W0805 22:13:57.045056 3377 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:57.045229 kubelet[3377]: E0805 22:13:57.045197 3377 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:57.701056 containerd[1968]: time="2024-08-05T22:13:57.700058190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:57.703536 containerd[1968]: time="2024-08-05T22:13:57.703376372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:13:57.706228 containerd[1968]: time="2024-08-05T22:13:57.706167617Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:57.725911 containerd[1968]: time="2024-08-05T22:13:57.722979531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:57.725911 containerd[1968]: time="2024-08-05T22:13:57.723762316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.421855429s" Aug 5 22:13:57.725911 containerd[1968]: time="2024-08-05T22:13:57.723804028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:13:57.727131 containerd[1968]: time="2024-08-05T22:13:57.727058355Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:13:57.757850 containerd[1968]: time="2024-08-05T22:13:57.757803569Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175\"" Aug 5 22:13:57.758976 containerd[1968]: time="2024-08-05T22:13:57.758945619Z" level=info msg="StartContainer for \"4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175\"" Aug 5 22:13:57.851113 systemd[1]: Started cri-containerd-4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175.scope - libcontainer container 4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175. Aug 5 22:13:57.937399 containerd[1968]: time="2024-08-05T22:13:57.936316052Z" level=info msg="StartContainer for \"4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175\" returns successfully" Aug 5 22:13:57.961870 systemd[1]: cri-containerd-4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175.scope: Deactivated successfully. Aug 5 22:13:57.996598 kubelet[3377]: I0805 22:13:57.996346 3377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:13:58.033362 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175-rootfs.mount: Deactivated successfully. 
Aug 5 22:13:58.047872 kubelet[3377]: I0805 22:13:58.046902 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7c8dbfdf78-8m8gj" podStartSLOduration=2.945030795 podStartE2EDuration="6.046845229s" podCreationTimestamp="2024-08-05 22:13:52 +0000 UTC" firstStartedPulling="2024-08-05 22:13:53.195325737 +0000 UTC m=+19.615018994" lastFinishedPulling="2024-08-05 22:13:56.297140169 +0000 UTC m=+22.716833428" observedRunningTime="2024-08-05 22:13:57.042314251 +0000 UTC m=+23.462007527" watchObservedRunningTime="2024-08-05 22:13:58.046845229 +0000 UTC m=+24.466538503" Aug 5 22:13:58.324179 containerd[1968]: time="2024-08-05T22:13:58.272365282Z" level=info msg="shim disconnected" id=4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175 namespace=k8s.io Aug 5 22:13:58.324179 containerd[1968]: time="2024-08-05T22:13:58.323898438Z" level=warning msg="cleaning up after shim disconnected" id=4fba59e231caf21f871f71e3eaa051d607addb7122074fff5e268f551b456175 namespace=k8s.io Aug 5 22:13:58.324179 containerd[1968]: time="2024-08-05T22:13:58.323923686Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:13:58.360229 containerd[1968]: time="2024-08-05T22:13:58.359367277Z" level=warning msg="cleanup warnings time=\"2024-08-05T22:13:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 22:13:58.829583 kubelet[3377]: E0805 22:13:58.829538 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:13:59.008788 containerd[1968]: time="2024-08-05T22:13:59.007975539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:14:00.828801 kubelet[3377]: E0805 22:14:00.828755 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:02.830957 kubelet[3377]: E0805 22:14:02.830890 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:04.838255 kubelet[3377]: E0805 22:14:04.838161 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:05.944803 containerd[1968]: time="2024-08-05T22:14:05.944748755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:05.947229 containerd[1968]: time="2024-08-05T22:14:05.947097235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes 
read=93087850" Aug 5 22:14:05.950573 containerd[1968]: time="2024-08-05T22:14:05.950513632Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:05.954989 containerd[1968]: time="2024-08-05T22:14:05.954787040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:05.956556 containerd[1968]: time="2024-08-05T22:14:05.956480070Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.948451523s" Aug 5 22:14:05.956781 containerd[1968]: time="2024-08-05T22:14:05.956684959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:14:05.959755 containerd[1968]: time="2024-08-05T22:14:05.958955656Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:14:05.983562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060266117.mount: Deactivated successfully. Aug 5 22:14:05.989623 containerd[1968]: time="2024-08-05T22:14:05.989572465Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce\"" Aug 5 22:14:05.990521 containerd[1968]: time="2024-08-05T22:14:05.990280623Z" level=info msg="StartContainer for \"b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce\"" Aug 5 22:14:06.093262 systemd[1]: Started cri-containerd-b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce.scope - libcontainer container b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce. Aug 5 22:14:06.192923 containerd[1968]: time="2024-08-05T22:14:06.192842622Z" level=info msg="StartContainer for \"b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce\" returns successfully" Aug 5 22:14:06.829770 kubelet[3377]: E0805 22:14:06.829461 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:07.272200 kubelet[3377]: I0805 22:14:07.271219 3377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:14:07.458187 systemd[1]: cri-containerd-b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce.scope: Deactivated successfully. 
Aug 5 22:14:07.518744 kubelet[3377]: I0805 22:14:07.518003 3377 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 22:14:07.544629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce-rootfs.mount: Deactivated successfully. Aug 5 22:14:07.563426 containerd[1968]: time="2024-08-05T22:14:07.563346330Z" level=info msg="shim disconnected" id=b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce namespace=k8s.io Aug 5 22:14:07.563426 containerd[1968]: time="2024-08-05T22:14:07.563414258Z" level=warning msg="cleaning up after shim disconnected" id=b71b628f36517eff8fd30db771b0da00f93b391b748a9399688ca86d69beccce namespace=k8s.io Aug 5 22:14:07.563426 containerd[1968]: time="2024-08-05T22:14:07.563427859Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:14:07.583335 kubelet[3377]: I0805 22:14:07.582603 3377 topology_manager.go:215] "Topology Admit Handler" podUID="a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5" podNamespace="kube-system" podName="coredns-76f75df574-dqrz8" Aug 5 22:14:07.604511 kubelet[3377]: I0805 22:14:07.603306 3377 topology_manager.go:215] "Topology Admit Handler" podUID="c8231fff-c2fe-47d9-9a81-5eef98071a14" podNamespace="calico-system" podName="calico-kube-controllers-85c545f87d-qxp9x" Aug 5 22:14:07.607586 systemd[1]: Created slice kubepods-burstable-poda8535b5a_2ddc_41b7_b6d1_d9ec3a16dec5.slice - libcontainer container kubepods-burstable-poda8535b5a_2ddc_41b7_b6d1_d9ec3a16dec5.slice. Aug 5 22:14:07.621744 kubelet[3377]: I0805 22:14:07.621327 3377 topology_manager.go:215] "Topology Admit Handler" podUID="5acb70e9-e1b2-4039-8e86-bf988a415a12" podNamespace="kube-system" podName="coredns-76f75df574-t59f8" Aug 5 22:14:07.636210 systemd[1]: Created slice kubepods-besteffort-podc8231fff_c2fe_47d9_9a81_5eef98071a14.slice - libcontainer container kubepods-besteffort-podc8231fff_c2fe_47d9_9a81_5eef98071a14.slice. Aug 5 22:14:07.653572 systemd[1]: Created slice kubepods-burstable-pod5acb70e9_e1b2_4039_8e86_bf988a415a12.slice - libcontainer container kubepods-burstable-pod5acb70e9_e1b2_4039_8e86_bf988a415a12.slice. 
Aug 5 22:14:07.675757 kubelet[3377]: I0805 22:14:07.675527 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf2rk\" (UniqueName: \"kubernetes.io/projected/a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5-kube-api-access-cf2rk\") pod \"coredns-76f75df574-dqrz8\" (UID: \"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5\") " pod="kube-system/coredns-76f75df574-dqrz8" Aug 5 22:14:07.678842 kubelet[3377]: I0805 22:14:07.678817 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5-config-volume\") pod \"coredns-76f75df574-dqrz8\" (UID: \"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5\") " pod="kube-system/coredns-76f75df574-dqrz8" Aug 5 22:14:07.780858 kubelet[3377]: I0805 22:14:07.780807 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vst\" (UniqueName: \"kubernetes.io/projected/5acb70e9-e1b2-4039-8e86-bf988a415a12-kube-api-access-c7vst\") pod \"coredns-76f75df574-t59f8\" (UID: \"5acb70e9-e1b2-4039-8e86-bf988a415a12\") " pod="kube-system/coredns-76f75df574-t59f8" Aug 5 22:14:07.781027 kubelet[3377]: I0805 22:14:07.780876 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c8231fff-c2fe-47d9-9a81-5eef98071a14-tigera-ca-bundle\") pod \"calico-kube-controllers-85c545f87d-qxp9x\" (UID: \"c8231fff-c2fe-47d9-9a81-5eef98071a14\") " pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" Aug 5 22:14:07.781027 kubelet[3377]: I0805 22:14:07.780905 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5acb70e9-e1b2-4039-8e86-bf988a415a12-config-volume\") pod \"coredns-76f75df574-t59f8\" (UID: \"5acb70e9-e1b2-4039-8e86-bf988a415a12\") " pod="kube-system/coredns-76f75df574-t59f8" Aug 5 22:14:07.781027 kubelet[3377]: I0805 22:14:07.780960 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kb2s\" (UniqueName: \"kubernetes.io/projected/c8231fff-c2fe-47d9-9a81-5eef98071a14-kube-api-access-8kb2s\") pod \"calico-kube-controllers-85c545f87d-qxp9x\" (UID: \"c8231fff-c2fe-47d9-9a81-5eef98071a14\") " pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" Aug 5 22:14:07.925844 containerd[1968]: time="2024-08-05T22:14:07.925793811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqrz8,Uid:a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5,Namespace:kube-system,Attempt:0,}" Aug 5 22:14:07.946704 containerd[1968]: time="2024-08-05T22:14:07.946620529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c545f87d-qxp9x,Uid:c8231fff-c2fe-47d9-9a81-5eef98071a14,Namespace:calico-system,Attempt:0,}" Aug 5 22:14:07.961965 containerd[1968]: time="2024-08-05T22:14:07.960753474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t59f8,Uid:5acb70e9-e1b2-4039-8e86-bf988a415a12,Namespace:kube-system,Attempt:0,}" Aug 5 22:14:08.183896 containerd[1968]: time="2024-08-05T22:14:08.182748745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:14:08.431418 containerd[1968]: time="2024-08-05T22:14:08.430059149Z" level=error msg="Failed to destroy network for sandbox 
\"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.457955 containerd[1968]: time="2024-08-05T22:14:08.457202058Z" level=error msg="encountered an error cleaning up failed sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.457955 containerd[1968]: time="2024-08-05T22:14:08.457410230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqrz8,Uid:a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.463804 containerd[1968]: time="2024-08-05T22:14:08.463214174Z" level=error msg="Failed to destroy network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.463804 containerd[1968]: time="2024-08-05T22:14:08.463670566Z" level=error msg="encountered an error cleaning up failed sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.463804 containerd[1968]: time="2024-08-05T22:14:08.463742066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c545f87d-qxp9x,Uid:c8231fff-c2fe-47d9-9a81-5eef98071a14,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.464371 kubelet[3377]: E0805 22:14:08.464341 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.464764 kubelet[3377]: E0805 22:14:08.464429 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-dqrz8" Aug 5 22:14:08.464764 kubelet[3377]: E0805 22:14:08.464462 3377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-dqrz8" Aug 5 22:14:08.464764 kubelet[3377]: E0805 22:14:08.464537 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-dqrz8_kube-system(a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-dqrz8_kube-system(a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqrz8" podUID="a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5" Aug 5 22:14:08.467191 kubelet[3377]: E0805 22:14:08.465498 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.467191 kubelet[3377]: E0805 22:14:08.465602 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" Aug 5 22:14:08.467191 kubelet[3377]: E0805 22:14:08.465629 3377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" Aug 5 22:14:08.468182 containerd[1968]: time="2024-08-05T22:14:08.465283337Z" level=error msg="Failed to destroy network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.468182 containerd[1968]: time="2024-08-05T22:14:08.466419497Z" level=error msg="encountered an error cleaning up failed sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.468182 containerd[1968]: time="2024-08-05T22:14:08.466607078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t59f8,Uid:5acb70e9-e1b2-4039-8e86-bf988a415a12,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.468703 kubelet[3377]: E0805 22:14:08.465700 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85c545f87d-qxp9x_calico-system(c8231fff-c2fe-47d9-9a81-5eef98071a14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85c545f87d-qxp9x_calico-system(c8231fff-c2fe-47d9-9a81-5eef98071a14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" podUID="c8231fff-c2fe-47d9-9a81-5eef98071a14" Aug 5 22:14:08.468703 kubelet[3377]: E0805 22:14:08.467367 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.468703 kubelet[3377]: E0805 22:14:08.467418 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t59f8" Aug 5 22:14:08.468912 kubelet[3377]: E0805 22:14:08.467448 3377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-t59f8" Aug 5 22:14:08.468912 kubelet[3377]: E0805 22:14:08.467515 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-t59f8_kube-system(5acb70e9-e1b2-4039-8e86-bf988a415a12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-t59f8_kube-system(5acb70e9-e1b2-4039-8e86-bf988a415a12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t59f8" podUID="5acb70e9-e1b2-4039-8e86-bf988a415a12" Aug 5 22:14:08.545609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2-shm.mount: Deactivated successfully. Aug 5 22:14:08.845746 systemd[1]: Created slice kubepods-besteffort-pod3970d43f_5b24_4d33_8e29_668cdfe6b128.slice - libcontainer container kubepods-besteffort-pod3970d43f_5b24_4d33_8e29_668cdfe6b128.slice. Aug 5 22:14:08.853164 containerd[1968]: time="2024-08-05T22:14:08.853120682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxt6z,Uid:3970d43f-5b24-4d33-8e29-668cdfe6b128,Namespace:calico-system,Attempt:0,}" Aug 5 22:14:08.948743 containerd[1968]: time="2024-08-05T22:14:08.948693348Z" level=error msg="Failed to destroy network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.951560 containerd[1968]: time="2024-08-05T22:14:08.951492254Z" level=error msg="encountered an error cleaning up failed sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.952055 containerd[1968]: time="2024-08-05T22:14:08.951583402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxt6z,Uid:3970d43f-5b24-4d33-8e29-668cdfe6b128,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.953404 kubelet[3377]: E0805 22:14:08.951994 3377 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:08.953404 kubelet[3377]: E0805 22:14:08.952054 3377 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:14:08.953404 kubelet[3377]: E0805 22:14:08.952111 3377 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-wxt6z" Aug 5 22:14:08.953013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496-shm.mount: Deactivated successfully. Aug 5 22:14:08.953649 kubelet[3377]: E0805 22:14:08.952188 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wxt6z_calico-system(3970d43f-5b24-4d33-8e29-668cdfe6b128)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wxt6z_calico-system(3970d43f-5b24-4d33-8e29-668cdfe6b128)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:09.177629 kubelet[3377]: I0805 22:14:09.177514 3377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:09.181361 kubelet[3377]: I0805 22:14:09.181328 3377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:09.200182 kubelet[3377]: I0805 22:14:09.199540 3377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:09.203564 kubelet[3377]: I0805 22:14:09.203519 3377 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:09.212384 containerd[1968]: time="2024-08-05T22:14:09.212090637Z" level=info msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" Aug 5 22:14:09.214205 containerd[1968]: time="2024-08-05T22:14:09.213127244Z" level=info msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" Aug 5 22:14:09.217302 containerd[1968]: time="2024-08-05T22:14:09.217242088Z" level=info msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" Aug 5 22:14:09.217740 containerd[1968]: time="2024-08-05T22:14:09.217713963Z" level=info msg="Ensure that sandbox 5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496 in task-service has been cleanup successfully" Aug 5 22:14:09.218214 containerd[1968]: time="2024-08-05T22:14:09.218184443Z" level=info msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" Aug 5 22:14:09.218417 containerd[1968]: time="2024-08-05T22:14:09.218387335Z" level=info msg="Ensure that sandbox 6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c in task-service has been cleanup successfully" Aug 5 22:14:09.218866 containerd[1968]: time="2024-08-05T22:14:09.218832899Z" level=info msg="Ensure that sandbox a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2 in task-service has been cleanup successfully" Aug 5 22:14:09.219234 containerd[1968]: time="2024-08-05T22:14:09.219208719Z" level=info msg="Ensure that sandbox 7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2 in task-service has been cleanup successfully" Aug 5 22:14:09.312574 containerd[1968]: 
time="2024-08-05T22:14:09.312110893Z" level=error msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" failed" error="failed to destroy network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:09.313861 kubelet[3377]: E0805 22:14:09.313199 3377 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:09.313861 kubelet[3377]: E0805 22:14:09.313302 3377 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c"} Aug 5 22:14:09.313861 kubelet[3377]: E0805 22:14:09.313432 3377 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8231fff-c2fe-47d9-9a81-5eef98071a14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:09.313861 kubelet[3377]: E0805 22:14:09.313508 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8231fff-c2fe-47d9-9a81-5eef98071a14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" podUID="c8231fff-c2fe-47d9-9a81-5eef98071a14" Aug 5 22:14:09.343227 containerd[1968]: time="2024-08-05T22:14:09.343173861Z" level=error msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" failed" error="failed to destroy network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:09.343803 kubelet[3377]: E0805 22:14:09.343769 3377 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:09.343970 kubelet[3377]: E0805 22:14:09.343958 3377 kuberuntime_manager.go:1381] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2"} Aug 5 22:14:09.344206 kubelet[3377]: E0805 22:14:09.344137 3377 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:09.344521 kubelet[3377]: E0805 22:14:09.344401 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-dqrz8" podUID="a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5" Aug 5 22:14:09.347335 containerd[1968]: time="2024-08-05T22:14:09.346974484Z" level=error msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" failed" error="failed to destroy network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:09.347335 containerd[1968]: time="2024-08-05T22:14:09.347254551Z" level=error msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" failed" error="failed to destroy network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:14:09.347539 kubelet[3377]: E0805 22:14:09.347352 3377 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:09.347539 kubelet[3377]: E0805 22:14:09.347401 3377 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2"} Aug 5 22:14:09.347539 kubelet[3377]: E0805 22:14:09.347448 3377 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5acb70e9-e1b2-4039-8e86-bf988a415a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:09.347539 kubelet[3377]: E0805 22:14:09.347486 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5acb70e9-e1b2-4039-8e86-bf988a415a12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-t59f8" podUID="5acb70e9-e1b2-4039-8e86-bf988a415a12" Aug 5 22:14:09.347780 kubelet[3377]: E0805 22:14:09.347575 3377 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:09.347780 kubelet[3377]: E0805 22:14:09.347594 3377 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496"} Aug 5 22:14:09.347780 kubelet[3377]: E0805 22:14:09.347634 3377 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3970d43f-5b24-4d33-8e29-668cdfe6b128\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:14:09.347780 kubelet[3377]: E0805 22:14:09.347666 3377 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3970d43f-5b24-4d33-8e29-668cdfe6b128\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wxt6z" podUID="3970d43f-5b24-4d33-8e29-668cdfe6b128" Aug 5 22:14:15.530784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251118874.mount: Deactivated successfully. 
Aug 5 22:14:15.586226 containerd[1968]: time="2024-08-05T22:14:15.586175385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:15.588015 containerd[1968]: time="2024-08-05T22:14:15.587918490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:14:15.589603 containerd[1968]: time="2024-08-05T22:14:15.589343688Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:15.593345 containerd[1968]: time="2024-08-05T22:14:15.593303176Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:15.594511 containerd[1968]: time="2024-08-05T22:14:15.594457237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.409141536s" Aug 5 22:14:15.594607 containerd[1968]: time="2024-08-05T22:14:15.594518783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:14:15.657414 containerd[1968]: time="2024-08-05T22:14:15.657368897Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:14:15.684920 containerd[1968]: time="2024-08-05T22:14:15.684864839Z" level=info msg="CreateContainer within sandbox \"6d3549d03f5e2885305685333449581dc8627237d4ff4f0142fad56940398dc5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740\"" Aug 5 22:14:15.685783 containerd[1968]: time="2024-08-05T22:14:15.685738278Z" level=info msg="StartContainer for \"23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740\"" Aug 5 22:14:15.748374 systemd[1]: Started cri-containerd-23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740.scope - libcontainer container 23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740. Aug 5 22:14:15.883909 containerd[1968]: time="2024-08-05T22:14:15.883233388Z" level=info msg="StartContainer for \"23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740\" returns successfully" Aug 5 22:14:16.057638 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:14:16.060381 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 22:14:16.306172 kubelet[3377]: I0805 22:14:16.305739 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-7fwpf" podStartSLOduration=2.162256581 podStartE2EDuration="24.275140675s" podCreationTimestamp="2024-08-05 22:13:52 +0000 UTC" firstStartedPulling="2024-08-05 22:13:53.483402264 +0000 UTC m=+19.903095527" lastFinishedPulling="2024-08-05 22:14:15.596286355 +0000 UTC m=+42.015979621" observedRunningTime="2024-08-05 22:14:16.269801487 +0000 UTC m=+42.689494758" watchObservedRunningTime="2024-08-05 22:14:16.275140675 +0000 UTC m=+42.694833948" Aug 5 22:14:17.245244 kubelet[3377]: I0805 22:14:17.245207 3377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:14:18.496787 systemd-networkd[1809]: vxlan.calico: Link UP Aug 5 22:14:18.499119 systemd-networkd[1809]: vxlan.calico: Gained carrier Aug 5 22:14:18.501763 (udev-worker)[4341]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:18.535048 (udev-worker)[4521]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:18.536349 (udev-worker)[4523]: Network interface NamePolicy= disabled on kernel command line. Aug 5 22:14:19.016654 kubelet[3377]: I0805 22:14:19.016612 3377 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:14:19.227780 systemd[1]: run-containerd-runc-k8s.io-23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740-runc.RsIwFS.mount: Deactivated successfully. Aug 5 22:14:20.524248 systemd-networkd[1809]: vxlan.calico: Gained IPv6LL Aug 5 22:14:20.832288 containerd[1968]: time="2024-08-05T22:14:20.832098419Z" level=info msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.924 [INFO][4622] k8s.go 608: Cleaning up netns ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.938 [INFO][4622] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" iface="eth0" netns="/var/run/netns/cni-0935c1d4-c504-f44d-8206-c8bf50f9d31e" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.943 [INFO][4622] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" iface="eth0" netns="/var/run/netns/cni-0935c1d4-c504-f44d-8206-c8bf50f9d31e" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.943 [INFO][4622] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" iface="eth0" netns="/var/run/netns/cni-0935c1d4-c504-f44d-8206-c8bf50f9d31e" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.943 [INFO][4622] k8s.go 615: Releasing IP address(es) ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:20.943 [INFO][4622] utils.go 188: Calico CNI releasing IP address ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.127 [INFO][4629] ipam_plugin.go 411: Releasing address using handleID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.129 [INFO][4629] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.129 [INFO][4629] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.173 [WARNING][4629] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.173 [INFO][4629] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.182 [INFO][4629] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:21.192883 containerd[1968]: 2024-08-05 22:14:21.187 [INFO][4622] k8s.go 621: Teardown processing complete. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:21.197986 containerd[1968]: time="2024-08-05T22:14:21.194798628Z" level=info msg="TearDown network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" successfully" Aug 5 22:14:21.197986 containerd[1968]: time="2024-08-05T22:14:21.194859192Z" level=info msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" returns successfully" Aug 5 22:14:21.206048 systemd[1]: run-netns-cni\x2d0935c1d4\x2dc504\x2df44d\x2d8206\x2dc8bf50f9d31e.mount: Deactivated successfully. 
Aug 5 22:14:21.239196 containerd[1968]: time="2024-08-05T22:14:21.239140539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c545f87d-qxp9x,Uid:c8231fff-c2fe-47d9-9a81-5eef98071a14,Namespace:calico-system,Attempt:1,}" Aug 5 22:14:21.528346 systemd-networkd[1809]: calie47ae8dd660: Link UP Aug 5 22:14:21.528601 systemd-networkd[1809]: calie47ae8dd660: Gained carrier Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.352 [INFO][4636] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0 calico-kube-controllers-85c545f87d- calico-system c8231fff-c2fe-47d9-9a81-5eef98071a14 720 0 2024-08-05 22:13:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85c545f87d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-76 calico-kube-controllers-85c545f87d-qxp9x eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie47ae8dd660 [] []}} ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.353 [INFO][4636] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.434 [INFO][4648] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" HandleID="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.462 [INFO][4648] ipam_plugin.go 264: Auto assigning IP ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" HandleID="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f50f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-76", "pod":"calico-kube-controllers-85c545f87d-qxp9x", "timestamp":"2024-08-05 22:14:21.434711574 +0000 UTC"}, Hostname:"ip-172-31-23-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.462 [INFO][4648] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.462 [INFO][4648] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.462 [INFO][4648] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-76' Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.464 [INFO][4648] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.476 [INFO][4648] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.487 [INFO][4648] ipam.go 489: Trying affinity for 192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.492 [INFO][4648] ipam.go 155: Attempting to load block cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.497 [INFO][4648] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.497 [INFO][4648] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.499 [INFO][4648] ipam.go 1685: Creating new handle: k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666 Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.509 [INFO][4648] ipam.go 1203: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.515 [INFO][4648] ipam.go 1216: Successfully claimed IPs: [192.168.43.65/26] block=192.168.43.64/26 handle="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.515 [INFO][4648] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.65/26] handle="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" host="ip-172-31-23-76" Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.517 [INFO][4648] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:14:21.558387 containerd[1968]: 2024-08-05 22:14:21.517 [INFO][4648] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.43.65/26] IPv6=[] ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" HandleID="k8s-pod-network.9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.523 [INFO][4636] k8s.go 386: Populated endpoint ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0", GenerateName:"calico-kube-controllers-85c545f87d-", Namespace:"calico-system", SelfLink:"", UID:"c8231fff-c2fe-47d9-9a81-5eef98071a14", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c545f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"", Pod:"calico-kube-controllers-85c545f87d-qxp9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie47ae8dd660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.523 [INFO][4636] k8s.go 387: Calico CNI using IPs: [192.168.43.65/32] ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.523 [INFO][4636] dataplane_linux.go 68: Setting the host side veth name to calie47ae8dd660 ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.532 [INFO][4636] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.533 [INFO][4636] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0", GenerateName:"calico-kube-controllers-85c545f87d-", Namespace:"calico-system", SelfLink:"", UID:"c8231fff-c2fe-47d9-9a81-5eef98071a14", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c545f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666", Pod:"calico-kube-controllers-85c545f87d-qxp9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie47ae8dd660", MAC:"7e:3e:03:70:b8:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:21.560465 containerd[1968]: 2024-08-05 22:14:21.552 [INFO][4636] k8s.go 500: Wrote updated endpoint to datastore ContainerID="9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666" Namespace="calico-system" Pod="calico-kube-controllers-85c545f87d-qxp9x" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:21.636950 containerd[1968]: time="2024-08-05T22:14:21.636733404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:21.636950 containerd[1968]: time="2024-08-05T22:14:21.636791794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:21.636950 containerd[1968]: time="2024-08-05T22:14:21.636824751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:21.636950 containerd[1968]: time="2024-08-05T22:14:21.636838990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:21.679913 systemd[1]: Started cri-containerd-9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666.scope - libcontainer container 9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666. 
Aug 5 22:14:21.762259 containerd[1968]: time="2024-08-05T22:14:21.762201624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85c545f87d-qxp9x,Uid:c8231fff-c2fe-47d9-9a81-5eef98071a14,Namespace:calico-system,Attempt:1,} returns sandbox id \"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666\"" Aug 5 22:14:21.765849 containerd[1968]: time="2024-08-05T22:14:21.765026921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:14:22.848685 containerd[1968]: time="2024-08-05T22:14:22.848617431Z" level=info msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.008 [INFO][4724] k8s.go 608: Cleaning up netns ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.009 [INFO][4724] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" iface="eth0" netns="/var/run/netns/cni-ec7d6de7-c4f7-5d0a-dfdc-80776c22de3b" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.016 [INFO][4724] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" iface="eth0" netns="/var/run/netns/cni-ec7d6de7-c4f7-5d0a-dfdc-80776c22de3b" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.020 [INFO][4724] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" iface="eth0" netns="/var/run/netns/cni-ec7d6de7-c4f7-5d0a-dfdc-80776c22de3b" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.020 [INFO][4724] k8s.go 615: Releasing IP address(es) ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.020 [INFO][4724] utils.go 188: Calico CNI releasing IP address ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.108 [INFO][4730] ipam_plugin.go 411: Releasing address using handleID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.108 [INFO][4730] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.108 [INFO][4730] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.130 [WARNING][4730] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.131 [INFO][4730] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.135 [INFO][4730] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:23.148394 containerd[1968]: 2024-08-05 22:14:23.142 [INFO][4724] k8s.go 621: Teardown processing complete. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:23.152395 containerd[1968]: time="2024-08-05T22:14:23.151231887Z" level=info msg="TearDown network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" successfully" Aug 5 22:14:23.152395 containerd[1968]: time="2024-08-05T22:14:23.151273518Z" level=info msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" returns successfully" Aug 5 22:14:23.154256 containerd[1968]: time="2024-08-05T22:14:23.152797710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqrz8,Uid:a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:23.158379 systemd[1]: run-netns-cni\x2dec7d6de7\x2dc4f7\x2d5d0a\x2ddfdc\x2d80776c22de3b.mount: Deactivated successfully. Aug 5 22:14:23.220576 systemd-networkd[1809]: calie47ae8dd660: Gained IPv6LL Aug 5 22:14:23.264933 systemd[1]: Started sshd@7-172.31.23.76:22-139.178.89.65:54338.service - OpenSSH per-connection server daemon (139.178.89.65:54338). Aug 5 22:14:23.538346 sshd[4745]: Accepted publickey for core from 139.178.89.65 port 54338 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:23.544007 sshd[4745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:23.555367 systemd-logind[1945]: New session 8 of user core. Aug 5 22:14:23.561546 systemd[1]: Started session-8.scope - Session 8 of User core. 
Aug 5 22:14:23.716604 systemd-networkd[1809]: cali7bc7a158465: Link UP Aug 5 22:14:23.740126 systemd-networkd[1809]: cali7bc7a158465: Gained carrier Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.377 [INFO][4736] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0 coredns-76f75df574- kube-system a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5 761 0 2024-08-05 22:13:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-76 coredns-76f75df574-dqrz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bc7a158465 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.377 [INFO][4736] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.464 [INFO][4750] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" HandleID="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.495 [INFO][4750] ipam_plugin.go 264: Auto assigning IP ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" HandleID="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a1130), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-76", "pod":"coredns-76f75df574-dqrz8", "timestamp":"2024-08-05 22:14:23.464831154 +0000 UTC"}, Hostname:"ip-172-31-23-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.495 [INFO][4750] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.495 [INFO][4750] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.496 [INFO][4750] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-76' Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.505 [INFO][4750] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.521 [INFO][4750] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.540 [INFO][4750] ipam.go 489: Trying affinity for 192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.558 [INFO][4750] ipam.go 155: Attempting to load block cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.572 [INFO][4750] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.572 [INFO][4750] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.596 [INFO][4750] ipam.go 1685: Creating new handle: k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.650 [INFO][4750] ipam.go 1203: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.677 [INFO][4750] ipam.go 1216: Successfully claimed IPs: [192.168.43.66/26] block=192.168.43.64/26 handle="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.678 [INFO][4750] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.66/26] handle="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" host="ip-172-31-23-76" Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.682 [INFO][4750] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:14:23.811862 containerd[1968]: 2024-08-05 22:14:23.682 [INFO][4750] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.43.66/26] IPv6=[] ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" HandleID="k8s-pod-network.f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 22:14:23.697 [INFO][4736] k8s.go 386: Populated endpoint ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"", Pod:"coredns-76f75df574-dqrz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc7a158465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 22:14:23.698 [INFO][4736] k8s.go 387: Calico CNI using IPs: [192.168.43.66/32] ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 22:14:23.698 [INFO][4736] dataplane_linux.go 68: Setting the host side veth name to cali7bc7a158465 ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 22:14:23.739 [INFO][4736] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 
22:14:23.759 [INFO][4736] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d", Pod:"coredns-76f75df574-dqrz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc7a158465", MAC:"f6:a6:5e:28:82:3e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:23.815507 containerd[1968]: 2024-08-05 22:14:23.798 [INFO][4736] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d" Namespace="kube-system" Pod="coredns-76f75df574-dqrz8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:23.915605 containerd[1968]: time="2024-08-05T22:14:23.915280925Z" level=info msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" Aug 5 22:14:24.042935 containerd[1968]: time="2024-08-05T22:14:24.042231473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:24.042935 containerd[1968]: time="2024-08-05T22:14:24.042379771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:24.042935 containerd[1968]: time="2024-08-05T22:14:24.042410863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:24.042935 containerd[1968]: time="2024-08-05T22:14:24.042453099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:24.200423 sshd[4745]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:24.229262 systemd[1]: sshd@7-172.31.23.76:22-139.178.89.65:54338.service: Deactivated successfully. Aug 5 22:14:24.239677 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:14:24.247701 systemd-logind[1945]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:14:24.263979 systemd-logind[1945]: Removed session 8. Aug 5 22:14:24.318566 systemd[1]: Started cri-containerd-f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d.scope - libcontainer container f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d. Aug 5 22:14:24.588104 containerd[1968]: time="2024-08-05T22:14:24.587653805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-dqrz8,Uid:a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5,Namespace:kube-system,Attempt:1,} returns sandbox id \"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d\"" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.381 [INFO][4809] k8s.go 608: Cleaning up netns ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.385 [INFO][4809] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" iface="eth0" netns="/var/run/netns/cni-aeb625a0-ff58-d512-ae5a-b6c8f3667071" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.386 [INFO][4809] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" iface="eth0" netns="/var/run/netns/cni-aeb625a0-ff58-d512-ae5a-b6c8f3667071" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.386 [INFO][4809] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" iface="eth0" netns="/var/run/netns/cni-aeb625a0-ff58-d512-ae5a-b6c8f3667071" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.386 [INFO][4809] k8s.go 615: Releasing IP address(es) ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.386 [INFO][4809] utils.go 188: Calico CNI releasing IP address ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.503 [INFO][4852] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.503 [INFO][4852] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.504 [INFO][4852] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.546 [WARNING][4852] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.546 [INFO][4852] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.571 [INFO][4852] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:24.592919 containerd[1968]: 2024-08-05 22:14:24.578 [INFO][4809] k8s.go 621: Teardown processing complete. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:24.599752 containerd[1968]: time="2024-08-05T22:14:24.593118031Z" level=info msg="TearDown network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" successfully" Aug 5 22:14:24.599752 containerd[1968]: time="2024-08-05T22:14:24.593190683Z" level=info msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" returns successfully" Aug 5 22:14:24.605890 containerd[1968]: time="2024-08-05T22:14:24.605839583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t59f8,Uid:5acb70e9-e1b2-4039-8e86-bf988a415a12,Namespace:kube-system,Attempt:1,}" Aug 5 22:14:24.606656 systemd[1]: run-netns-cni\x2daeb625a0\x2dff58\x2dd512\x2dae5a\x2db6c8f3667071.mount: Deactivated successfully. Aug 5 22:14:24.654475 containerd[1968]: time="2024-08-05T22:14:24.618060791Z" level=info msg="CreateContainer within sandbox \"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:14:24.747060 containerd[1968]: time="2024-08-05T22:14:24.747016216Z" level=info msg="CreateContainer within sandbox \"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc8ee536a77d58b2e9bdeb5ca9bc3186de8e8f52b5977c26b2fa57769a60d290\"" Aug 5 22:14:24.749121 containerd[1968]: time="2024-08-05T22:14:24.748753506Z" level=info msg="StartContainer for \"fc8ee536a77d58b2e9bdeb5ca9bc3186de8e8f52b5977c26b2fa57769a60d290\"" Aug 5 22:14:24.829894 containerd[1968]: time="2024-08-05T22:14:24.829849598Z" level=info msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" Aug 5 22:14:24.890307 systemd[1]: Started cri-containerd-fc8ee536a77d58b2e9bdeb5ca9bc3186de8e8f52b5977c26b2fa57769a60d290.scope - libcontainer container fc8ee536a77d58b2e9bdeb5ca9bc3186de8e8f52b5977c26b2fa57769a60d290. Aug 5 22:14:24.941163 systemd-networkd[1809]: cali7bc7a158465: Gained IPv6LL Aug 5 22:14:25.122513 containerd[1968]: time="2024-08-05T22:14:25.122462917Z" level=info msg="StartContainer for \"fc8ee536a77d58b2e9bdeb5ca9bc3186de8e8f52b5977c26b2fa57769a60d290\" returns successfully" Aug 5 22:14:25.305626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount614301941.mount: Deactivated successfully. 
Aug 5 22:14:25.354232 systemd-networkd[1809]: cali9703fc35f53: Link UP Aug 5 22:14:25.354550 systemd-networkd[1809]: cali9703fc35f53: Gained carrier Aug 5 22:14:25.416094 kubelet[3377]: I0805 22:14:25.415509 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-dqrz8" podStartSLOduration=40.415448399 podStartE2EDuration="40.415448399s" podCreationTimestamp="2024-08-05 22:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:25.394382928 +0000 UTC m=+51.814076199" watchObservedRunningTime="2024-08-05 22:14:25.415448399 +0000 UTC m=+51.835141673" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:24.893 [INFO][4865] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0 coredns-76f75df574- kube-system 5acb70e9-e1b2-4039-8e86-bf988a415a12 775 0 2024-08-05 22:13:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-76 coredns-76f75df574-t59f8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9703fc35f53 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:24.893 [INFO][4865] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.171 [INFO][4922] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" HandleID="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.204 [INFO][4922] ipam_plugin.go 264: Auto assigning IP ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" HandleID="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b23b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-76", "pod":"coredns-76f75df574-t59f8", "timestamp":"2024-08-05 22:14:25.171267018 +0000 UTC"}, Hostname:"ip-172-31-23-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.205 [INFO][4922] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.205 [INFO][4922] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.205 [INFO][4922] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-76' Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.219 [INFO][4922] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.241 [INFO][4922] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.261 [INFO][4922] ipam.go 489: Trying affinity for 192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.274 [INFO][4922] ipam.go 155: Attempting to load block cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.283 [INFO][4922] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.283 [INFO][4922] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.289 [INFO][4922] ipam.go 1685: Creating new handle: k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.308 [INFO][4922] ipam.go 1203: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.320 [INFO][4922] ipam.go 1216: Successfully claimed IPs: [192.168.43.67/26] block=192.168.43.64/26 handle="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.320 [INFO][4922] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.67/26] handle="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" host="ip-172-31-23-76" Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.326 [INFO][4922] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:14:25.429931 containerd[1968]: 2024-08-05 22:14:25.327 [INFO][4922] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.43.67/26] IPv6=[] ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" HandleID="k8s-pod-network.f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 22:14:25.338 [INFO][4865] k8s.go 386: Populated endpoint ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5acb70e9-e1b2-4039-8e86-bf988a415a12", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"", Pod:"coredns-76f75df574-t59f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9703fc35f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 22:14:25.338 [INFO][4865] k8s.go 387: Calico CNI using IPs: [192.168.43.67/32] ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 22:14:25.338 [INFO][4865] dataplane_linux.go 68: Setting the host side veth name to cali9703fc35f53 ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 22:14:25.363 [INFO][4865] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 
22:14:25.370 [INFO][4865] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5acb70e9-e1b2-4039-8e86-bf988a415a12", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c", Pod:"coredns-76f75df574-t59f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9703fc35f53", MAC:"32:8e:37:b4:05:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:25.435272 containerd[1968]: 2024-08-05 22:14:25.421 [INFO][4865] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c" Namespace="kube-system" Pod="coredns-76f75df574-t59f8" WorkloadEndpoint="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.072 [INFO][4910] k8s.go 608: Cleaning up netns ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.074 [INFO][4910] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" iface="eth0" netns="/var/run/netns/cni-cb71c876-dc44-b1f5-63d5-4224c4e22059" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.076 [INFO][4910] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" iface="eth0" netns="/var/run/netns/cni-cb71c876-dc44-b1f5-63d5-4224c4e22059" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.079 [INFO][4910] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" iface="eth0" netns="/var/run/netns/cni-cb71c876-dc44-b1f5-63d5-4224c4e22059" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.079 [INFO][4910] k8s.go 615: Releasing IP address(es) ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.079 [INFO][4910] utils.go 188: Calico CNI releasing IP address ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.275 [INFO][4936] ipam_plugin.go 411: Releasing address using handleID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.275 [INFO][4936] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.326 [INFO][4936] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.399 [WARNING][4936] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.399 [INFO][4936] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.415 [INFO][4936] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:25.447589 containerd[1968]: 2024-08-05 22:14:25.426 [INFO][4910] k8s.go 621: Teardown processing complete. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:25.447589 containerd[1968]: time="2024-08-05T22:14:25.443467907Z" level=info msg="TearDown network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" successfully" Aug 5 22:14:25.447589 containerd[1968]: time="2024-08-05T22:14:25.447459568Z" level=info msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" returns successfully" Aug 5 22:14:25.449327 containerd[1968]: time="2024-08-05T22:14:25.449031557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxt6z,Uid:3970d43f-5b24-4d33-8e29-668cdfe6b128,Namespace:calico-system,Attempt:1,}" Aug 5 22:14:25.451065 systemd[1]: run-netns-cni\x2dcb71c876\x2ddc44\x2db1f5\x2d63d5\x2d4224c4e22059.mount: Deactivated successfully. Aug 5 22:14:25.575752 containerd[1968]: time="2024-08-05T22:14:25.570448246Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:25.575752 containerd[1968]: time="2024-08-05T22:14:25.570517003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:25.575752 containerd[1968]: time="2024-08-05T22:14:25.570541441Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:25.575752 containerd[1968]: time="2024-08-05T22:14:25.570557102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:25.720334 systemd[1]: Started cri-containerd-f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c.scope - libcontainer container f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c. Aug 5 22:14:26.128733 containerd[1968]: time="2024-08-05T22:14:26.127575674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-t59f8,Uid:5acb70e9-e1b2-4039-8e86-bf988a415a12,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c\"" Aug 5 22:14:26.163539 containerd[1968]: time="2024-08-05T22:14:26.162979544Z" level=info msg="CreateContainer within sandbox \"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:14:26.222312 systemd-networkd[1809]: cali820b8889a67: Link UP Aug 5 22:14:26.239024 systemd-networkd[1809]: cali820b8889a67: Gained carrier Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:25.815 [INFO][4965] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0 csi-node-driver- calico-system 3970d43f-5b24-4d33-8e29-668cdfe6b128 779 0 2024-08-05 22:13:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-23-76 csi-node-driver-wxt6z eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali820b8889a67 [] []}} ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:25.816 [INFO][4965] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:25.945 [INFO][5013] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" HandleID="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.043 [INFO][5013] ipam_plugin.go 264: Auto assigning IP ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" HandleID="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fd240), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-76", "pod":"csi-node-driver-wxt6z", "timestamp":"2024-08-05 22:14:25.945745586 +0000 UTC"}, Hostname:"ip-172-31-23-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.044 [INFO][5013] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.044 [INFO][5013] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.045 [INFO][5013] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-76' Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.057 [INFO][5013] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.091 [INFO][5013] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.139 [INFO][5013] ipam.go 489: Trying affinity for 192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.145 [INFO][5013] ipam.go 155: Attempting to load block cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.155 [INFO][5013] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.156 [INFO][5013] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.166 [INFO][5013] ipam.go 1685: Creating new handle: k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7 Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.187 [INFO][5013] ipam.go 1203: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.209 [INFO][5013] ipam.go 1216: Successfully claimed IPs: [192.168.43.68/26] block=192.168.43.64/26 handle="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.209 [INFO][5013] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.68/26] handle="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" host="ip-172-31-23-76" Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.209 [INFO][5013] ipam_plugin.go 373: Released host-wide IPAM lock. 
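The "About to acquire / Acquired / Released host-wide IPAM lock" messages from [4922], [4936] and [5013] show that every address assignment or release on this node serializes on a single lock: at 22:14:25 the [4936] release logged "About to acquire" at 25.275 but only "Acquired" at 25.326, the moment [4922] released it. A rough Go sketch of that bracketing pattern follows; it is illustrative only (the hostIPAM type and assign method are made-up names, not Calico's implementation).

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes IPAM updates on one node, mirroring the
// acquire/assign/release bracketing seen in the log above.
type hostIPAM struct {
	mu       sync.Mutex
	assigned map[string]bool
}

func (h *hostIPAM) assign(ip string) {
	fmt.Println("About to acquire host-wide IPAM lock.")
	h.mu.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	h.assigned[ip] = true // update the block only while holding the lock
	h.mu.Unlock()
	fmt.Println("Released host-wide IPAM lock.")
}

func main() {
	h := &hostIPAM{assigned: map[string]bool{}}
	var wg sync.WaitGroup
	// Two concurrent CNI ADDs, like the coredns and csi-node-driver pods above.
	for _, ip := range []string{"192.168.43.67", "192.168.43.68"} {
		wg.Add(1)
		go func(ip string) { defer wg.Done(); h.assign(ip) }(ip)
	}
	wg.Wait()
	fmt.Println(len(h.assigned), "addresses assigned")
}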
Aug 5 22:14:26.280286 containerd[1968]: 2024-08-05 22:14:26.209 [INFO][5013] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.43.68/26] IPv6=[] ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" HandleID="k8s-pod-network.6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.218 [INFO][4965] k8s.go 386: Populated endpoint ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3970d43f-5b24-4d33-8e29-668cdfe6b128", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"", Pod:"csi-node-driver-wxt6z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali820b8889a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.218 [INFO][4965] k8s.go 387: Calico CNI using IPs: [192.168.43.68/32] ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.218 [INFO][4965] dataplane_linux.go 68: Setting the host side veth name to cali820b8889a67 ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.224 [INFO][4965] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.227 [INFO][4965] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3970d43f-5b24-4d33-8e29-668cdfe6b128", ResourceVersion:"779", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7", Pod:"csi-node-driver-wxt6z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali820b8889a67", MAC:"32:ce:e9:66:85:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:26.281330 containerd[1968]: 2024-08-05 22:14:26.266 [INFO][4965] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7" Namespace="calico-system" Pod="csi-node-driver-wxt6z" WorkloadEndpoint="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:26.299160 containerd[1968]: time="2024-08-05T22:14:26.299108943Z" level=info msg="CreateContainer within sandbox \"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6ba1791822bb47868368e29a97ab577a7d5338377a12a704a7278b8213f574a3\"" Aug 5 22:14:26.306856 containerd[1968]: time="2024-08-05T22:14:26.306011036Z" level=info msg="StartContainer for \"6ba1791822bb47868368e29a97ab577a7d5338377a12a704a7278b8213f574a3\"" Aug 5 22:14:26.309667 systemd[1]: run-containerd-runc-k8s.io-f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c-runc.d8Ke13.mount: Deactivated successfully. Aug 5 22:14:26.434883 containerd[1968]: time="2024-08-05T22:14:26.432656089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:26.435890 containerd[1968]: time="2024-08-05T22:14:26.435447840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:26.456504 containerd[1968]: time="2024-08-05T22:14:26.436047335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:26.456504 containerd[1968]: time="2024-08-05T22:14:26.436305854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:26.495387 systemd[1]: Started cri-containerd-6ba1791822bb47868368e29a97ab577a7d5338377a12a704a7278b8213f574a3.scope - libcontainer container 6ba1791822bb47868368e29a97ab577a7d5338377a12a704a7278b8213f574a3. Aug 5 22:14:26.540281 systemd-networkd[1809]: cali9703fc35f53: Gained IPv6LL Aug 5 22:14:26.578640 systemd[1]: Started cri-containerd-6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7.scope - libcontainer container 6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7. Aug 5 22:14:26.739777 containerd[1968]: time="2024-08-05T22:14:26.739499124Z" level=info msg="StartContainer for \"6ba1791822bb47868368e29a97ab577a7d5338377a12a704a7278b8213f574a3\" returns successfully" Aug 5 22:14:26.888476 containerd[1968]: time="2024-08-05T22:14:26.888428715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxt6z,Uid:3970d43f-5b24-4d33-8e29-668cdfe6b128,Namespace:calico-system,Attempt:1,} returns sandbox id \"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7\"" Aug 5 22:14:27.304542 systemd[1]: run-containerd-runc-k8s.io-6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7-runc.3hxdgR.mount: Deactivated successfully. Aug 5 22:14:27.307231 containerd[1968]: time="2024-08-05T22:14:27.307161995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.319210 containerd[1968]: time="2024-08-05T22:14:27.319138940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:14:27.328875 containerd[1968]: time="2024-08-05T22:14:27.328798902Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.338447 containerd[1968]: time="2024-08-05T22:14:27.337837535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.342599 containerd[1968]: time="2024-08-05T22:14:27.342493483Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 5.577269821s" Aug 5 22:14:27.342599 containerd[1968]: time="2024-08-05T22:14:27.342559360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:14:27.344817 containerd[1968]: time="2024-08-05T22:14:27.344659475Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:14:27.374534 containerd[1968]: time="2024-08-05T22:14:27.373691708Z" level=info msg="CreateContainer within sandbox \"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:14:27.419531 containerd[1968]: time="2024-08-05T22:14:27.416499330Z" level=info msg="CreateContainer within sandbox 
\"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d\"" Aug 5 22:14:27.423772 containerd[1968]: time="2024-08-05T22:14:27.421584503Z" level=info msg="StartContainer for \"d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d\"" Aug 5 22:14:27.584311 systemd[1]: Started cri-containerd-d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d.scope - libcontainer container d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d. Aug 5 22:14:27.604643 kubelet[3377]: I0805 22:14:27.604592 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-t59f8" podStartSLOduration=42.604470093 podStartE2EDuration="42.604470093s" podCreationTimestamp="2024-08-05 22:13:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:14:27.553306715 +0000 UTC m=+53.973000014" watchObservedRunningTime="2024-08-05 22:14:27.604470093 +0000 UTC m=+54.024163370" Aug 5 22:14:27.741990 containerd[1968]: time="2024-08-05T22:14:27.741913239Z" level=info msg="StartContainer for \"d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d\" returns successfully" Aug 5 22:14:27.885383 systemd-networkd[1809]: cali820b8889a67: Gained IPv6LL Aug 5 22:14:28.643720 kubelet[3377]: I0805 22:14:28.643666 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85c545f87d-qxp9x" podStartSLOduration=31.064011121 podStartE2EDuration="36.643611222s" podCreationTimestamp="2024-08-05 22:13:52 +0000 UTC" firstStartedPulling="2024-08-05 22:14:21.764733969 +0000 UTC m=+48.184427222" lastFinishedPulling="2024-08-05 22:14:27.344334064 +0000 UTC m=+53.764027323" observedRunningTime="2024-08-05 22:14:28.609804027 +0000 UTC m=+55.029497312" watchObservedRunningTime="2024-08-05 22:14:28.643611222 +0000 UTC m=+55.063304497" Aug 5 22:14:29.043392 containerd[1968]: time="2024-08-05T22:14:29.041378159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:29.046224 containerd[1968]: time="2024-08-05T22:14:29.046158961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:14:29.050129 containerd[1968]: time="2024-08-05T22:14:29.048989236Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:29.054758 containerd[1968]: time="2024-08-05T22:14:29.054700097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:29.057022 containerd[1968]: time="2024-08-05T22:14:29.056972098Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.712265469s" Aug 5 22:14:29.057372 containerd[1968]: time="2024-08-05T22:14:29.057345447Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:14:29.061257 containerd[1968]: time="2024-08-05T22:14:29.061141955Z" level=info msg="CreateContainer within sandbox \"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:14:29.185112 containerd[1968]: time="2024-08-05T22:14:29.185049219Z" level=info msg="CreateContainer within sandbox \"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cdc7598b764639a65058d3b7ddabcd8de5108a0e2e0260c8d70bd29e4f810053\"" Aug 5 22:14:29.187825 containerd[1968]: time="2024-08-05T22:14:29.186257491Z" level=info msg="StartContainer for \"cdc7598b764639a65058d3b7ddabcd8de5108a0e2e0260c8d70bd29e4f810053\"" Aug 5 22:14:29.265425 systemd[1]: Started sshd@8-172.31.23.76:22-139.178.89.65:54346.service - OpenSSH per-connection server daemon (139.178.89.65:54346). Aug 5 22:14:29.316052 systemd[1]: Started cri-containerd-cdc7598b764639a65058d3b7ddabcd8de5108a0e2e0260c8d70bd29e4f810053.scope - libcontainer container cdc7598b764639a65058d3b7ddabcd8de5108a0e2e0260c8d70bd29e4f810053. Aug 5 22:14:29.455971 containerd[1968]: time="2024-08-05T22:14:29.455906040Z" level=info msg="StartContainer for \"cdc7598b764639a65058d3b7ddabcd8de5108a0e2e0260c8d70bd29e4f810053\" returns successfully" Aug 5 22:14:29.487172 containerd[1968]: time="2024-08-05T22:14:29.485865296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:14:29.586580 sshd[5215]: Accepted publickey for core from 139.178.89.65 port 54346 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:29.596358 sshd[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:29.667498 systemd-logind[1945]: New session 9 of user core. Aug 5 22:14:29.672579 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:14:30.455195 sshd[5215]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:30.464397 systemd[1]: sshd@8-172.31.23.76:22-139.178.89.65:54346.service: Deactivated successfully. Aug 5 22:14:30.470007 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:14:30.475336 systemd-logind[1945]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:14:30.477028 systemd-logind[1945]: Removed session 9. 
Aug 5 22:14:30.745435 ntpd[1937]: Listen normally on 7 vxlan.calico 192.168.43.64:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 7 vxlan.calico 192.168.43.64:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 8 vxlan.calico [fe80::6480:73ff:feb7:47d6%4]:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 9 calie47ae8dd660 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 10 cali7bc7a158465 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 11 cali9703fc35f53 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:30.746609 ntpd[1937]: 5 Aug 22:14:30 ntpd[1937]: Listen normally on 12 cali820b8889a67 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:30.745523 ntpd[1937]: Listen normally on 8 vxlan.calico [fe80::6480:73ff:feb7:47d6%4]:123 Aug 5 22:14:30.745577 ntpd[1937]: Listen normally on 9 calie47ae8dd660 [fe80::ecee:eeff:feee:eeee%7]:123 Aug 5 22:14:30.745614 ntpd[1937]: Listen normally on 10 cali7bc7a158465 [fe80::ecee:eeff:feee:eeee%8]:123 Aug 5 22:14:30.745650 ntpd[1937]: Listen normally on 11 cali9703fc35f53 [fe80::ecee:eeff:feee:eeee%9]:123 Aug 5 22:14:30.745687 ntpd[1937]: Listen normally on 12 cali820b8889a67 [fe80::ecee:eeff:feee:eeee%10]:123 Aug 5 22:14:31.585833 containerd[1968]: time="2024-08-05T22:14:31.585780315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:31.589291 containerd[1968]: time="2024-08-05T22:14:31.589217569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:14:31.592200 containerd[1968]: time="2024-08-05T22:14:31.592114375Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:31.604166 containerd[1968]: time="2024-08-05T22:14:31.603460734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:31.607294 containerd[1968]: time="2024-08-05T22:14:31.607239627Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.121072532s" Aug 5 22:14:31.607587 containerd[1968]: time="2024-08-05T22:14:31.607485820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:14:31.611988 containerd[1968]: time="2024-08-05T22:14:31.611733033Z" level=info msg="CreateContainer within sandbox \"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:14:31.728242 containerd[1968]: time="2024-08-05T22:14:31.727038407Z" level=info msg="CreateContainer within sandbox 
\"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08edcdb8eeafd14f35557bfa9576cc23ad6c00d15f14a0f043b0a3ab201da73f\"" Aug 5 22:14:31.730285 containerd[1968]: time="2024-08-05T22:14:31.728465765Z" level=info msg="StartContainer for \"08edcdb8eeafd14f35557bfa9576cc23ad6c00d15f14a0f043b0a3ab201da73f\"" Aug 5 22:14:31.812425 systemd[1]: Started cri-containerd-08edcdb8eeafd14f35557bfa9576cc23ad6c00d15f14a0f043b0a3ab201da73f.scope - libcontainer container 08edcdb8eeafd14f35557bfa9576cc23ad6c00d15f14a0f043b0a3ab201da73f. Aug 5 22:14:31.908164 containerd[1968]: time="2024-08-05T22:14:31.907949587Z" level=info msg="StartContainer for \"08edcdb8eeafd14f35557bfa9576cc23ad6c00d15f14a0f043b0a3ab201da73f\" returns successfully" Aug 5 22:14:32.182848 kubelet[3377]: I0805 22:14:32.182707 3377 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:14:32.186166 kubelet[3377]: I0805 22:14:32.185626 3377 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:14:33.864221 containerd[1968]: time="2024-08-05T22:14:33.864163706Z" level=info msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.935 [WARNING][5307] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d", Pod:"coredns-76f75df574-dqrz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc7a158465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:33.988432 
containerd[1968]: 2024-08-05 22:14:33.936 [INFO][5307] k8s.go 608: Cleaning up netns ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.936 [INFO][5307] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" iface="eth0" netns="" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.936 [INFO][5307] k8s.go 615: Releasing IP address(es) ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.936 [INFO][5307] utils.go 188: Calico CNI releasing IP address ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.970 [INFO][5313] ipam_plugin.go 411: Releasing address using handleID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.970 [INFO][5313] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.970 [INFO][5313] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.977 [WARNING][5313] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.978 [INFO][5313] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.981 [INFO][5313] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:33.988432 containerd[1968]: 2024-08-05 22:14:33.984 [INFO][5307] k8s.go 621: Teardown processing complete. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:33.990797 containerd[1968]: time="2024-08-05T22:14:33.988477621Z" level=info msg="TearDown network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" successfully" Aug 5 22:14:33.990797 containerd[1968]: time="2024-08-05T22:14:33.988507513Z" level=info msg="StopPodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" returns successfully" Aug 5 22:14:33.994103 containerd[1968]: time="2024-08-05T22:14:33.994051539Z" level=info msg="RemovePodSandbox for \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" Aug 5 22:14:33.998538 containerd[1968]: time="2024-08-05T22:14:33.997983528Z" level=info msg="Forcibly stopping sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\"" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.128 [WARNING][5331] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"a8535b5a-2ddc-41b7-b6d1-d9ec3a16dec5", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f45da406ca00ab40135fd30da9342ac0ec76c7640e10e92e7fba5ca188a9909d", Pod:"coredns-76f75df574-dqrz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bc7a158465", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.128 [INFO][5331] k8s.go 608: Cleaning up netns ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.128 [INFO][5331] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" iface="eth0" netns="" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.128 [INFO][5331] k8s.go 615: Releasing IP address(es) ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.128 [INFO][5331] utils.go 188: Calico CNI releasing IP address ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.187 [INFO][5337] ipam_plugin.go 411: Releasing address using handleID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.187 [INFO][5337] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.187 [INFO][5337] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.194 [WARNING][5337] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.194 [INFO][5337] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" HandleID="k8s-pod-network.7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--dqrz8-eth0" Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.196 [INFO][5337] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.201491 containerd[1968]: 2024-08-05 22:14:34.199 [INFO][5331] k8s.go 621: Teardown processing complete. ContainerID="7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2" Aug 5 22:14:34.204238 containerd[1968]: time="2024-08-05T22:14:34.203153212Z" level=info msg="TearDown network for sandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" successfully" Aug 5 22:14:34.217673 containerd[1968]: time="2024-08-05T22:14:34.217614497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:34.217866 containerd[1968]: time="2024-08-05T22:14:34.217714409Z" level=info msg="RemovePodSandbox \"7eb9fadee3b4b9393ca7b6176fa139f53f7d51d08a2e5305112a24e809eef1b2\" returns successfully" Aug 5 22:14:34.218712 containerd[1968]: time="2024-08-05T22:14:34.218679368Z" level=info msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.264 [WARNING][5355] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0", GenerateName:"calico-kube-controllers-85c545f87d-", Namespace:"calico-system", SelfLink:"", UID:"c8231fff-c2fe-47d9-9a81-5eef98071a14", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c545f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666", Pod:"calico-kube-controllers-85c545f87d-qxp9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie47ae8dd660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.264 [INFO][5355] k8s.go 608: Cleaning up netns ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.264 [INFO][5355] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" iface="eth0" netns="" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.264 [INFO][5355] k8s.go 615: Releasing IP address(es) ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.264 [INFO][5355] utils.go 188: Calico CNI releasing IP address ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.293 [INFO][5361] ipam_plugin.go 411: Releasing address using handleID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.293 [INFO][5361] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.293 [INFO][5361] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.299 [WARNING][5361] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.299 [INFO][5361] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.302 [INFO][5361] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.306198 containerd[1968]: 2024-08-05 22:14:34.303 [INFO][5355] k8s.go 621: Teardown processing complete. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.309210 containerd[1968]: time="2024-08-05T22:14:34.306235009Z" level=info msg="TearDown network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" successfully" Aug 5 22:14:34.309210 containerd[1968]: time="2024-08-05T22:14:34.306264912Z" level=info msg="StopPodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" returns successfully" Aug 5 22:14:34.309210 containerd[1968]: time="2024-08-05T22:14:34.306712993Z" level=info msg="RemovePodSandbox for \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" Aug 5 22:14:34.309210 containerd[1968]: time="2024-08-05T22:14:34.306740254Z" level=info msg="Forcibly stopping sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\"" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.364 [WARNING][5379] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0", GenerateName:"calico-kube-controllers-85c545f87d-", Namespace:"calico-system", SelfLink:"", UID:"c8231fff-c2fe-47d9-9a81-5eef98071a14", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85c545f87d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"9ac7d1cf192a7fca2d3c36f91fd9803b523fd6e574e61ee3c9bce514aa6cb666", Pod:"calico-kube-controllers-85c545f87d-qxp9x", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie47ae8dd660", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.364 [INFO][5379] k8s.go 608: Cleaning up netns ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.364 [INFO][5379] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" iface="eth0" netns="" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.364 [INFO][5379] k8s.go 615: Releasing IP address(es) ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.364 [INFO][5379] utils.go 188: Calico CNI releasing IP address ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.393 [INFO][5385] ipam_plugin.go 411: Releasing address using handleID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.393 [INFO][5385] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.393 [INFO][5385] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.401 [WARNING][5385] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.401 [INFO][5385] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" HandleID="k8s-pod-network.6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Workload="ip--172--31--23--76-k8s-calico--kube--controllers--85c545f87d--qxp9x-eth0" Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.404 [INFO][5385] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.408479 containerd[1968]: 2024-08-05 22:14:34.406 [INFO][5379] k8s.go 621: Teardown processing complete. ContainerID="6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c" Aug 5 22:14:34.410869 containerd[1968]: time="2024-08-05T22:14:34.408529183Z" level=info msg="TearDown network for sandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" successfully" Aug 5 22:14:34.414854 containerd[1968]: time="2024-08-05T22:14:34.414637791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:34.414854 containerd[1968]: time="2024-08-05T22:14:34.414720496Z" level=info msg="RemovePodSandbox \"6a26c4ffc77e565cbdd9ce4eb97f032d157c17aa9ba3848fe3436db13a6ccc7c\" returns successfully" Aug 5 22:14:34.415831 containerd[1968]: time="2024-08-05T22:14:34.415375081Z" level=info msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.471 [WARNING][5403] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5acb70e9-e1b2-4039-8e86-bf988a415a12", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c", Pod:"coredns-76f75df574-t59f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9703fc35f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.471 [INFO][5403] k8s.go 608: Cleaning up netns ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.471 [INFO][5403] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" iface="eth0" netns="" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.471 [INFO][5403] k8s.go 615: Releasing IP address(es) ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.471 [INFO][5403] utils.go 188: Calico CNI releasing IP address ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.498 [INFO][5409] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.499 [INFO][5409] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.500 [INFO][5409] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.506 [WARNING][5409] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.506 [INFO][5409] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.509 [INFO][5409] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.514729 containerd[1968]: 2024-08-05 22:14:34.511 [INFO][5403] k8s.go 621: Teardown processing complete. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.516988 containerd[1968]: time="2024-08-05T22:14:34.514757059Z" level=info msg="TearDown network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" successfully" Aug 5 22:14:34.516988 containerd[1968]: time="2024-08-05T22:14:34.515040445Z" level=info msg="StopPodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" returns successfully" Aug 5 22:14:34.519121 containerd[1968]: time="2024-08-05T22:14:34.519044095Z" level=info msg="RemovePodSandbox for \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" Aug 5 22:14:34.519237 containerd[1968]: time="2024-08-05T22:14:34.519125602Z" level=info msg="Forcibly stopping sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\"" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.594 [WARNING][5428] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"5acb70e9-e1b2-4039-8e86-bf988a415a12", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"f5518391dbe5339ad38c90fa34238f0b398d1f2ac23eafcbeae33bbdd2ca1f3c", Pod:"coredns-76f75df574-t59f8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9703fc35f53", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.597 [INFO][5428] k8s.go 608: Cleaning up netns ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.598 [INFO][5428] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" iface="eth0" netns="" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.598 [INFO][5428] k8s.go 615: Releasing IP address(es) ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.598 [INFO][5428] utils.go 188: Calico CNI releasing IP address ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.642 [INFO][5436] ipam_plugin.go 411: Releasing address using handleID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.642 [INFO][5436] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.643 [INFO][5436] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.651 [WARNING][5436] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.652 [INFO][5436] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" HandleID="k8s-pod-network.a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Workload="ip--172--31--23--76-k8s-coredns--76f75df574--t59f8-eth0" Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.654 [INFO][5436] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.660057 containerd[1968]: 2024-08-05 22:14:34.657 [INFO][5428] k8s.go 621: Teardown processing complete. ContainerID="a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2" Aug 5 22:14:34.660754 containerd[1968]: time="2024-08-05T22:14:34.660164926Z" level=info msg="TearDown network for sandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" successfully" Aug 5 22:14:34.668445 containerd[1968]: time="2024-08-05T22:14:34.668370478Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:34.668445 containerd[1968]: time="2024-08-05T22:14:34.668472255Z" level=info msg="RemovePodSandbox \"a7ffe260849308ab8ace5999e906b158fdc902e954b18314963a7c71161ab7f2\" returns successfully" Aug 5 22:14:34.670962 containerd[1968]: time="2024-08-05T22:14:34.670929205Z" level=info msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.737 [WARNING][5454] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3970d43f-5b24-4d33-8e29-668cdfe6b128", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7", Pod:"csi-node-driver-wxt6z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali820b8889a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.737 [INFO][5454] k8s.go 608: Cleaning up netns ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.738 [INFO][5454] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" iface="eth0" netns="" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.738 [INFO][5454] k8s.go 615: Releasing IP address(es) ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.738 [INFO][5454] utils.go 188: Calico CNI releasing IP address ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.765 [INFO][5461] ipam_plugin.go 411: Releasing address using handleID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.765 [INFO][5461] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.765 [INFO][5461] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.773 [WARNING][5461] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.773 [INFO][5461] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.776 [INFO][5461] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.781986 containerd[1968]: 2024-08-05 22:14:34.779 [INFO][5454] k8s.go 621: Teardown processing complete. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.781986 containerd[1968]: time="2024-08-05T22:14:34.781949992Z" level=info msg="TearDown network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" successfully" Aug 5 22:14:34.781986 containerd[1968]: time="2024-08-05T22:14:34.781980992Z" level=info msg="StopPodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" returns successfully" Aug 5 22:14:34.783147 containerd[1968]: time="2024-08-05T22:14:34.782653541Z" level=info msg="RemovePodSandbox for \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" Aug 5 22:14:34.783147 containerd[1968]: time="2024-08-05T22:14:34.782689668Z" level=info msg="Forcibly stopping sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\"" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.839 [WARNING][5480] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3970d43f-5b24-4d33-8e29-668cdfe6b128", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"6a995aab6a2d2c7db56cb94105c748f1732a7e0a2cf440f302c298e0cfd2a6b7", Pod:"csi-node-driver-wxt6z", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.43.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali820b8889a67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.840 [INFO][5480] k8s.go 608: Cleaning up netns ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.840 [INFO][5480] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" iface="eth0" netns="" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.840 [INFO][5480] k8s.go 615: Releasing IP address(es) ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.840 [INFO][5480] utils.go 188: Calico CNI releasing IP address ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.882 [INFO][5486] ipam_plugin.go 411: Releasing address using handleID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.882 [INFO][5486] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.882 [INFO][5486] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.888 [WARNING][5486] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.888 [INFO][5486] ipam_plugin.go 439: Releasing address using workloadID ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" HandleID="k8s-pod-network.5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Workload="ip--172--31--23--76-k8s-csi--node--driver--wxt6z-eth0" Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.890 [INFO][5486] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:34.895143 containerd[1968]: 2024-08-05 22:14:34.893 [INFO][5480] k8s.go 621: Teardown processing complete. ContainerID="5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496" Aug 5 22:14:34.896686 containerd[1968]: time="2024-08-05T22:14:34.895190907Z" level=info msg="TearDown network for sandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" successfully" Aug 5 22:14:34.900649 containerd[1968]: time="2024-08-05T22:14:34.900453944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:14:34.900806 containerd[1968]: time="2024-08-05T22:14:34.900672404Z" level=info msg="RemovePodSandbox \"5338df0f355e59e2626a632c1e536a635a508a1ff726d1d5480a517a57bab496\" returns successfully" Aug 5 22:14:35.498672 systemd[1]: Started sshd@9-172.31.23.76:22-139.178.89.65:57750.service - OpenSSH per-connection server daemon (139.178.89.65:57750). Aug 5 22:14:35.741895 sshd[5494]: Accepted publickey for core from 139.178.89.65 port 57750 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:35.745426 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:35.756868 systemd-logind[1945]: New session 10 of user core. Aug 5 22:14:35.760293 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:14:36.069618 sshd[5494]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:36.076411 systemd[1]: sshd@9-172.31.23.76:22-139.178.89.65:57750.service: Deactivated successfully. Aug 5 22:14:36.080754 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:14:36.082129 systemd-logind[1945]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:14:36.083364 systemd-logind[1945]: Removed session 10. Aug 5 22:14:38.006068 systemd[1]: run-containerd-runc-k8s.io-d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d-runc.y9vVhe.mount: Deactivated successfully. Aug 5 22:14:41.113895 systemd[1]: Started sshd@10-172.31.23.76:22-139.178.89.65:48126.service - OpenSSH per-connection server daemon (139.178.89.65:48126). Aug 5 22:14:41.311776 sshd[5542]: Accepted publickey for core from 139.178.89.65 port 48126 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:41.313627 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:41.319736 systemd-logind[1945]: New session 11 of user core. Aug 5 22:14:41.323296 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 5 22:14:41.543498 sshd[5542]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:41.549777 systemd-logind[1945]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:14:41.550889 systemd[1]: sshd@10-172.31.23.76:22-139.178.89.65:48126.service: Deactivated successfully. Aug 5 22:14:41.553601 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:14:41.555190 systemd-logind[1945]: Removed session 11. Aug 5 22:14:41.582499 systemd[1]: Started sshd@11-172.31.23.76:22-139.178.89.65:48130.service - OpenSSH per-connection server daemon (139.178.89.65:48130). Aug 5 22:14:41.755066 sshd[5556]: Accepted publickey for core from 139.178.89.65 port 48130 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:41.756522 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:41.762014 systemd-logind[1945]: New session 12 of user core. Aug 5 22:14:41.768304 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:14:42.202701 sshd[5556]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:42.213198 systemd[1]: sshd@11-172.31.23.76:22-139.178.89.65:48130.service: Deactivated successfully. Aug 5 22:14:42.219411 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:14:42.223186 systemd-logind[1945]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:14:42.263263 systemd[1]: Started sshd@12-172.31.23.76:22-139.178.89.65:48132.service - OpenSSH per-connection server daemon (139.178.89.65:48132). Aug 5 22:14:42.265419 systemd-logind[1945]: Removed session 12. Aug 5 22:14:42.491060 sshd[5567]: Accepted publickey for core from 139.178.89.65 port 48132 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:42.491722 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:42.498659 systemd-logind[1945]: New session 13 of user core. Aug 5 22:14:42.510404 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:14:42.807470 sshd[5567]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:42.814294 systemd[1]: sshd@12-172.31.23.76:22-139.178.89.65:48132.service: Deactivated successfully. Aug 5 22:14:42.816805 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:14:42.818052 systemd-logind[1945]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:14:42.819549 systemd-logind[1945]: Removed session 13. Aug 5 22:14:47.848499 systemd[1]: Started sshd@13-172.31.23.76:22-139.178.89.65:48140.service - OpenSSH per-connection server daemon (139.178.89.65:48140). Aug 5 22:14:48.042719 sshd[5589]: Accepted publickey for core from 139.178.89.65 port 48140 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:48.044656 sshd[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:48.057750 systemd-logind[1945]: New session 14 of user core. Aug 5 22:14:48.067960 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:14:48.424648 sshd[5589]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:48.437106 systemd[1]: sshd@13-172.31.23.76:22-139.178.89.65:48140.service: Deactivated successfully. Aug 5 22:14:48.448907 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:14:48.455993 systemd-logind[1945]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:14:48.462188 systemd-logind[1945]: Removed session 14. 
Aug 5 22:14:49.057184 systemd[1]: run-containerd-runc-k8s.io-23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740-runc.Yb5wSE.mount: Deactivated successfully. Aug 5 22:14:53.467553 systemd[1]: Started sshd@14-172.31.23.76:22-139.178.89.65:53836.service - OpenSSH per-connection server daemon (139.178.89.65:53836). Aug 5 22:14:53.717344 sshd[5627]: Accepted publickey for core from 139.178.89.65 port 53836 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:53.722141 sshd[5627]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:53.760821 systemd-logind[1945]: New session 15 of user core. Aug 5 22:14:53.766309 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:14:54.105620 sshd[5627]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:54.116688 systemd-logind[1945]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:14:54.116966 systemd[1]: sshd@14-172.31.23.76:22-139.178.89.65:53836.service: Deactivated successfully. Aug 5 22:14:54.120179 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:14:54.125284 systemd-logind[1945]: Removed session 15. Aug 5 22:14:59.147351 systemd[1]: Started sshd@15-172.31.23.76:22-139.178.89.65:53844.service - OpenSSH per-connection server daemon (139.178.89.65:53844). Aug 5 22:14:59.333121 sshd[5645]: Accepted publickey for core from 139.178.89.65 port 53844 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:14:59.344148 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:59.357736 systemd-logind[1945]: New session 16 of user core. Aug 5 22:14:59.365034 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:14:59.972828 sshd[5645]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:59.992634 systemd[1]: sshd@15-172.31.23.76:22-139.178.89.65:53844.service: Deactivated successfully. Aug 5 22:15:00.074477 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:15:00.136793 systemd-logind[1945]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:15:00.143408 systemd-logind[1945]: Removed session 16. Aug 5 22:15:05.003665 systemd[1]: Started sshd@16-172.31.23.76:22-139.178.89.65:52138.service - OpenSSH per-connection server daemon (139.178.89.65:52138). Aug 5 22:15:05.233386 sshd[5663]: Accepted publickey for core from 139.178.89.65 port 52138 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:05.235281 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:05.248550 systemd-logind[1945]: New session 17 of user core. Aug 5 22:15:05.263372 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:15:05.842944 sshd[5663]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:05.879774 systemd[1]: sshd@16-172.31.23.76:22-139.178.89.65:52138.service: Deactivated successfully. Aug 5 22:15:05.882588 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:15:05.885743 systemd-logind[1945]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:15:05.894786 systemd[1]: Started sshd@17-172.31.23.76:22-139.178.89.65:52146.service - OpenSSH per-connection server daemon (139.178.89.65:52146). Aug 5 22:15:05.901562 systemd-logind[1945]: Removed session 17. 
Aug 5 22:15:06.076158 sshd[5676]: Accepted publickey for core from 139.178.89.65 port 52146 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:06.078291 sshd[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:06.085979 systemd-logind[1945]: New session 18 of user core. Aug 5 22:15:06.090300 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:15:06.965408 sshd[5676]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:06.971982 systemd-logind[1945]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:15:06.973678 systemd[1]: sshd@17-172.31.23.76:22-139.178.89.65:52146.service: Deactivated successfully. Aug 5 22:15:06.981262 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:15:06.982919 systemd-logind[1945]: Removed session 18. Aug 5 22:15:07.010436 systemd[1]: Started sshd@18-172.31.23.76:22-139.178.89.65:52156.service - OpenSSH per-connection server daemon (139.178.89.65:52156). Aug 5 22:15:07.234378 sshd[5689]: Accepted publickey for core from 139.178.89.65 port 52156 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:07.237156 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:07.263678 systemd-logind[1945]: New session 19 of user core. Aug 5 22:15:07.271055 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 22:15:07.576202 kubelet[3377]: I0805 22:15:07.575238 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-wxt6z" podStartSLOduration=70.861529705 podStartE2EDuration="1m15.575178751s" podCreationTimestamp="2024-08-05 22:13:52 +0000 UTC" firstStartedPulling="2024-08-05 22:14:26.894346676 +0000 UTC m=+53.314039938" lastFinishedPulling="2024-08-05 22:14:31.60799572 +0000 UTC m=+58.027688984" observedRunningTime="2024-08-05 22:14:32.670969428 +0000 UTC m=+59.090662707" watchObservedRunningTime="2024-08-05 22:15:07.575178751 +0000 UTC m=+93.994872027" Aug 5 22:15:07.579974 kubelet[3377]: I0805 22:15:07.579233 3377 topology_manager.go:215] "Topology Admit Handler" podUID="70404419-91a2-4351-ae6e-f77c3e39cfd0" podNamespace="calico-apiserver" podName="calico-apiserver-678b5d44f7-fmvh8" Aug 5 22:15:07.678027 kubelet[3377]: I0805 22:15:07.675419 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70404419-91a2-4351-ae6e-f77c3e39cfd0-calico-apiserver-certs\") pod \"calico-apiserver-678b5d44f7-fmvh8\" (UID: \"70404419-91a2-4351-ae6e-f77c3e39cfd0\") " pod="calico-apiserver/calico-apiserver-678b5d44f7-fmvh8" Aug 5 22:15:07.677063 systemd[1]: Created slice kubepods-besteffort-pod70404419_91a2_4351_ae6e_f77c3e39cfd0.slice - libcontainer container kubepods-besteffort-pod70404419_91a2_4351_ae6e_f77c3e39cfd0.slice. 
Aug 5 22:15:07.678422 kubelet[3377]: I0805 22:15:07.678178 3377 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wqwf\" (UniqueName: \"kubernetes.io/projected/70404419-91a2-4351-ae6e-f77c3e39cfd0-kube-api-access-6wqwf\") pod \"calico-apiserver-678b5d44f7-fmvh8\" (UID: \"70404419-91a2-4351-ae6e-f77c3e39cfd0\") " pod="calico-apiserver/calico-apiserver-678b5d44f7-fmvh8" Aug 5 22:15:07.784882 kubelet[3377]: E0805 22:15:07.784601 3377 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:15:07.806966 kubelet[3377]: E0805 22:15:07.804958 3377 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70404419-91a2-4351-ae6e-f77c3e39cfd0-calico-apiserver-certs podName:70404419-91a2-4351-ae6e-f77c3e39cfd0 nodeName:}" failed. No retries permitted until 2024-08-05 22:15:08.296314405 +0000 UTC m=+94.716007681 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/70404419-91a2-4351-ae6e-f77c3e39cfd0-calico-apiserver-certs") pod "calico-apiserver-678b5d44f7-fmvh8" (UID: "70404419-91a2-4351-ae6e-f77c3e39cfd0") : secret "calico-apiserver-certs" not found Aug 5 22:15:08.614641 containerd[1968]: time="2024-08-05T22:15:08.613392511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678b5d44f7-fmvh8,Uid:70404419-91a2-4351-ae6e-f77c3e39cfd0,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:15:08.955780 systemd-networkd[1809]: cali24488acf602: Link UP Aug 5 22:15:08.961951 systemd-networkd[1809]: cali24488acf602: Gained carrier Aug 5 22:15:08.968935 (udev-worker)[5747]: Network interface NamePolicy= disabled on kernel command line. 
Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.783 [INFO][5729] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0 calico-apiserver-678b5d44f7- calico-apiserver 70404419-91a2-4351-ae6e-f77c3e39cfd0 1060 0 2024-08-05 22:15:07 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:678b5d44f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-76 calico-apiserver-678b5d44f7-fmvh8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali24488acf602 [] []}} ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.784 [INFO][5729] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.867 [INFO][5740] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" HandleID="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Workload="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.892 [INFO][5740] ipam_plugin.go 264: Auto assigning IP ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" HandleID="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Workload="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-76", "pod":"calico-apiserver-678b5d44f7-fmvh8", "timestamp":"2024-08-05 22:15:08.867588915 +0000 UTC"}, Hostname:"ip-172-31-23-76", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.892 [INFO][5740] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.892 [INFO][5740] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.892 [INFO][5740] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-76' Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.900 [INFO][5740] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.909 [INFO][5740] ipam.go 372: Looking up existing affinities for host host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.918 [INFO][5740] ipam.go 489: Trying affinity for 192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.921 [INFO][5740] ipam.go 155: Attempting to load block cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.924 [INFO][5740] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.64/26 host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.924 [INFO][5740] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.64/26 handle="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.927 [INFO][5740] ipam.go 1685: Creating new handle: k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56 Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.934 [INFO][5740] ipam.go 1203: Writing block in order to claim IPs block=192.168.43.64/26 handle="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.944 [INFO][5740] ipam.go 1216: Successfully claimed IPs: [192.168.43.69/26] block=192.168.43.64/26 handle="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.945 [INFO][5740] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.69/26] handle="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" host="ip-172-31-23-76" Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.945 [INFO][5740] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:15:08.985685 containerd[1968]: 2024-08-05 22:15:08.945 [INFO][5740] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.43.69/26] IPv6=[] ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" HandleID="k8s-pod-network.dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Workload="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.948 [INFO][5729] k8s.go 386: Populated endpoint ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0", GenerateName:"calico-apiserver-678b5d44f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70404419-91a2-4351-ae6e-f77c3e39cfd0", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 15, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678b5d44f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"", Pod:"calico-apiserver-678b5d44f7-fmvh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24488acf602", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.948 [INFO][5729] k8s.go 387: Calico CNI using IPs: [192.168.43.69/32] ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.948 [INFO][5729] dataplane_linux.go 68: Setting the host side veth name to cali24488acf602 ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.960 [INFO][5729] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.967 [INFO][5729] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0", GenerateName:"calico-apiserver-678b5d44f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70404419-91a2-4351-ae6e-f77c3e39cfd0", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 15, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"678b5d44f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-76", ContainerID:"dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56", Pod:"calico-apiserver-678b5d44f7-fmvh8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali24488acf602", MAC:"fe:cb:47:b1:c0:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:15:08.992527 containerd[1968]: 2024-08-05 22:15:08.981 [INFO][5729] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56" Namespace="calico-apiserver" Pod="calico-apiserver-678b5d44f7-fmvh8" WorkloadEndpoint="ip--172--31--23--76-k8s-calico--apiserver--678b5d44f7--fmvh8-eth0" Aug 5 22:15:09.103671 containerd[1968]: time="2024-08-05T22:15:09.103550148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:15:09.103960 containerd[1968]: time="2024-08-05T22:15:09.103872368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:15:09.103960 containerd[1968]: time="2024-08-05T22:15:09.103937251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:15:09.104402 containerd[1968]: time="2024-08-05T22:15:09.104010925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:15:09.193150 systemd[1]: run-containerd-runc-k8s.io-dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56-runc.Uxke22.mount: Deactivated successfully. Aug 5 22:15:09.221345 systemd[1]: Started cri-containerd-dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56.scope - libcontainer container dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56. 
Aug 5 22:15:09.552006 containerd[1968]: time="2024-08-05T22:15:09.551961233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-678b5d44f7-fmvh8,Uid:70404419-91a2-4351-ae6e-f77c3e39cfd0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56\"" Aug 5 22:15:09.557113 containerd[1968]: time="2024-08-05T22:15:09.557064572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:15:10.535026 sshd[5689]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:10.542306 systemd[1]: sshd@18-172.31.23.76:22-139.178.89.65:52156.service: Deactivated successfully. Aug 5 22:15:10.549904 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 22:15:10.551702 systemd-logind[1945]: Session 19 logged out. Waiting for processes to exit. Aug 5 22:15:10.576177 systemd[1]: Started sshd@19-172.31.23.76:22-139.178.89.65:52164.service - OpenSSH per-connection server daemon (139.178.89.65:52164). Aug 5 22:15:10.581960 systemd-logind[1945]: Removed session 19. Aug 5 22:15:10.764427 systemd-networkd[1809]: cali24488acf602: Gained IPv6LL Aug 5 22:15:10.779693 sshd[5814]: Accepted publickey for core from 139.178.89.65 port 52164 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:10.780068 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:10.789055 systemd-logind[1945]: New session 20 of user core. Aug 5 22:15:10.806833 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 22:15:12.487281 sshd[5814]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:12.505250 systemd[1]: sshd@19-172.31.23.76:22-139.178.89.65:52164.service: Deactivated successfully. Aug 5 22:15:12.514330 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 22:15:12.520278 systemd-logind[1945]: Session 20 logged out. Waiting for processes to exit. Aug 5 22:15:12.549053 systemd[1]: Started sshd@20-172.31.23.76:22-139.178.89.65:54724.service - OpenSSH per-connection server daemon (139.178.89.65:54724). Aug 5 22:15:12.553760 systemd-logind[1945]: Removed session 20. Aug 5 22:15:12.846892 sshd[5829]: Accepted publickey for core from 139.178.89.65 port 54724 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:12.849173 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:12.873434 systemd-logind[1945]: New session 21 of user core. Aug 5 22:15:12.877203 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 22:15:13.317109 sshd[5829]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:13.326679 systemd[1]: sshd@20-172.31.23.76:22-139.178.89.65:54724.service: Deactivated successfully. Aug 5 22:15:13.332517 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 22:15:13.334657 systemd-logind[1945]: Session 21 logged out. Waiting for processes to exit. Aug 5 22:15:13.337930 systemd-logind[1945]: Removed session 21. 
Aug 5 22:15:13.745278 ntpd[1937]: Listen normally on 13 cali24488acf602 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:15:13.746636 ntpd[1937]: 5 Aug 22:15:13 ntpd[1937]: Listen normally on 13 cali24488acf602 [fe80::ecee:eeff:feee:eeee%11]:123 Aug 5 22:15:13.914052 containerd[1968]: time="2024-08-05T22:15:13.913944663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:13.916742 containerd[1968]: time="2024-08-05T22:15:13.916422036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 5 22:15:13.919364 containerd[1968]: time="2024-08-05T22:15:13.919275380Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:13.926414 containerd[1968]: time="2024-08-05T22:15:13.924867236Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:15:13.926414 containerd[1968]: time="2024-08-05T22:15:13.925979414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 4.368853076s" Aug 5 22:15:13.926414 containerd[1968]: time="2024-08-05T22:15:13.926102098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:15:13.937943 containerd[1968]: time="2024-08-05T22:15:13.937849275Z" level=info msg="CreateContainer within sandbox \"dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:15:13.962558 containerd[1968]: time="2024-08-05T22:15:13.962507570Z" level=info msg="CreateContainer within sandbox \"dd820dc2e8d0e2ca9cc37294fe2e0de4eee46885fbd26cb115e41515a4ed6c56\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc\"" Aug 5 22:15:13.963630 containerd[1968]: time="2024-08-05T22:15:13.963134034Z" level=info msg="StartContainer for \"bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc\"" Aug 5 22:15:14.056812 systemd[1]: run-containerd-runc-k8s.io-bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc-runc.sPiEl6.mount: Deactivated successfully. Aug 5 22:15:14.070499 systemd[1]: Started cri-containerd-bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc.scope - libcontainer container bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc. 
Aug 5 22:15:14.165116 containerd[1968]: time="2024-08-05T22:15:14.164810591Z" level=info msg="StartContainer for \"bf63df4a2eed997b09261aeab9e9ac07b7375a212b35ae5abd48c81aa66d17dc\" returns successfully" Aug 5 22:15:15.920857 kubelet[3377]: I0805 22:15:15.920461 3377 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-678b5d44f7-fmvh8" podStartSLOduration=4.548563167 podStartE2EDuration="8.91917663s" podCreationTimestamp="2024-08-05 22:15:07 +0000 UTC" firstStartedPulling="2024-08-05 22:15:09.556275316 +0000 UTC m=+95.975968569" lastFinishedPulling="2024-08-05 22:15:13.926888771 +0000 UTC m=+100.346582032" observedRunningTime="2024-08-05 22:15:14.92779601 +0000 UTC m=+101.347489283" watchObservedRunningTime="2024-08-05 22:15:15.91917663 +0000 UTC m=+102.338869904" Aug 5 22:15:18.358145 systemd[1]: Started sshd@21-172.31.23.76:22-139.178.89.65:54728.service - OpenSSH per-connection server daemon (139.178.89.65:54728). Aug 5 22:15:18.559169 sshd[5897]: Accepted publickey for core from 139.178.89.65 port 54728 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:18.561336 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:18.568478 systemd-logind[1945]: New session 22 of user core. Aug 5 22:15:18.572307 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 22:15:19.080762 sshd[5897]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:19.097784 systemd[1]: sshd@21-172.31.23.76:22-139.178.89.65:54728.service: Deactivated successfully. Aug 5 22:15:19.129311 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 22:15:19.145171 systemd-logind[1945]: Session 22 logged out. Waiting for processes to exit. Aug 5 22:15:19.154946 systemd-logind[1945]: Removed session 22. Aug 5 22:15:19.208648 systemd[1]: run-containerd-runc-k8s.io-23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740-runc.9hNxXF.mount: Deactivated successfully. Aug 5 22:15:24.122265 systemd[1]: Started sshd@22-172.31.23.76:22-139.178.89.65:48698.service - OpenSSH per-connection server daemon (139.178.89.65:48698). Aug 5 22:15:24.322749 sshd[5940]: Accepted publickey for core from 139.178.89.65 port 48698 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:24.326525 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:24.354580 systemd-logind[1945]: New session 23 of user core. Aug 5 22:15:24.363445 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 22:15:24.620297 sshd[5940]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:24.624058 systemd[1]: sshd@22-172.31.23.76:22-139.178.89.65:48698.service: Deactivated successfully. Aug 5 22:15:24.627009 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 22:15:24.629562 systemd-logind[1945]: Session 23 logged out. Waiting for processes to exit. Aug 5 22:15:24.630936 systemd-logind[1945]: Removed session 23. Aug 5 22:15:29.655556 systemd[1]: Started sshd@23-172.31.23.76:22-139.178.89.65:48714.service - OpenSSH per-connection server daemon (139.178.89.65:48714). Aug 5 22:15:29.876921 sshd[5958]: Accepted publickey for core from 139.178.89.65 port 48714 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:29.879634 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:29.894598 systemd-logind[1945]: New session 24 of user core. 
Aug 5 22:15:29.903465 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 22:15:30.311427 sshd[5958]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:30.322581 systemd[1]: sshd@23-172.31.23.76:22-139.178.89.65:48714.service: Deactivated successfully. Aug 5 22:15:30.331831 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 22:15:30.339765 systemd-logind[1945]: Session 24 logged out. Waiting for processes to exit. Aug 5 22:15:30.345472 systemd-logind[1945]: Removed session 24. Aug 5 22:15:35.348146 systemd[1]: Started sshd@24-172.31.23.76:22-139.178.89.65:45062.service - OpenSSH per-connection server daemon (139.178.89.65:45062). Aug 5 22:15:35.562062 sshd[5997]: Accepted publickey for core from 139.178.89.65 port 45062 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:35.563595 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:35.580890 systemd-logind[1945]: New session 25 of user core. Aug 5 22:15:35.588127 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 22:15:35.942284 sshd[5997]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:35.961116 systemd[1]: sshd@24-172.31.23.76:22-139.178.89.65:45062.service: Deactivated successfully. Aug 5 22:15:35.972631 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 22:15:35.975253 systemd-logind[1945]: Session 25 logged out. Waiting for processes to exit. Aug 5 22:15:35.977300 systemd-logind[1945]: Removed session 25. Aug 5 22:15:40.997508 systemd[1]: Started sshd@25-172.31.23.76:22-139.178.89.65:48716.service - OpenSSH per-connection server daemon (139.178.89.65:48716). Aug 5 22:15:41.193951 sshd[6035]: Accepted publickey for core from 139.178.89.65 port 48716 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:41.194826 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:41.202999 systemd-logind[1945]: New session 26 of user core. Aug 5 22:15:41.211441 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 22:15:41.482246 sshd[6035]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:41.487614 systemd[1]: sshd@25-172.31.23.76:22-139.178.89.65:48716.service: Deactivated successfully. Aug 5 22:15:41.490725 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 22:15:41.491951 systemd-logind[1945]: Session 26 logged out. Waiting for processes to exit. Aug 5 22:15:41.493495 systemd-logind[1945]: Removed session 26. Aug 5 22:15:46.519643 systemd[1]: Started sshd@26-172.31.23.76:22-139.178.89.65:48718.service - OpenSSH per-connection server daemon (139.178.89.65:48718). Aug 5 22:15:46.713162 sshd[6055]: Accepted publickey for core from 139.178.89.65 port 48718 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:46.716105 sshd[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:46.724569 systemd-logind[1945]: New session 27 of user core. Aug 5 22:15:46.736983 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 5 22:15:47.342279 sshd[6055]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:47.348956 systemd[1]: sshd@26-172.31.23.76:22-139.178.89.65:48718.service: Deactivated successfully. Aug 5 22:15:47.349317 systemd-logind[1945]: Session 27 logged out. Waiting for processes to exit. Aug 5 22:15:47.352710 systemd[1]: session-27.scope: Deactivated successfully. 
Aug 5 22:15:47.353947 systemd-logind[1945]: Removed session 27. Aug 5 22:15:49.068095 systemd[1]: run-containerd-runc-k8s.io-23db9d1cf97f85264d414bddce885234554e2f6c7e04bd9202901978358f8740-runc.RsKf6V.mount: Deactivated successfully. Aug 5 22:15:52.379578 systemd[1]: Started sshd@27-172.31.23.76:22-139.178.89.65:53134.service - OpenSSH per-connection server daemon (139.178.89.65:53134). Aug 5 22:15:52.568697 sshd[6095]: Accepted publickey for core from 139.178.89.65 port 53134 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:52.570533 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:52.577228 systemd-logind[1945]: New session 28 of user core. Aug 5 22:15:52.582246 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 22:15:52.887277 sshd[6095]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:52.892524 systemd-logind[1945]: Session 28 logged out. Waiting for processes to exit. Aug 5 22:15:52.893909 systemd[1]: sshd@27-172.31.23.76:22-139.178.89.65:53134.service: Deactivated successfully. Aug 5 22:15:52.896008 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 22:15:52.897164 systemd-logind[1945]: Removed session 28. Aug 5 22:15:57.934205 systemd[1]: Started sshd@28-172.31.23.76:22-139.178.89.65:53144.service - OpenSSH per-connection server daemon (139.178.89.65:53144). Aug 5 22:15:58.111399 sshd[6128]: Accepted publickey for core from 139.178.89.65 port 53144 ssh2: RSA SHA256:SP54icD4w17r3+qK9knkReOo23qWXud3XbiRe2zAwCs Aug 5 22:15:58.113791 sshd[6128]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:58.123104 systemd-logind[1945]: New session 29 of user core. Aug 5 22:15:58.131311 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 5 22:15:58.492649 sshd[6128]: pam_unix(sshd:session): session closed for user core Aug 5 22:15:58.497644 systemd[1]: sshd@28-172.31.23.76:22-139.178.89.65:53144.service: Deactivated successfully. Aug 5 22:15:58.500292 systemd[1]: session-29.scope: Deactivated successfully. Aug 5 22:15:58.502681 systemd-logind[1945]: Session 29 logged out. Waiting for processes to exit. Aug 5 22:15:58.504126 systemd-logind[1945]: Removed session 29. Aug 5 22:16:08.040672 systemd[1]: run-containerd-runc-k8s.io-d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d-runc.enp3Br.mount: Deactivated successfully. Aug 5 22:16:12.915230 systemd[1]: cri-containerd-295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5.scope: Deactivated successfully. Aug 5 22:16:12.915597 systemd[1]: cri-containerd-295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5.scope: Consumed 3.571s CPU time, 24.0M memory peak, 0B memory swap peak. Aug 5 22:16:13.009778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5-rootfs.mount: Deactivated successfully. 
Aug 5 22:16:13.017723 containerd[1968]: time="2024-08-05T22:16:13.005408580Z" level=info msg="shim disconnected" id=295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5 namespace=k8s.io
Aug 5 22:16:13.017723 containerd[1968]: time="2024-08-05T22:16:13.017718900Z" level=warning msg="cleaning up after shim disconnected" id=295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5 namespace=k8s.io
Aug 5 22:16:13.018425 containerd[1968]: time="2024-08-05T22:16:13.017743466Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:13.185565 kubelet[3377]: I0805 22:16:13.185428 3377 scope.go:117] "RemoveContainer" containerID="295920e2631181431aa296b3c2d25bbe6cb53c27ac0f3a7ef6844f92133e47a5"
Aug 5 22:16:13.218704 containerd[1968]: time="2024-08-05T22:16:13.218647839Z" level=info msg="CreateContainer within sandbox \"02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Aug 5 22:16:13.250947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount583312605.mount: Deactivated successfully.
Aug 5 22:16:13.258280 containerd[1968]: time="2024-08-05T22:16:13.258231464Z" level=info msg="CreateContainer within sandbox \"02c22eda51579900bd96ac3760bf1658df0e6b54568d9f29cf2cf838413cf8de\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6c45b86fd37db4b1444209a756ac2e2d2e684f58b3b140eb74dae08171dd9f00\""
Aug 5 22:16:13.258968 containerd[1968]: time="2024-08-05T22:16:13.258924110Z" level=info msg="StartContainer for \"6c45b86fd37db4b1444209a756ac2e2d2e684f58b3b140eb74dae08171dd9f00\""
Aug 5 22:16:13.316572 systemd[1]: cri-containerd-4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c.scope: Deactivated successfully.
Aug 5 22:16:13.317180 systemd[1]: cri-containerd-4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c.scope: Consumed 6.725s CPU time.
Aug 5 22:16:13.339297 systemd[1]: Started cri-containerd-6c45b86fd37db4b1444209a756ac2e2d2e684f58b3b140eb74dae08171dd9f00.scope - libcontainer container 6c45b86fd37db4b1444209a756ac2e2d2e684f58b3b140eb74dae08171dd9f00.
Aug 5 22:16:13.372660 containerd[1968]: time="2024-08-05T22:16:13.372396158Z" level=info msg="shim disconnected" id=4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c namespace=k8s.io
Aug 5 22:16:13.372660 containerd[1968]: time="2024-08-05T22:16:13.372466440Z" level=warning msg="cleaning up after shim disconnected" id=4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c namespace=k8s.io
Aug 5 22:16:13.372660 containerd[1968]: time="2024-08-05T22:16:13.372480765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:13.446830 containerd[1968]: time="2024-08-05T22:16:13.446615095Z" level=info msg="StartContainer for \"6c45b86fd37db4b1444209a756ac2e2d2e684f58b3b140eb74dae08171dd9f00\" returns successfully"
Aug 5 22:16:14.015670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c-rootfs.mount: Deactivated successfully.
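These entries trace the kubelet's container-restart path for kube-controller-manager: the old cri-containerd scope is deactivated, its shim disconnects and is cleaned up, the kubelet logs RemoveContainer for the dead ID, and CreateContainer/StartContainer bring up Attempt:1 inside the same sandbox. As an illustrative sketch only (not part of the capture), the Go program below extracts containerd's structured time/level/msg/id fields from such journal lines and surfaces the shim-disconnect and warning events; the field names are taken from the excerpt, and the tolerance for escaped quotes inside msg is an assumption about how journald renders them.

```go
// Pull key=value / key="quoted value" fields out of containerd journal lines
// and print shim-disconnect and warning events. Reads lines on stdin.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches either key="value with \"escapes\"" or key=bareword.
var kv = regexp.MustCompile(`(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))`)

// parseFields turns `level=info msg="shim disconnected" id=2959...` into a map.
func parseFields(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kv.FindAllStringSubmatch(line, -1) {
		val := m[2]
		if val == "" {
			val = m[3]
		}
		out[m[1]] = val
	}
	return out
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long journal lines
	for sc.Scan() {
		f := parseFields(sc.Text())
		switch {
		case f["msg"] == "shim disconnected":
			fmt.Printf("%s shim gone for %.12s\n", f["time"], f["id"])
		case f["level"] == "warning":
			fmt.Printf("%s WARN: %s\n", f["time"], f["msg"])
		}
	}
}
```

Against the lines above, this would flag the shim disconnects for 295920e26311… and 4c64638f88a1…, matching the two RemoveContainer/CreateContainer cycles the kubelet performs next.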
Aug 5 22:16:14.178609 kubelet[3377]: I0805 22:16:14.178574 3377 scope.go:117] "RemoveContainer" containerID="4c64638f88a1529b02a943c50b10b93aaac7c48d182c8d4a2414adb48af8255c"
Aug 5 22:16:14.185896 containerd[1968]: time="2024-08-05T22:16:14.185843237Z" level=info msg="CreateContainer within sandbox \"2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Aug 5 22:16:14.228380 containerd[1968]: time="2024-08-05T22:16:14.227909816Z" level=info msg="CreateContainer within sandbox \"2ba300cca9c3b727f32e12bdf329b896438a3660d35822ebb2d4dab28fed5431\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"008d80f82974e01e94420ec2df237a97867b2c98d07815f6f2e8cdd3e7c16c3a\""
Aug 5 22:16:14.228553 containerd[1968]: time="2024-08-05T22:16:14.228517387Z" level=info msg="StartContainer for \"008d80f82974e01e94420ec2df237a97867b2c98d07815f6f2e8cdd3e7c16c3a\""
Aug 5 22:16:14.332807 systemd[1]: Started cri-containerd-008d80f82974e01e94420ec2df237a97867b2c98d07815f6f2e8cdd3e7c16c3a.scope - libcontainer container 008d80f82974e01e94420ec2df237a97867b2c98d07815f6f2e8cdd3e7c16c3a.
Aug 5 22:16:14.502206 containerd[1968]: time="2024-08-05T22:16:14.502070923Z" level=info msg="StartContainer for \"008d80f82974e01e94420ec2df237a97867b2c98d07815f6f2e8cdd3e7c16c3a\" returns successfully"
Aug 5 22:16:17.094935 kubelet[3377]: E0805 22:16:17.094879 3377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:16:17.950840 systemd[1]: cri-containerd-91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502.scope: Deactivated successfully.
Aug 5 22:16:17.951552 systemd[1]: cri-containerd-91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502.scope: Consumed 1.389s CPU time, 16.2M memory peak, 0B memory swap peak.
Aug 5 22:16:17.985316 containerd[1968]: time="2024-08-05T22:16:17.985230966Z" level=info msg="shim disconnected" id=91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502 namespace=k8s.io
Aug 5 22:16:17.985316 containerd[1968]: time="2024-08-05T22:16:17.985311251Z" level=warning msg="cleaning up after shim disconnected" id=91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502 namespace=k8s.io
Aug 5 22:16:17.985316 containerd[1968]: time="2024-08-05T22:16:17.985323386Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 5 22:16:17.987473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502-rootfs.mount: Deactivated successfully.
Aug 5 22:16:18.199700 kubelet[3377]: I0805 22:16:18.199673 3377 scope.go:117] "RemoveContainer" containerID="91d7f87e6a72a46617141fa28aa28147119fd36632012f862b07a4b307692502"
Aug 5 22:16:18.203501 containerd[1968]: time="2024-08-05T22:16:18.203347761Z" level=info msg="CreateContainer within sandbox \"ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Aug 5 22:16:18.263955 containerd[1968]: time="2024-08-05T22:16:18.263695468Z" level=info msg="CreateContainer within sandbox \"ada22606e0b1f38c444f1530a5f555cd44ba1a2913c286af3605d71fb28f5503\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f35a5a683375fdc79aae87c5e898102da9d2f23b671b269a737f0d0a74014613\""
Aug 5 22:16:18.272277 containerd[1968]: time="2024-08-05T22:16:18.272231292Z" level=info msg="StartContainer for \"f35a5a683375fdc79aae87c5e898102da9d2f23b671b269a737f0d0a74014613\""
Aug 5 22:16:18.424409 systemd[1]: Started cri-containerd-f35a5a683375fdc79aae87c5e898102da9d2f23b671b269a737f0d0a74014613.scope - libcontainer container f35a5a683375fdc79aae87c5e898102da9d2f23b671b269a737f0d0a74014613.
Aug 5 22:16:18.525152 containerd[1968]: time="2024-08-05T22:16:18.524902759Z" level=info msg="StartContainer for \"f35a5a683375fdc79aae87c5e898102da9d2f23b671b269a737f0d0a74014613\" returns successfully"
Aug 5 22:16:27.109229 kubelet[3377]: E0805 22:16:27.109054 3377 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.76:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-76?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Aug 5 22:16:30.477235 systemd[1]: run-containerd-runc-k8s.io-d22ab1df858b6054a751e6826bf446e608761f400d388b75cf789ac7f968a95d-runc.KCkjhe.mount: Deactivated successfully.
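The two "Failed to update lease" errors bracket these restarts: the kubelet renews its node Lease by PUTting it to the API server on 172.31.23.76:6443 with a 10s request timeout, and while the control-plane containers were being restarted the server did not answer in time, so Go's HTTP client reported "context deadline exceeded" / "Client.Timeout exceeded while awaiting headers". The sketch below only illustrates that request shape; it is not the kubelet's lease controller (which lives in client-go, per the controller.go:195 reference), the URL is copied from the log, and the empty JSON body, default client, and missing TLS/auth setup are placeholders.

```go
// Illustrative sketch of a lease-renewal PUT with a 10s client-side timeout,
// the request shape behind the "Failed to update lease" errors above.
package main

import (
	"context"
	"fmt"
	"net/http"
	"strings"
	"time"
)

func renewLease(ctx context.Context, baseURL, node string) error {
	ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
	defer cancel()

	url := fmt.Sprintf(
		"%s/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/%s?timeout=10s",
		baseURL, node)
	body := strings.NewReader(`{}`) // placeholder; a real update sends the Lease with a fresh renewTime
	req, err := http.NewRequestWithContext(ctx, http.MethodPut, url, body)
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req) // TLS and auth setup omitted in this sketch
	if err != nil {
		// A slow or unreachable API server surfaces here as a context/timeout error.
		return fmt.Errorf("failed to update lease: %w", err)
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	if err := renewLease(context.Background(), "https://172.31.23.76:6443", "ip-172-31-23-76"); err != nil {
		fmt.Println(err)
	}
}
```

Once kube-controller-manager, tigera-operator, and kube-scheduler are running again (each at Attempt:1), no further lease errors appear in this capture.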