Jun 25 18:43:51.187614 kernel: Linux version 6.6.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Tue Jun 25 17:21:28 -00 2024 Jun 25 18:43:51.187656 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:51.187673 kernel: BIOS-provided physical RAM map: Jun 25 18:43:51.187685 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jun 25 18:43:51.187697 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jun 25 18:43:51.187709 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jun 25 18:43:51.187727 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jun 25 18:43:51.187740 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jun 25 18:43:51.187752 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jun 25 18:43:51.187764 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jun 25 18:43:51.187777 kernel: NX (Execute Disable) protection: active Jun 25 18:43:51.187788 kernel: APIC: Static calls initialized Jun 25 18:43:51.187799 kernel: SMBIOS 2.7 present. Jun 25 18:43:51.187812 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jun 25 18:43:51.187831 kernel: Hypervisor detected: KVM Jun 25 18:43:51.187845 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jun 25 18:43:51.187860 kernel: kvm-clock: using sched offset of 6433266129 cycles Jun 25 18:43:51.187875 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jun 25 18:43:51.187889 kernel: tsc: Detected 2499.998 MHz processor Jun 25 18:43:51.187903 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jun 25 18:43:51.187918 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jun 25 18:43:51.187935 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jun 25 18:43:51.187950 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jun 25 18:43:51.187964 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jun 25 18:43:51.187978 kernel: Using GB pages for direct mapping Jun 25 18:43:51.187992 kernel: ACPI: Early table checksum verification disabled Jun 25 18:43:51.188006 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jun 25 18:43:51.188020 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jun 25 18:43:51.188034 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 18:43:51.188049 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jun 25 18:43:51.188065 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jun 25 18:43:51.188079 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 18:43:51.188094 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 18:43:51.188130 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jun 25 18:43:51.188143 
kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 18:43:51.188155 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jun 25 18:43:51.188167 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jun 25 18:43:51.188227 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jun 25 18:43:51.188246 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jun 25 18:43:51.188371 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jun 25 18:43:51.188507 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jun 25 18:43:51.188524 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jun 25 18:43:51.188539 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jun 25 18:43:51.188555 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jun 25 18:43:51.188574 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jun 25 18:43:51.188590 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jun 25 18:43:51.188605 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jun 25 18:43:51.188621 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jun 25 18:43:51.188637 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jun 25 18:43:51.188652 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jun 25 18:43:51.188717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jun 25 18:43:51.188733 kernel: NUMA: Initialized distance table, cnt=1 Jun 25 18:43:51.188748 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jun 25 18:43:51.188768 kernel: Zone ranges: Jun 25 18:43:51.188783 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jun 25 18:43:51.188799 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jun 25 18:43:51.188814 kernel: Normal empty Jun 25 18:43:51.188830 kernel: Movable zone start for each node Jun 25 18:43:51.188845 kernel: Early memory node ranges Jun 25 18:43:51.188861 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jun 25 18:43:51.188877 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jun 25 18:43:51.188892 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jun 25 18:43:51.188911 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jun 25 18:43:51.188926 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jun 25 18:43:51.188942 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jun 25 18:43:51.188957 kernel: ACPI: PM-Timer IO Port: 0xb008 Jun 25 18:43:51.188973 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jun 25 18:43:51.188989 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jun 25 18:43:51.189004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jun 25 18:43:51.189020 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jun 25 18:43:51.189035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jun 25 18:43:51.189054 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jun 25 18:43:51.189069 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jun 25 18:43:51.189085 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jun 25 18:43:51.189112 kernel: TSC deadline timer available Jun 25 18:43:51.189127 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jun 25 18:43:51.189142 
kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jun 25 18:43:51.189158 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jun 25 18:43:51.189173 kernel: Booting paravirtualized kernel on KVM Jun 25 18:43:51.189189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jun 25 18:43:51.189205 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jun 25 18:43:51.189223 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jun 25 18:43:51.189239 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jun 25 18:43:51.189254 kernel: pcpu-alloc: [0] 0 1 Jun 25 18:43:51.189269 kernel: kvm-guest: PV spinlocks enabled Jun 25 18:43:51.189282 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jun 25 18:43:51.189299 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:51.189315 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 18:43:51.189330 kernel: random: crng init done Jun 25 18:43:51.189348 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 18:43:51.189363 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jun 25 18:43:51.189444 kernel: Fallback order for Node 0: 0 Jun 25 18:43:51.189461 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jun 25 18:43:51.189476 kernel: Policy zone: DMA32 Jun 25 18:43:51.189492 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 18:43:51.189509 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2302K rwdata, 22636K rodata, 49384K init, 1964K bss, 131296K reserved, 0K cma-reserved) Jun 25 18:43:51.189525 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 18:43:51.189544 kernel: Kernel/User page tables isolation: enabled Jun 25 18:43:51.189557 kernel: ftrace: allocating 37650 entries in 148 pages Jun 25 18:43:51.189571 kernel: ftrace: allocated 148 pages with 3 groups Jun 25 18:43:51.189586 kernel: Dynamic Preempt: voluntary Jun 25 18:43:51.189656 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 18:43:51.189685 kernel: rcu: RCU event tracing is enabled. Jun 25 18:43:51.189701 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 18:43:51.189717 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 18:43:51.189732 kernel: Rude variant of Tasks RCU enabled. Jun 25 18:43:51.190203 kernel: Tracing variant of Tasks RCU enabled. Jun 25 18:43:51.190265 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jun 25 18:43:51.190283 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 18:43:51.190298 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jun 25 18:43:51.190314 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jun 25 18:43:51.190330 kernel: Console: colour VGA+ 80x25 Jun 25 18:43:51.190346 kernel: printk: console [ttyS0] enabled Jun 25 18:43:51.190361 kernel: ACPI: Core revision 20230628 Jun 25 18:43:51.190377 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jun 25 18:43:51.190393 kernel: APIC: Switch to symmetric I/O mode setup Jun 25 18:43:51.190412 kernel: x2apic enabled Jun 25 18:43:51.190428 kernel: APIC: Switched APIC routing to: physical x2apic Jun 25 18:43:51.190455 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 18:43:51.190529 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jun 25 18:43:51.190547 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jun 25 18:43:51.190563 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jun 25 18:43:51.190580 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jun 25 18:43:51.190596 kernel: Spectre V2 : Mitigation: Retpolines Jun 25 18:43:51.190659 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jun 25 18:43:51.190675 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jun 25 18:43:51.190692 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jun 25 18:43:51.190709 kernel: RETBleed: Vulnerable Jun 25 18:43:51.190730 kernel: Speculative Store Bypass: Vulnerable Jun 25 18:43:51.190746 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:43:51.190762 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jun 25 18:43:51.190779 kernel: GDS: Unknown: Dependent on hypervisor status Jun 25 18:43:51.190795 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jun 25 18:43:51.190812 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jun 25 18:43:51.190832 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jun 25 18:43:51.190848 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jun 25 18:43:51.190864 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jun 25 18:43:51.190881 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jun 25 18:43:51.190897 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jun 25 18:43:51.190914 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jun 25 18:43:51.190937 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jun 25 18:43:51.190953 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jun 25 18:43:51.190970 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jun 25 18:43:51.190986 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jun 25 18:43:51.191002 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jun 25 18:43:51.191021 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jun 25 18:43:51.191037 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jun 25 18:43:51.191053 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jun 25 18:43:51.191069 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
Jun 25 18:43:51.191085 kernel: Freeing SMP alternatives memory: 32K Jun 25 18:43:51.191113 kernel: pid_max: default: 32768 minimum: 301 Jun 25 18:43:51.191126 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jun 25 18:43:51.191141 kernel: SELinux: Initializing. Jun 25 18:43:51.191157 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:43:51.191173 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jun 25 18:43:51.191188 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jun 25 18:43:51.191204 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:51.191281 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:51.191299 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jun 25 18:43:51.191314 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jun 25 18:43:51.191328 kernel: signal: max sigframe size: 3632 Jun 25 18:43:51.191389 kernel: rcu: Hierarchical SRCU implementation. Jun 25 18:43:51.191408 kernel: rcu: Max phase no-delay instances is 400. Jun 25 18:43:51.191424 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jun 25 18:43:51.191441 kernel: smp: Bringing up secondary CPUs ... Jun 25 18:43:51.191457 kernel: smpboot: x86: Booting SMP configuration: Jun 25 18:43:51.191477 kernel: .... node #0, CPUs: #1 Jun 25 18:43:51.191494 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jun 25 18:43:51.191511 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jun 25 18:43:51.191528 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 18:43:51.191543 kernel: smpboot: Max logical packages: 1 Jun 25 18:43:51.191560 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jun 25 18:43:51.191576 kernel: devtmpfs: initialized Jun 25 18:43:51.191592 kernel: x86/mm: Memory block size: 128MB Jun 25 18:43:51.191612 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 18:43:51.191629 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 18:43:51.191644 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 18:43:51.191661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 18:43:51.191677 kernel: audit: initializing netlink subsys (disabled) Jun 25 18:43:51.191693 kernel: audit: type=2000 audit(1719341029.652:1): state=initialized audit_enabled=0 res=1 Jun 25 18:43:51.191709 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 18:43:51.191726 kernel: thermal_sys: Registered thermal governor 'user_space' Jun 25 18:43:51.191742 kernel: cpuidle: using governor menu Jun 25 18:43:51.191762 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 18:43:51.191778 kernel: dca service started, version 1.12.1 Jun 25 18:43:51.191794 kernel: PCI: Using configuration type 1 for base access Jun 25 18:43:51.191807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jun 25 18:43:51.191821 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 18:43:51.191834 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 18:43:51.191847 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 18:43:51.191860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 18:43:51.191873 kernel: ACPI: Added _OSI(Module Device) Jun 25 18:43:51.191890 kernel: ACPI: Added _OSI(Processor Device) Jun 25 18:43:51.191902 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 18:43:51.191920 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 18:43:51.191940 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jun 25 18:43:51.191959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jun 25 18:43:51.191978 kernel: ACPI: Interpreter enabled Jun 25 18:43:51.191996 kernel: ACPI: PM: (supports S0 S5) Jun 25 18:43:51.192011 kernel: ACPI: Using IOAPIC for interrupt routing Jun 25 18:43:51.192026 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jun 25 18:43:51.192045 kernel: PCI: Using E820 reservations for host bridge windows Jun 25 18:43:51.192061 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jun 25 18:43:51.192077 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jun 25 18:43:51.192425 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jun 25 18:43:51.192577 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jun 25 18:43:51.192712 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jun 25 18:43:51.192732 kernel: acpiphp: Slot [3] registered Jun 25 18:43:51.192752 kernel: acpiphp: Slot [4] registered Jun 25 18:43:51.192768 kernel: acpiphp: Slot [5] registered Jun 25 18:43:51.192784 kernel: acpiphp: Slot [6] registered Jun 25 18:43:51.192800 kernel: acpiphp: Slot [7] registered Jun 25 18:43:51.192815 kernel: acpiphp: Slot [8] registered Jun 25 18:43:51.192831 kernel: acpiphp: Slot [9] registered Jun 25 18:43:51.192847 kernel: acpiphp: Slot [10] registered Jun 25 18:43:51.192863 kernel: acpiphp: Slot [11] registered Jun 25 18:43:51.192879 kernel: acpiphp: Slot [12] registered Jun 25 18:43:51.192894 kernel: acpiphp: Slot [13] registered Jun 25 18:43:51.192913 kernel: acpiphp: Slot [14] registered Jun 25 18:43:51.192929 kernel: acpiphp: Slot [15] registered Jun 25 18:43:51.192945 kernel: acpiphp: Slot [16] registered Jun 25 18:43:51.192960 kernel: acpiphp: Slot [17] registered Jun 25 18:43:51.192976 kernel: acpiphp: Slot [18] registered Jun 25 18:43:51.192992 kernel: acpiphp: Slot [19] registered Jun 25 18:43:51.193008 kernel: acpiphp: Slot [20] registered Jun 25 18:43:51.193024 kernel: acpiphp: Slot [21] registered Jun 25 18:43:51.193039 kernel: acpiphp: Slot [22] registered Jun 25 18:43:51.193058 kernel: acpiphp: Slot [23] registered Jun 25 18:43:51.193073 kernel: acpiphp: Slot [24] registered Jun 25 18:43:51.193585 kernel: acpiphp: Slot [25] registered Jun 25 18:43:51.193604 kernel: acpiphp: Slot [26] registered Jun 25 18:43:51.193620 kernel: acpiphp: Slot [27] registered Jun 25 18:43:51.193636 kernel: acpiphp: Slot [28] registered Jun 25 18:43:51.193652 kernel: acpiphp: Slot [29] registered Jun 25 18:43:51.193668 kernel: acpiphp: Slot [30] registered Jun 25 18:43:51.193683 kernel: acpiphp: Slot [31] registered Jun 25 18:43:51.193699 kernel: PCI host bridge to bus 0000:00 
Jun 25 18:43:51.194004 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jun 25 18:43:51.194213 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jun 25 18:43:51.194343 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jun 25 18:43:51.194465 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jun 25 18:43:51.194587 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jun 25 18:43:51.194752 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jun 25 18:43:51.194898 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jun 25 18:43:51.195061 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jun 25 18:43:51.195213 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jun 25 18:43:51.195349 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jun 25 18:43:51.195483 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jun 25 18:43:51.195700 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jun 25 18:43:51.195839 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jun 25 18:43:51.196029 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jun 25 18:43:51.196181 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jun 25 18:43:51.196318 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jun 25 18:43:51.196461 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jun 25 18:43:51.196690 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jun 25 18:43:51.196830 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jun 25 18:43:51.196959 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jun 25 18:43:51.197127 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 18:43:51.197263 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jun 25 18:43:51.197403 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 18:43:51.197622 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jun 25 18:43:51.197644 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jun 25 18:43:51.197662 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jun 25 18:43:51.197679 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jun 25 18:43:51.197700 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jun 25 18:43:51.197717 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jun 25 18:43:51.197804 kernel: iommu: Default domain type: Translated Jun 25 18:43:51.197874 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jun 25 18:43:51.197892 kernel: PCI: Using ACPI for IRQ routing Jun 25 18:43:51.197908 kernel: PCI: pci_cache_line_size set to 64 bytes Jun 25 18:43:51.197926 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jun 25 18:43:51.197942 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jun 25 18:43:51.198328 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jun 25 18:43:51.198594 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jun 25 18:43:51.200032 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jun 25 18:43:51.200063 kernel: vgaarb: loaded Jun 25 18:43:51.200080 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jun 25 18:43:51.200096 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jun 25 18:43:51.200130 kernel: clocksource: Switched 
to clocksource kvm-clock Jun 25 18:43:51.200145 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 18:43:51.200160 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 18:43:51.200245 kernel: pnp: PnP ACPI init Jun 25 18:43:51.200263 kernel: pnp: PnP ACPI: found 5 devices Jun 25 18:43:51.200281 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jun 25 18:43:51.200296 kernel: NET: Registered PF_INET protocol family Jun 25 18:43:51.200310 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 18:43:51.200326 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jun 25 18:43:51.200342 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 18:43:51.200359 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jun 25 18:43:51.200376 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jun 25 18:43:51.200396 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jun 25 18:43:51.200413 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:43:51.200429 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jun 25 18:43:51.200446 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 18:43:51.200462 kernel: NET: Registered PF_XDP protocol family Jun 25 18:43:51.200615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jun 25 18:43:51.200763 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jun 25 18:43:51.200904 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jun 25 18:43:51.201153 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jun 25 18:43:51.201301 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jun 25 18:43:51.201400 kernel: PCI: CLS 0 bytes, default 64 Jun 25 18:43:51.201416 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jun 25 18:43:51.201480 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jun 25 18:43:51.201497 kernel: clocksource: Switched to clocksource tsc Jun 25 18:43:51.201511 kernel: Initialise system trusted keyrings Jun 25 18:43:51.201526 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jun 25 18:43:51.201540 kernel: Key type asymmetric registered Jun 25 18:43:51.201561 kernel: Asymmetric key parser 'x509' registered Jun 25 18:43:51.201574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jun 25 18:43:51.201590 kernel: io scheduler mq-deadline registered Jun 25 18:43:51.201605 kernel: io scheduler kyber registered Jun 25 18:43:51.201618 kernel: io scheduler bfq registered Jun 25 18:43:51.201632 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jun 25 18:43:51.201646 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 18:43:51.201660 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jun 25 18:43:51.201674 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jun 25 18:43:51.201692 kernel: i8042: Warning: Keylock active Jun 25 18:43:51.201706 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jun 25 18:43:51.201721 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jun 25 18:43:51.201956 kernel: rtc_cmos 00:00: RTC can wake from S4 Jun 25 18:43:51.202088 kernel: rtc_cmos 00:00: registered as rtc0 Jun 25 
18:43:51.202335 kernel: rtc_cmos 00:00: setting system clock to 2024-06-25T18:43:50 UTC (1719341030) Jun 25 18:43:51.202459 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Jun 25 18:43:51.202483 kernel: intel_pstate: CPU model not supported Jun 25 18:43:51.202499 kernel: NET: Registered PF_INET6 protocol family Jun 25 18:43:51.202514 kernel: Segment Routing with IPv6 Jun 25 18:43:51.202529 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 18:43:51.202544 kernel: NET: Registered PF_PACKET protocol family Jun 25 18:43:51.202559 kernel: Key type dns_resolver registered Jun 25 18:43:51.202573 kernel: IPI shorthand broadcast: enabled Jun 25 18:43:51.202588 kernel: sched_clock: Marking stable (787003580, 286291461)->(1186439572, -113144531) Jun 25 18:43:51.202651 kernel: registered taskstats version 1 Jun 25 18:43:51.202668 kernel: Loading compiled-in X.509 certificates Jun 25 18:43:51.202784 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.35-flatcar: 60204e9db5f484c670a1c92aec37e9a0c4d3ae90' Jun 25 18:43:51.202803 kernel: Key type .fscrypt registered Jun 25 18:43:51.202817 kernel: Key type fscrypt-provisioning registered Jun 25 18:43:51.202832 kernel: ima: No TPM chip found, activating TPM-bypass! Jun 25 18:43:51.202847 kernel: ima: Allocated hash algorithm: sha1 Jun 25 18:43:51.202862 kernel: ima: No architecture policies found Jun 25 18:43:51.202877 kernel: clk: Disabling unused clocks Jun 25 18:43:51.202891 kernel: Freeing unused kernel image (initmem) memory: 49384K Jun 25 18:43:51.202910 kernel: Write protecting the kernel read-only data: 36864k Jun 25 18:43:51.202931 kernel: Freeing unused kernel image (rodata/data gap) memory: 1940K Jun 25 18:43:51.202946 kernel: Run /init as init process Jun 25 18:43:51.202961 kernel: with arguments: Jun 25 18:43:51.202976 kernel: /init Jun 25 18:43:51.202990 kernel: with environment: Jun 25 18:43:51.203004 kernel: HOME=/ Jun 25 18:43:51.203018 kernel: TERM=linux Jun 25 18:43:51.203032 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 18:43:51.203050 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:43:51.203072 systemd[1]: Detected virtualization amazon. Jun 25 18:43:51.203332 systemd[1]: Detected architecture x86-64. Jun 25 18:43:51.203351 systemd[1]: Running in initrd. Jun 25 18:43:51.203367 systemd[1]: No hostname configured, using default hostname. Jun 25 18:43:51.203386 systemd[1]: Hostname set to . Jun 25 18:43:51.203402 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:43:51.203418 systemd[1]: Queued start job for default target initrd.target. Jun 25 18:43:51.203435 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:51.203451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:51.203469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jun 25 18:43:51.203485 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:43:51.203501 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... 
Jun 25 18:43:51.203520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 25 18:43:51.203539 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 25 18:43:51.203556 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 25 18:43:51.203572 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:51.203589 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:51.203606 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:43:51.203622 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:43:51.203641 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jun 25 18:43:51.203657 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:43:51.203673 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:43:51.203690 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:43:51.203706 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:43:51.203722 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 18:43:51.203738 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:43:51.203755 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:51.203771 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:51.203790 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:51.203806 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:43:51.203822 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 25 18:43:51.203839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:43:51.203855 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 18:43:51.203871 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 18:43:51.203888 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:43:51.203907 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:43:51.203923 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:51.203940 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:51.203956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:51.203972 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 18:43:51.204139 systemd-journald[178]: Collecting audit messages is disabled. Jun 25 18:43:51.204259 systemd-journald[178]: Journal started Jun 25 18:43:51.204305 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2b80842dd2d4dacc61c88699365320) is 4.8M, max 38.6M, 33.8M free. Jun 25 18:43:51.209695 systemd-modules-load[179]: Inserted module 'overlay' Jun 25 18:43:51.376602 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:43:51.376648 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 18:43:51.376676 kernel: Bridge firewalling registered Jun 25 18:43:51.376698 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:43:51.286381 systemd-modules-load[179]: Inserted module 'br_netfilter' Jun 25 18:43:51.374202 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:51.382415 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:51.398015 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:51.403580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:43:51.416704 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:43:51.418232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:43:51.437291 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:43:51.448793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:51.460379 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:51.476346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:43:51.480233 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:51.484010 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:51.493447 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 18:43:51.520234 dracut-cmdline[214]: dracut-dracut-053 Jun 25 18:43:51.526390 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=4483672da8ac4c95f5ee13a489103440a13110ce1f63977ab5a6a33d0c137bf8 Jun 25 18:43:51.547579 systemd-resolved[210]: Positive Trust Anchors: Jun 25 18:43:51.547604 systemd-resolved[210]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:43:51.547654 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:43:51.564823 systemd-resolved[210]: Defaulting to hostname 'linux'. Jun 25 18:43:51.568049 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:43:51.569781 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:51.660140 kernel: SCSI subsystem initialized Jun 25 18:43:51.676141 kernel: Loading iSCSI transport class v2.0-870. Jun 25 18:43:51.703608 kernel: iscsi: registered transport (tcp) Jun 25 18:43:51.742310 kernel: iscsi: registered transport (qla4xxx) Jun 25 18:43:51.742399 kernel: QLogic iSCSI HBA Driver Jun 25 18:43:51.794652 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jun 25 18:43:51.803522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 18:43:51.859778 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 18:43:51.859866 kernel: device-mapper: uevent: version 1.0.3 Jun 25 18:43:51.859887 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jun 25 18:43:51.912205 kernel: raid6: avx512x4 gen() 15426 MB/s Jun 25 18:43:51.929153 kernel: raid6: avx512x2 gen() 15307 MB/s Jun 25 18:43:51.946145 kernel: raid6: avx512x1 gen() 15021 MB/s Jun 25 18:43:51.963209 kernel: raid6: avx2x4 gen() 12011 MB/s Jun 25 18:43:51.980143 kernel: raid6: avx2x2 gen() 15862 MB/s Jun 25 18:43:51.997142 kernel: raid6: avx2x1 gen() 10924 MB/s Jun 25 18:43:51.997245 kernel: raid6: using algorithm avx2x2 gen() 15862 MB/s Jun 25 18:43:52.015129 kernel: raid6: .... xor() 13422 MB/s, rmw enabled Jun 25 18:43:52.015216 kernel: raid6: using avx512x2 recovery algorithm Jun 25 18:43:52.045135 kernel: xor: automatically using best checksumming function avx Jun 25 18:43:52.298131 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 25 18:43:52.311796 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:52.319334 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:52.349338 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jun 25 18:43:52.354827 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:52.364760 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 18:43:52.381432 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jun 25 18:43:52.450998 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:43:52.459324 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:43:52.537843 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:52.551588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 18:43:52.623343 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:52.625990 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:52.629007 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:52.630480 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:43:52.654200 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 18:43:52.709993 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:43:52.751159 kernel: cryptd: max_cpu_qlen set to 1000 Jun 25 18:43:52.778508 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 18:43:52.812156 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 18:43:52.812465 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 18:43:52.812645 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jun 25 18:43:52.812668 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jun 25 18:43:52.814348 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 18:43:52.814526 kernel: AVX2 version of gcm_enc/dec engaged. 
Jun 25 18:43:52.814556 kernel: AES CTR mode by8 optimization enabled Jun 25 18:43:52.814577 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:30:59:da:ec:eb Jun 25 18:43:52.814742 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 18:43:52.814763 kernel: GPT:9289727 != 16777215 Jun 25 18:43:52.814783 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 18:43:52.814803 kernel: GPT:9289727 != 16777215 Jun 25 18:43:52.814821 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:43:52.814839 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:43:52.788140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:52.788314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:52.790719 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:52.797790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:52.798034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:52.801018 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:52.808875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:43:52.813925 (udev-worker)[444]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:43:52.922130 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (445) Jun 25 18:43:52.931136 kernel: BTRFS: device fsid 329ce27e-ea89-47b5-8f8b-f762c8412eb0 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (444) Jun 25 18:43:53.018248 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 18:43:53.021323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:53.041949 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 18:43:53.042189 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 18:43:53.056546 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 18:43:53.072046 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:43:53.088667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 18:43:53.091633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 18:43:53.107305 disk-uuid[615]: Primary Header is updated. Jun 25 18:43:53.107305 disk-uuid[615]: Secondary Entries is updated. Jun 25 18:43:53.107305 disk-uuid[615]: Secondary Header is updated. Jun 25 18:43:53.114154 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:43:53.123960 kernel: GPT:disk_guids don't match. Jun 25 18:43:53.124120 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 18:43:53.124146 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:43:53.125904 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:53.134137 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:43:54.133287 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 18:43:54.134199 disk-uuid[616]: The operation has completed successfully. 
Jun 25 18:43:54.285733 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 18:43:54.285858 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 18:43:54.315332 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 18:43:54.322003 sh[967]: Success Jun 25 18:43:54.337127 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jun 25 18:43:54.439317 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 18:43:54.447237 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 18:43:54.450772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 18:43:54.485816 kernel: BTRFS info (device dm-0): first mount of filesystem 329ce27e-ea89-47b5-8f8b-f762c8412eb0 Jun 25 18:43:54.485895 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:54.485915 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 18:43:54.485936 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 18:43:54.486391 kernel: BTRFS info (device dm-0): using free space tree Jun 25 18:43:54.542132 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 18:43:54.544176 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 18:43:54.546644 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 18:43:54.554331 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 18:43:54.556610 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 25 18:43:54.579437 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:54.579506 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:54.579534 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:43:54.589713 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:43:54.604181 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:54.604330 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 18:43:54.616465 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 18:43:54.623375 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 18:43:54.680278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:54.697386 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:43:54.737772 systemd-networkd[1160]: lo: Link UP Jun 25 18:43:54.737785 systemd-networkd[1160]: lo: Gained carrier Jun 25 18:43:54.739511 systemd-networkd[1160]: Enumeration completed Jun 25 18:43:54.739950 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:54.739954 systemd-networkd[1160]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:43:54.740915 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:43:54.752721 systemd[1]: Reached target network.target - Network. 
Jun 25 18:43:54.755511 systemd-networkd[1160]: eth0: Link UP Jun 25 18:43:54.755522 systemd-networkd[1160]: eth0: Gained carrier Jun 25 18:43:54.755538 systemd-networkd[1160]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:43:54.786565 systemd-networkd[1160]: eth0: DHCPv4 address 172.31.31.33/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:43:54.922765 ignition[1096]: Ignition 2.19.0 Jun 25 18:43:54.922780 ignition[1096]: Stage: fetch-offline Jun 25 18:43:54.923089 ignition[1096]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:54.923124 ignition[1096]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:54.923465 ignition[1096]: Ignition finished successfully Jun 25 18:43:54.928733 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:54.934540 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jun 25 18:43:54.954558 ignition[1171]: Ignition 2.19.0 Jun 25 18:43:54.954570 ignition[1171]: Stage: fetch Jun 25 18:43:54.954938 ignition[1171]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:54.954948 ignition[1171]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:54.955789 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:54.963857 ignition[1171]: PUT result: OK Jun 25 18:43:54.966038 ignition[1171]: parsed url from cmdline: "" Jun 25 18:43:54.966127 ignition[1171]: no config URL provided Jun 25 18:43:54.966136 ignition[1171]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 18:43:54.966149 ignition[1171]: no config at "/usr/lib/ignition/user.ign" Jun 25 18:43:54.966167 ignition[1171]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:54.968680 ignition[1171]: PUT result: OK Jun 25 18:43:54.968816 ignition[1171]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 18:43:54.971619 ignition[1171]: GET result: OK Jun 25 18:43:54.971727 ignition[1171]: parsing config with SHA512: fdb355736039d9dc0580b5f7161dfba61ec5efa771e04e289110a7b8f505e823042c97f63ad89355eab24174fd17164315c829f66b06cd88d77ec8e967735bc3 Jun 25 18:43:54.995345 unknown[1171]: fetched base config from "system" Jun 25 18:43:54.995862 ignition[1171]: fetch: fetch complete Jun 25 18:43:54.995359 unknown[1171]: fetched base config from "system" Jun 25 18:43:54.995867 ignition[1171]: fetch: fetch passed Jun 25 18:43:54.995369 unknown[1171]: fetched user config from "aws" Jun 25 18:43:54.995918 ignition[1171]: Ignition finished successfully Jun 25 18:43:54.998511 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 18:43:55.010328 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 18:43:55.034307 ignition[1178]: Ignition 2.19.0 Jun 25 18:43:55.034321 ignition[1178]: Stage: kargs Jun 25 18:43:55.035257 ignition[1178]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:55.035273 ignition[1178]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:55.035404 ignition[1178]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:55.038760 ignition[1178]: PUT result: OK Jun 25 18:43:55.042528 ignition[1178]: kargs: kargs passed Jun 25 18:43:55.042604 ignition[1178]: Ignition finished successfully Jun 25 18:43:55.045752 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 18:43:55.050780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 25 18:43:55.076350 ignition[1185]: Ignition 2.19.0 Jun 25 18:43:55.076366 ignition[1185]: Stage: disks Jun 25 18:43:55.077309 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:55.077325 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:55.077450 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:55.079128 ignition[1185]: PUT result: OK Jun 25 18:43:55.089882 ignition[1185]: disks: disks passed Jun 25 18:43:55.089942 ignition[1185]: Ignition finished successfully Jun 25 18:43:55.093556 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 18:43:55.094013 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:55.096856 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 18:43:55.098238 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:43:55.100750 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:43:55.104229 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:43:55.114303 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 18:43:55.138234 systemd-fsck[1194]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 18:43:55.145351 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 18:43:55.152251 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 18:43:55.287126 kernel: EXT4-fs (nvme0n1p9): mounted filesystem ed685e11-963b-427a-9b96-a4691c40e909 r/w with ordered data mode. Quota mode: none. Jun 25 18:43:55.287676 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 18:43:55.289272 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 18:43:55.301242 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:43:55.304203 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 18:43:55.304605 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 18:43:55.304645 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 18:43:55.304666 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:43:55.319272 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1213) Jun 25 18:43:55.321739 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:55.321788 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:55.321801 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:43:55.324038 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 25 18:43:55.329418 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:43:55.329602 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 18:43:55.331759 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:43:55.449691 initrd-setup-root[1237]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 18:43:55.457093 initrd-setup-root[1244]: cut: /sysroot/etc/group: No such file or directory Jun 25 18:43:55.463030 initrd-setup-root[1251]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 18:43:55.468407 initrd-setup-root[1258]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 18:43:55.608824 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:55.623267 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 18:43:55.626301 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 18:43:55.642944 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 18:43:55.646218 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:55.688668 ignition[1330]: INFO : Ignition 2.19.0 Jun 25 18:43:55.688668 ignition[1330]: INFO : Stage: mount Jun 25 18:43:55.691681 ignition[1330]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:55.693980 ignition[1330]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:55.693980 ignition[1330]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:55.697677 ignition[1330]: INFO : PUT result: OK Jun 25 18:43:55.699314 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 18:43:55.704003 ignition[1330]: INFO : mount: mount passed Jun 25 18:43:55.704846 ignition[1330]: INFO : Ignition finished successfully Jun 25 18:43:55.705920 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 18:43:55.711239 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 18:43:55.739375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 18:43:55.775171 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1343) Jun 25 18:43:55.777126 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e6704e83-f8c1-4f1f-ad66-682b94c5899a Jun 25 18:43:55.777189 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jun 25 18:43:55.778130 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 18:43:55.783175 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 18:43:55.787044 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 25 18:43:55.814131 ignition[1360]: INFO : Ignition 2.19.0 Jun 25 18:43:55.814131 ignition[1360]: INFO : Stage: files Jun 25 18:43:55.818519 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:55.818519 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:55.818519 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:55.825921 ignition[1360]: INFO : PUT result: OK Jun 25 18:43:55.829412 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping Jun 25 18:43:55.831250 ignition[1360]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 18:43:55.831250 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 18:43:55.850861 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 18:43:55.852849 ignition[1360]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 18:43:55.854575 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 18:43:55.853375 unknown[1360]: wrote ssh authorized keys file for user: core Jun 25 18:43:55.859643 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:43:55.859643 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 18:43:55.859643 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:55.859643 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 25 18:43:55.962807 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 18:43:56.426125 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 18:43:56.428930 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 
18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:43:56.447410 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jun 25 18:43:56.533483 systemd-networkd[1160]: eth0: Gained IPv6LL Jun 25 18:43:56.884913 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 18:43:57.253403 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jun 25 18:43:57.253403 ignition[1360]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 18:43:57.259180 ignition[1360]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 18:43:57.263062 ignition[1360]: INFO : files: files passed Jun 25 18:43:57.263062 ignition[1360]: INFO : Ignition finished successfully Jun 25 18:43:57.286518 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 18:43:57.302490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 18:43:57.308080 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
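The files stage logged above writes artifacts (the helm tarball, install.sh, the kubernetes sysext link and image), a containerd drop-in, and a preset for prepare-helm.service, but the Ignition config that requested them is not printed in the journal. The dict below is a hypothetical fragment in the shape of the Ignition v3 spec, shown only to indicate roughly what kind of config drives such ops; every value is illustrative.

    # Hypothetical Ignition-style config fragment (as a Python dict) that would
    # produce file/unit operations like the ones logged above. The real config
    # used on this host is not in the journal; field names follow the Ignition
    # v3 spec, values are illustrative only.
    import json

    ignition_config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {
                    # Corresponds to the "writing file /sysroot/opt/helm-..." op:
                    # fetch a remote artifact and place it under /opt.
                    "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
                    "mode": 0o644,
                },
            ],
        },
        "systemd": {
            "units": [
                {
                    # Corresponds to the containerd.service drop-in op in the log.
                    "name": "containerd.service",
                    "dropins": [
                        {"name": "10-use-cgroupfs.conf",
                         "contents": "# drop-in body not shown in the journal"},
                    ],
                },
                {
                    # Corresponds to "setting preset to enabled for prepare-helm.service".
                    "name": "prepare-helm.service",
                    "enabled": True,
                },
            ],
        },
    }

    print(json.dumps(ignition_config, indent=2))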
Jun 25 18:43:57.311640 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 18:43:57.311788 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 18:43:57.335295 initrd-setup-root-after-ignition[1389]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:57.335295 initrd-setup-root-after-ignition[1389]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:57.339559 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 18:43:57.344335 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:43:57.344840 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 18:43:57.353302 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 18:43:57.393201 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 18:43:57.393336 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 18:43:57.395751 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 18:43:57.397942 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 18:43:57.401397 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 18:43:57.409463 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 18:43:57.432148 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:57.438498 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 18:43:57.460026 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:43:57.460308 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:57.464174 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 18:43:57.466339 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 18:43:57.466520 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 18:43:57.471853 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 18:43:57.474166 systemd[1]: Stopped target basic.target - Basic System. Jun 25 18:43:57.474418 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 18:43:57.474851 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 18:43:57.475188 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 18:43:57.475341 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 18:43:57.475515 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 18:43:57.475798 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 18:43:57.475955 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 18:43:57.476310 systemd[1]: Stopped target swap.target - Swaps. Jun 25 18:43:57.476852 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 18:43:57.477061 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 18:43:57.477819 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:43:57.478008 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jun 25 18:43:57.478424 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 25 18:43:57.497677 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:57.518987 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 18:43:57.519587 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 18:43:57.523211 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 18:43:57.524372 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 18:43:57.527460 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 18:43:57.527579 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 18:43:57.536370 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 18:43:57.544764 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 18:43:57.546510 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 18:43:57.548796 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:43:57.550226 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 18:43:57.550463 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 18:43:57.558500 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 18:43:57.558592 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 18:43:57.571265 ignition[1413]: INFO : Ignition 2.19.0 Jun 25 18:43:57.572686 ignition[1413]: INFO : Stage: umount Jun 25 18:43:57.572686 ignition[1413]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 18:43:57.572686 ignition[1413]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 18:43:57.572686 ignition[1413]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 18:43:57.577790 ignition[1413]: INFO : PUT result: OK Jun 25 18:43:57.582138 ignition[1413]: INFO : umount: umount passed Jun 25 18:43:57.584018 ignition[1413]: INFO : Ignition finished successfully Jun 25 18:43:57.587214 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 18:43:57.587518 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 18:43:57.591027 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 18:43:57.591124 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 18:43:57.598037 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 18:43:57.598143 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 18:43:57.600291 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 18:43:57.600363 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 18:43:57.602548 systemd[1]: Stopped target network.target - Network. Jun 25 18:43:57.604297 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 18:43:57.604380 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 18:43:57.608959 systemd[1]: Stopped target paths.target - Path Units. Jun 25 18:43:57.612186 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 25 18:43:57.616255 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:57.619853 systemd[1]: Stopped target slices.target - Slice Units. 
Jun 25 18:43:57.619959 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 18:43:57.624410 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 18:43:57.624500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 18:43:57.629656 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 18:43:57.629771 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 18:43:57.633737 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 18:43:57.633820 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 18:43:57.641410 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 25 18:43:57.643973 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 25 18:43:57.645771 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 18:43:57.648364 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 18:43:57.653401 systemd-networkd[1160]: eth0: DHCPv6 lease lost Jun 25 18:43:57.660923 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 18:43:57.672973 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 18:43:57.681402 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 18:43:57.685273 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 18:43:57.685405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 18:43:57.692344 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 18:43:57.692415 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:57.703263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 18:43:57.706399 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 18:43:57.706564 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 18:43:57.712745 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 18:43:57.712828 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:43:57.717523 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 18:43:57.717602 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:57.741863 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 18:43:57.741945 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:43:57.745753 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:43:57.766621 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 18:43:57.766848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:43:57.769845 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 18:43:57.769936 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:57.772674 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 18:43:57.772732 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:57.774757 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 18:43:57.774838 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 18:43:57.776262 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Jun 25 18:43:57.776327 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 18:43:57.778420 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 18:43:57.779420 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 18:43:57.792330 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 18:43:57.795487 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 18:43:57.797269 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:43:57.799265 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jun 25 18:43:57.799340 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:43:57.800917 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 18:43:57.800982 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:57.804258 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 18:43:57.804320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:43:57.813557 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 18:43:57.813892 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 18:43:57.831573 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 18:43:57.831770 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 18:43:57.915010 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 18:43:57.915233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 18:43:57.918032 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 18:43:57.920142 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 18:43:57.920211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 18:43:57.931301 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 18:43:57.957748 systemd[1]: Switching root. Jun 25 18:43:57.997307 systemd-journald[178]: Journal stopped Jun 25 18:43:59.910500 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jun 25 18:43:59.910592 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 18:43:59.910621 kernel: SELinux: policy capability open_perms=1 Jun 25 18:43:59.910642 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 18:43:59.910664 kernel: SELinux: policy capability always_check_network=0 Jun 25 18:43:59.910688 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 18:43:59.910709 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 18:43:59.910734 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 18:43:59.910759 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 18:43:59.910780 kernel: audit: type=1403 audit(1719341038.452:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 18:43:59.910815 systemd[1]: Successfully loaded SELinux policy in 38.053ms. Jun 25 18:43:59.910849 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.442ms. 
Jun 25 18:43:59.910873 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jun 25 18:43:59.910897 systemd[1]: Detected virtualization amazon. Jun 25 18:43:59.910920 systemd[1]: Detected architecture x86-64. Jun 25 18:43:59.910945 systemd[1]: Detected first boot. Jun 25 18:43:59.910967 systemd[1]: Initializing machine ID from VM UUID. Jun 25 18:43:59.910989 zram_generator::config[1472]: No configuration found. Jun 25 18:43:59.911012 systemd[1]: Populated /etc with preset unit settings. Jun 25 18:43:59.911033 systemd[1]: Queued start job for default target multi-user.target. Jun 25 18:43:59.911057 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 18:43:59.911080 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 18:43:59.918172 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 18:43:59.918227 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 18:43:59.918258 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 18:43:59.918281 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 18:43:59.918304 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 18:43:59.918325 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 18:43:59.918347 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 18:43:59.918370 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 25 18:43:59.918393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 18:43:59.918415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 18:43:59.918440 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 18:43:59.918463 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 18:43:59.918486 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 25 18:43:59.918508 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 25 18:43:59.918531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 25 18:43:59.918553 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 18:43:59.918576 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 18:43:59.918598 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 18:43:59.918621 systemd[1]: Reached target slices.target - Slice Units. Jun 25 18:43:59.918845 systemd[1]: Reached target swap.target - Swaps. Jun 25 18:43:59.918876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 18:43:59.918899 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 18:43:59.918922 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jun 25 18:43:59.918944 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 18:43:59.918967 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 18:43:59.918989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 18:43:59.919011 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 18:43:59.919035 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 18:43:59.919061 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 18:43:59.919083 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 18:43:59.923187 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 18:43:59.923232 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:59.923256 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 18:43:59.923279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 18:43:59.923301 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 18:43:59.923323 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 18:43:59.923353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:43:59.923375 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 18:43:59.923397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 18:43:59.923420 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:43:59.923441 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:43:59.923463 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:43:59.923484 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 18:43:59.923505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:43:59.923528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 18:43:59.923553 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 18:43:59.923576 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jun 25 18:43:59.923598 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 18:43:59.923620 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 18:43:59.923642 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 18:43:59.923664 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 18:43:59.923686 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 18:43:59.923708 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:43:59.923732 kernel: fuse: init (API version 7.39) Jun 25 18:43:59.923758 kernel: loop: module loaded Jun 25 18:43:59.923779 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jun 25 18:43:59.923802 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 18:43:59.923823 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 18:43:59.923846 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 18:43:59.923868 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 18:43:59.923889 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 18:43:59.923911 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 18:43:59.923936 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 18:43:59.923958 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 18:43:59.923980 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:43:59.924002 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:43:59.924024 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:43:59.924046 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:43:59.924071 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 18:43:59.924093 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 18:43:59.938619 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:43:59.938652 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:43:59.938674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 18:43:59.938695 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 18:43:59.938716 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 18:43:59.938826 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 18:43:59.938887 systemd-journald[1565]: Collecting audit messages is disabled. Jun 25 18:43:59.939090 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 18:43:59.939136 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 18:43:59.939168 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 18:43:59.940163 systemd-journald[1565]: Journal started Jun 25 18:43:59.940221 systemd-journald[1565]: Runtime Journal (/run/log/journal/ec2b80842dd2d4dacc61c88699365320) is 4.8M, max 38.6M, 33.8M free. Jun 25 18:43:59.948120 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 18:43:59.970172 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:44:00.014200 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 25 18:44:00.019125 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:44:00.021779 kernel: ACPI: bus type drm_connector registered Jun 25 18:44:00.040461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 18:44:00.054185 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 25 18:44:00.056984 systemd[1]: Started systemd-journald.service - Journal Service. 
Jun 25 18:44:00.062936 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 18:44:00.070487 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:44:00.070939 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:44:00.072857 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 18:44:00.074473 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 18:44:00.096171 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 25 18:44:00.098857 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 18:44:00.143002 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 18:44:00.164498 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 18:44:00.183475 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 18:44:00.189841 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 18:44:00.217220 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jun 25 18:44:00.217246 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jun 25 18:44:00.233505 systemd-journald[1565]: Time spent on flushing to /var/log/journal/ec2b80842dd2d4dacc61c88699365320 is 41.180ms for 957 entries. Jun 25 18:44:00.233505 systemd-journald[1565]: System Journal (/var/log/journal/ec2b80842dd2d4dacc61c88699365320) is 8.0M, max 195.6M, 187.6M free. Jun 25 18:44:00.292607 systemd-journald[1565]: Received client request to flush runtime journal. Jun 25 18:44:00.237525 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jun 25 18:44:00.252501 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 18:44:00.265070 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 18:44:00.295993 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 18:44:00.329163 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 18:44:00.344364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 18:44:00.369907 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Jun 25 18:44:00.369937 systemd-tmpfiles[1646]: ACLs are not supported, ignoring. Jun 25 18:44:00.377330 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 18:44:01.014268 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 18:44:01.024492 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 18:44:01.142289 systemd-udevd[1652]: Using default interface naming scheme 'v255'. Jun 25 18:44:01.174592 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 18:44:01.192576 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 18:44:01.278611 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 18:44:01.433149 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1667) Jun 25 18:44:01.437628 (udev-worker)[1653]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 18:44:01.474798 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 18:44:01.499359 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 18:44:01.584386 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 25 18:44:01.597129 kernel: ACPI: button: Power Button [PWRF] Jun 25 18:44:01.602496 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jun 25 18:44:01.612171 kernel: ACPI: button: Sleep Button [SLPF] Jun 25 18:44:01.689411 systemd-networkd[1657]: lo: Link UP Jun 25 18:44:01.689456 systemd-networkd[1657]: lo: Gained carrier Jun 25 18:44:01.699664 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1653) Jun 25 18:44:01.698006 systemd-networkd[1657]: Enumeration completed Jun 25 18:44:01.700967 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:44:01.700976 systemd-networkd[1657]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 18:44:01.703901 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 18:44:01.717875 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jun 25 18:44:01.717386 systemd-networkd[1657]: eth0: Link UP Jun 25 18:44:01.717650 systemd-networkd[1657]: eth0: Gained carrier Jun 25 18:44:01.717681 systemd-networkd[1657]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 18:44:01.722897 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 18:44:01.728622 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jun 25 18:44:01.735282 systemd-networkd[1657]: eth0: DHCPv4 address 172.31.31.33/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 18:44:01.797181 kernel: mousedev: PS/2 mouse device common for all mice Jun 25 18:44:01.844556 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 25 18:44:01.978433 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 18:44:01.993223 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 18:44:02.179067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 25 18:44:02.192745 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 18:44:02.228298 lvm[1776]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:44:02.268974 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 18:44:02.271056 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 18:44:02.282641 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 18:44:02.292789 lvm[1779]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 18:44:02.334392 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 18:44:02.338064 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
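systemd-networkd reports eth0 up with 172.31.31.33/20 acquired via DHCPv4 from 172.31.16.1. The small sketch below reads the same state back from iproute2's JSON output after boot, assuming `ip -j` is available on the image; it is a verification aid, not part of the boot flow.

    # Read back the address systemd-networkd configured on eth0 (the log shows
    # 172.31.31.33/20 acquired via DHCPv4). Assumes iproute2 with JSON support.
    import json
    import subprocess

    out = subprocess.run(
        ["ip", "-j", "addr", "show", "dev", "eth0"],
        capture_output=True, text=True, check=True,
    ).stdout

    for iface in json.loads(out):
        for addr in iface.get("addr_info", []):
            print(f'{iface["ifname"]}: {addr["family"]} {addr["local"]}/{addr["prefixlen"]}')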
Jun 25 18:44:02.343602 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 18:44:02.343701 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 18:44:02.345325 systemd[1]: Reached target machines.target - Containers. Jun 25 18:44:02.350646 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jun 25 18:44:02.358571 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 25 18:44:02.372482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 18:44:02.375192 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:44:02.380881 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 18:44:02.409724 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 18:44:02.418965 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 18:44:02.424940 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 25 18:44:02.532571 kernel: loop0: detected capacity change from 0 to 60984 Jun 25 18:44:02.535519 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 18:44:02.547686 kernel: block loop0: the capability attribute has been deprecated. Jun 25 18:44:02.628148 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 18:44:02.633362 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 18:44:02.676131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 18:44:02.697138 kernel: loop1: detected capacity change from 0 to 139760 Jun 25 18:44:02.789150 kernel: loop2: detected capacity change from 0 to 209816 Jun 25 18:44:02.866270 kernel: loop3: detected capacity change from 0 to 80568 Jun 25 18:44:02.869485 systemd-networkd[1657]: eth0: Gained IPv6LL Jun 25 18:44:02.874379 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 18:44:02.924519 kernel: loop4: detected capacity change from 0 to 60984 Jun 25 18:44:02.972146 kernel: loop5: detected capacity change from 0 to 139760 Jun 25 18:44:03.032131 kernel: loop6: detected capacity change from 0 to 209816 Jun 25 18:44:03.063324 kernel: loop7: detected capacity change from 0 to 80568 Jun 25 18:44:03.093258 (sd-merge)[1803]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 18:44:03.094331 (sd-merge)[1803]: Merged extensions into '/usr'. Jun 25 18:44:03.101717 systemd[1]: Reloading requested from client PID 1787 ('systemd-sysext') (unit systemd-sysext.service)... Jun 25 18:44:03.101924 systemd[1]: Reloading... Jun 25 18:44:03.299841 zram_generator::config[1828]: No configuration found. Jun 25 18:44:03.509970 ldconfig[1783]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 18:44:03.536014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:03.613062 systemd[1]: Reloading finished in 509 ms. 
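The (sd-merge) entries show the containerd-flatcar, docker-flatcar, kubernetes and oem-ami system extensions being merged into /usr, backed by the loop devices enumerated just before. One way to inspect that merge state after boot is via systemd-sysext; the sketch below assumes the `status` verb of systemd-sysext (present in systemd 255) and simply prints its output.

    # Print the sysext merge state corresponding to the (sd-merge) log entries.
    # Assumes systemd-sysext's `status` verb is available on the booted system.
    import subprocess

    result = subprocess.run(
        ["systemd-sysext", "status"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)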
Jun 25 18:44:03.631045 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 18:44:03.632995 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 18:44:03.642335 systemd[1]: Starting ensure-sysext.service... Jun 25 18:44:03.645336 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 18:44:03.658222 systemd[1]: Reloading requested from client PID 1884 ('systemctl') (unit ensure-sysext.service)... Jun 25 18:44:03.658246 systemd[1]: Reloading... Jun 25 18:44:03.700981 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 18:44:03.701529 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 18:44:03.715650 systemd-tmpfiles[1885]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 18:44:03.716191 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jun 25 18:44:03.716295 systemd-tmpfiles[1885]: ACLs are not supported, ignoring. Jun 25 18:44:03.721468 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:44:03.721486 systemd-tmpfiles[1885]: Skipping /boot Jun 25 18:44:03.738629 systemd-tmpfiles[1885]: Detected autofs mount point /boot during canonicalization of boot. Jun 25 18:44:03.738645 systemd-tmpfiles[1885]: Skipping /boot Jun 25 18:44:03.811156 zram_generator::config[1911]: No configuration found. Jun 25 18:44:03.972714 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:04.075079 systemd[1]: Reloading finished in 416 ms. Jun 25 18:44:04.101057 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 18:44:04.110294 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:04.124456 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 18:44:04.131346 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 18:44:04.151333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 18:44:04.168317 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 18:44:04.201854 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:04.202521 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:44:04.215951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:44:04.238661 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:44:04.256984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:44:04.259748 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:44:04.260370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:04.271574 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jun 25 18:44:04.271813 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:44:04.292087 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:44:04.297444 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:44:04.316989 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:44:04.318577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:44:04.346727 augenrules[1997]: No rules Jun 25 18:44:04.347824 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:04.377600 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:04.385738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 18:44:04.390339 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 18:44:04.406473 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 18:44:04.411228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 18:44:04.430351 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 18:44:04.432380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 18:44:04.432479 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 18:44:04.435264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 25 18:44:04.439417 systemd[1]: Finished ensure-sysext.service. Jun 25 18:44:04.456622 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 18:44:04.460618 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 18:44:04.466333 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 18:44:04.468812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 18:44:04.469048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 18:44:04.472962 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 18:44:04.473269 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 18:44:04.475547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 18:44:04.476319 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 18:44:04.478801 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 18:44:04.479340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 18:44:04.492640 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 18:44:04.493432 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 18:44:04.503376 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 18:44:04.504980 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jun 25 18:44:04.518297 systemd-resolved[1971]: Positive Trust Anchors: Jun 25 18:44:04.518314 systemd-resolved[1971]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 18:44:04.518378 systemd-resolved[1971]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jun 25 18:44:04.520584 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 25 18:44:04.528212 systemd-resolved[1971]: Defaulting to hostname 'linux'. Jun 25 18:44:04.530515 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 18:44:04.531938 systemd[1]: Reached target network.target - Network. Jun 25 18:44:04.533533 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 18:44:04.535265 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 18:44:04.536833 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 18:44:04.538218 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 18:44:04.539706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 18:44:04.541363 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 18:44:04.543048 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 18:44:04.545094 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 18:44:04.547146 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 18:44:04.547183 systemd[1]: Reached target paths.target - Path Units. Jun 25 18:44:04.548384 systemd[1]: Reached target timers.target - Timer Units. Jun 25 18:44:04.550854 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 18:44:04.554591 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 18:44:04.559277 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 18:44:04.565663 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 18:44:04.567138 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 18:44:04.568437 systemd[1]: Reached target basic.target - Basic System. Jun 25 18:44:04.569744 systemd[1]: System is tainted: cgroupsv1 Jun 25 18:44:04.569798 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:44:04.569828 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 18:44:04.575288 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 18:44:04.586561 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 18:44:04.593519 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 18:44:04.601287 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Jun 25 18:44:04.658420 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 18:44:04.660017 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 18:44:04.675410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:04.687149 jq[2039]: false Jun 25 18:44:04.692240 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 18:44:04.717846 systemd[1]: Started ntpd.service - Network Time Service. Jun 25 18:44:04.742594 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 18:44:04.752362 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 18:44:04.776476 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 18:44:04.801348 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 18:44:04.810327 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 18:44:04.836593 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 18:44:04.838548 dbus-daemon[2038]: [system] SELinux support is enabled Jun 25 18:44:04.840044 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 18:44:04.860185 extend-filesystems[2041]: Found loop4 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found loop5 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found loop6 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found loop7 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p1 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p2 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p3 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found usr Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p4 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p6 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p7 Jun 25 18:44:04.860185 extend-filesystems[2041]: Found nvme0n1p9 Jun 25 18:44:04.860185 extend-filesystems[2041]: Checking size of /dev/nvme0n1p9 Jun 25 18:44:04.874343 dbus-daemon[2038]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1657 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 18:44:04.865489 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 18:44:04.890392 extend-filesystems[2041]: Resized partition /dev/nvme0n1p9 Jun 25 18:44:04.891660 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 18:44:04.896690 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 18:44:04.898047 extend-filesystems[2075]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 18:44:04.915043 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 18:44:04.918598 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 18:44:04.919153 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 18:44:04.952724 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jun 25 18:44:04.965471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 18:44:04.982814 jq[2074]: true Jun 25 18:44:04.987667 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 18:44:04.988126 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 18:44:04.991940 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 18:44:05.013767 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting Jun 25 18:44:05.021713 jq[2090]: true Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: ntpd 4.2.8p17@1.4004-o Tue Jun 25 16:52:45 UTC 2024 (1): Starting Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: ---------------------------------------------------- Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: corporation. Support and training for ntp-4 are Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: available at https://www.nwtime.org/support Jun 25 18:44:05.021966 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: ---------------------------------------------------- Jun 25 18:44:05.014166 ntpd[2047]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: proto: precision = 0.069 usec (-24) Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: basedate set to 2024-06-13 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: gps base set to 2024-06-16 (week 2319) Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen normally on 3 eth0 172.31.31.33:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listen normally on 5 eth0 [fe80::430:59ff:feda:eceb%2]:123 Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:44:05.049872 ntpd[2047]: 25 Jun 18:44:05 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:44:05.014177 ntpd[2047]: ---------------------------------------------------- Jun 25 18:44:05.014187 ntpd[2047]: ntp-4 is maintained by Network Time Foundation, Jun 25 18:44:05.014197 ntpd[2047]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jun 25 18:44:05.014206 ntpd[2047]: corporation. 
Support and training for ntp-4 are Jun 25 18:44:05.014216 ntpd[2047]: available at https://www.nwtime.org/support Jun 25 18:44:05.014226 ntpd[2047]: ---------------------------------------------------- Jun 25 18:44:05.030774 ntpd[2047]: proto: precision = 0.069 usec (-24) Jun 25 18:44:05.031847 ntpd[2047]: basedate set to 2024-06-13 Jun 25 18:44:05.031868 ntpd[2047]: gps base set to 2024-06-16 (week 2319) Jun 25 18:44:05.036086 ntpd[2047]: Listen and drop on 0 v6wildcard [::]:123 Jun 25 18:44:05.036177 ntpd[2047]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jun 25 18:44:05.036392 ntpd[2047]: Listen normally on 2 lo 127.0.0.1:123 Jun 25 18:44:05.036429 ntpd[2047]: Listen normally on 3 eth0 172.31.31.33:123 Jun 25 18:44:05.036470 ntpd[2047]: Listen normally on 4 lo [::1]:123 Jun 25 18:44:05.036512 ntpd[2047]: Listen normally on 5 eth0 [fe80::430:59ff:feda:eceb%2]:123 Jun 25 18:44:05.036549 ntpd[2047]: Listening on routing socket on fd #22 for interface updates Jun 25 18:44:05.039143 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:44:05.039176 ntpd[2047]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jun 25 18:44:05.078233 (ntainerd)[2091]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 25 18:44:05.101831 update_engine[2063]: I0625 18:44:05.090626 2063 main.cc:92] Flatcar Update Engine starting Jun 25 18:44:05.123686 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 18:44:05.159469 update_engine[2063]: I0625 18:44:05.107789 2063 update_check_scheduler.cc:74] Next update check in 7m24s Jun 25 18:44:05.135355 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 18:44:05.159643 tar[2081]: linux-amd64/helm Jun 25 18:44:05.161194 coreos-metadata[2037]: Jun 25 18:44:05.128 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:44:05.161194 coreos-metadata[2037]: Jun 25 18:44:05.139 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 18:44:05.161194 coreos-metadata[2037]: Jun 25 18:44:05.159 INFO Fetch successful Jun 25 18:44:05.161194 coreos-metadata[2037]: Jun 25 18:44:05.159 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 18:44:05.125015 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 18:44:05.170437 extend-filesystems[2075]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 18:44:05.170437 extend-filesystems[2075]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 18:44:05.170437 extend-filesystems[2075]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 18:44:05.125097 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
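Editor's note: ntpd above binds UDP port 123 on loopback and eth0 and will discipline the clock once it reaches its servers (the kernel still reports "Clock Unsynchronized" at this point). For reference, a minimal SNTP query, a sketch of the client side of the protocol ntpd speaks; the server name is an arbitrary example, not taken from this host's configuration:

```python
import socket
import struct
import time

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between the 1900 NTP epoch and the Unix epoch

def sntp_time(server: str = "pool.ntp.org", timeout: float = 2.0) -> float:
    # 48-byte request: LI=0, VN=3, Mode=3 (client) packed into the first byte (0x1B).
    request = b"\x1b" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        response, _ = sock.recvfrom(512)
    # The Transmit Timestamp seconds field sits at bytes 40..43 of the reply.
    ntp_seconds = struct.unpack("!I", response[40:44])[0]
    return ntp_seconds - NTP_UNIX_OFFSET

if __name__ == "__main__":
    print(f"clock offset vs. NTP server: {sntp_time() - time.time():+.3f}s")
```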
Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.171 INFO Fetch successful Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.174 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.188 INFO Fetch successful Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.200 INFO Fetch successful Jun 25 18:44:05.209269 coreos-metadata[2037]: Jun 25 18:44:05.200 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 18:44:05.209508 extend-filesystems[2041]: Resized filesystem in /dev/nvme0n1p9 Jun 25 18:44:05.127228 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 18:44:05.127257 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 18:44:05.129321 systemd[1]: Started update-engine.service - Update Engine. Jun 25 18:44:05.132473 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 18:44:05.146572 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 18:44:05.221341 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 18:44:05.229787 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 18:44:05.230292 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 18:44:05.234841 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.255 INFO Fetch failed with 404: resource not found Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.255 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.257 INFO Fetch successful Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.261 INFO Fetch successful Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.261 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.263 INFO Fetch successful Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.263 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.263 INFO Fetch successful Jun 25 18:44:05.264412 coreos-metadata[2037]: Jun 25 18:44:05.263 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 18:44:05.255425 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 18:44:05.271003 coreos-metadata[2037]: Jun 25 18:44:05.269 INFO Fetch successful Jun 25 18:44:05.374462 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 18:44:05.378708 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
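Editor's note: the coreos-metadata fetches above follow the IMDSv2 flow: a PUT to /latest/api/token obtains a session token, then the versioned meta-data paths are read with that token attached (the ipv6 path legitimately returns 404 on an instance without an IPv6 address). A small stdlib-only sketch of the same flow; the endpoint and paths mirror the ones in the log:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_get(path: str, ttl_seconds: int = 21600) -> str:
    # Step 1: PUT a session token (the "Putting .../latest/api/token" line above).
    token_req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )
    token = urllib.request.urlopen(token_req, timeout=2).read().decode()
    # Step 2: present the token on every metadata GET.
    req = urllib.request.Request(
        f"{IMDS}/{path}", headers={"X-aws-ec2-metadata-token": token}
    )
    return urllib.request.urlopen(req, timeout=2).read().decode()

if __name__ == "__main__":
    for item in ("instance-id", "instance-type", "local-ipv4", "placement/availability-zone"):
        print(item, "=", imds_get(f"2021-01-03/meta-data/{item}"))
```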
Jun 25 18:44:05.440877 systemd-logind[2058]: Watching system buttons on /dev/input/event1 (Power Button) Jun 25 18:44:05.440911 systemd-logind[2058]: Watching system buttons on /dev/input/event2 (Sleep Button) Jun 25 18:44:05.440934 systemd-logind[2058]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 25 18:44:05.467168 bash[2149]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:44:05.452996 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 18:44:05.460493 systemd-logind[2058]: New seat seat0. Jun 25 18:44:05.463529 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 18:44:05.475889 systemd[1]: Starting sshkeys.service... Jun 25 18:44:05.490211 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (2154) Jun 25 18:44:05.588016 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 18:44:05.600448 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jun 25 18:44:05.655013 amazon-ssm-agent[2131]: Initializing new seelog logger Jun 25 18:44:05.655013 amazon-ssm-agent[2131]: New Seelog Logger Creation Complete Jun 25 18:44:05.655013 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.655013 amazon-ssm-agent[2131]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.655013 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 processing appconfig overrides Jun 25 18:44:05.660991 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 processing appconfig overrides Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 processing appconfig overrides Jun 25 18:44:05.662823 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO Proxy environment variables: Jun 25 18:44:05.682728 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.682728 amazon-ssm-agent[2131]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 18:44:05.682728 amazon-ssm-agent[2131]: 2024/06/25 18:44:05 processing appconfig overrides Jun 25 18:44:05.702351 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 18:44:05.702525 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 18:44:05.705357 dbus-daemon[2038]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2123 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 18:44:05.716114 systemd[1]: Starting polkit.service - Authorization Manager... 
Jun 25 18:44:05.791029 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO https_proxy: Jun 25 18:44:05.850926 polkitd[2218]: Started polkitd version 121 Jun 25 18:44:05.885965 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO http_proxy: Jun 25 18:44:05.935979 polkitd[2218]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 18:44:05.936232 polkitd[2218]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 18:44:05.941854 polkitd[2218]: Finished loading, compiling and executing 2 rules Jun 25 18:44:05.951520 dbus-daemon[2038]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 18:44:05.951702 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 18:44:05.952530 polkitd[2218]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 18:44:05.988932 coreos-metadata[2192]: Jun 25 18:44:05.986 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 18:44:06.006865 coreos-metadata[2192]: Jun 25 18:44:05.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 18:44:06.006865 coreos-metadata[2192]: Jun 25 18:44:05.998 INFO Fetch successful Jun 25 18:44:06.006865 coreos-metadata[2192]: Jun 25 18:44:05.998 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 18:44:06.007065 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO no_proxy: Jun 25 18:44:06.007712 coreos-metadata[2192]: Jun 25 18:44:06.007 INFO Fetch successful Jun 25 18:44:06.030915 unknown[2192]: wrote ssh authorized keys file for user: core Jun 25 18:44:06.100565 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO Checking if agent identity type OnPrem can be assumed Jun 25 18:44:06.139343 systemd-hostnamed[2123]: Hostname set to (transient) Jun 25 18:44:06.140429 update-ssh-keys[2280]: Updated "/home/core/.ssh/authorized_keys" Jun 25 18:44:06.145710 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 18:44:06.160179 systemd-resolved[1971]: System hostname changed to 'ip-172-31-31-33'. Jun 25 18:44:06.162657 systemd[1]: Finished sshkeys.service. Jun 25 18:44:06.196697 locksmithd[2120]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 18:44:06.204098 amazon-ssm-agent[2131]: 2024-06-25 18:44:05 INFO Checking if agent identity type EC2 can be assumed Jun 25 18:44:06.301256 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO Agent will take identity from EC2 Jun 25 18:44:06.403200 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:44:06.493360 sshd_keygen[2106]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 18:44:06.505130 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:44:06.532170 containerd[2091]: time="2024-06-25T18:44:06.531598806Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Jun 25 18:44:06.537032 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 18:44:06.547491 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 18:44:06.585191 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 18:44:06.585550 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 18:44:06.596541 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
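Editor's note: coreos-metadata-sshkeys@core above fetches 2021-01-03/meta-data/public-keys/0/openssh-key and rewrites /home/core/.ssh/authorized_keys ("wrote ssh authorized keys file for user: core"). A hedged sketch of that last step, reusing the imds_get helper from the earlier metadata example; the 0700/0600 modes are conventional OpenSSH requirements, assumed rather than stated in this log:

```python
import os
import pwd
from pathlib import Path

def install_authorized_key(user: str, public_key: str) -> None:
    # Write the fetched key into ~user/.ssh/authorized_keys with ownership and
    # modes sshd will accept (dir 0700, file 0600) -- conventional values, assumed.
    entry = pwd.getpwnam(user)
    ssh_dir = Path(entry.pw_dir) / ".ssh"
    ssh_dir.mkdir(mode=0o700, exist_ok=True)
    auth_file = ssh_dir / "authorized_keys"
    auth_file.write_text(public_key.rstrip() + "\n")
    os.chmod(auth_file, 0o600)
    os.chown(ssh_dir, entry.pw_uid, entry.pw_gid)
    os.chown(auth_file, entry.pw_uid, entry.pw_gid)

# Usage, fetching the key the same way coreos-metadata does above:
# install_authorized_key("core", imds_get("2021-01-03/meta-data/public-keys/0/openssh-key"))
```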
Jun 25 18:44:06.606401 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 18:44:06.666772 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 18:44:06.678583 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 18:44:06.692628 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 18:44:06.694221 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 18:44:06.707149 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 18:44:06.719051 containerd[2091]: time="2024-06-25T18:44:06.718037585Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 18:44:06.719051 containerd[2091]: time="2024-06-25T18:44:06.718434117Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.723561474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.35-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.723910012Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724341211Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724369603Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724490866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724554358Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724571508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727129 containerd[2091]: time="2024-06-25T18:44:06.724652670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727497 containerd[2091]: time="2024-06-25T18:44:06.727416734Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 18:44:06.727497 containerd[2091]: time="2024-06-25T18:44:06.727470196Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 18:44:06.727497 containerd[2091]: time="2024-06-25T18:44:06.727493745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 18:44:06.730850 containerd[2091]: time="2024-06-25T18:44:06.730796755Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 18:44:06.730850 containerd[2091]: time="2024-06-25T18:44:06.730846431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 18:44:06.731007 containerd[2091]: time="2024-06-25T18:44:06.730954671Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 18:44:06.731007 containerd[2091]: time="2024-06-25T18:44:06.730979715Z" level=info msg="metadata content store policy set" policy=shared Jun 25 18:44:06.747657 containerd[2091]: time="2024-06-25T18:44:06.747607015Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 18:44:06.747657 containerd[2091]: time="2024-06-25T18:44:06.747666872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 18:44:06.747822 containerd[2091]: time="2024-06-25T18:44:06.747684105Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 18:44:06.748540 containerd[2091]: time="2024-06-25T18:44:06.748500843Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 18:44:06.749042 containerd[2091]: time="2024-06-25T18:44:06.749017904Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 18:44:06.749129 containerd[2091]: time="2024-06-25T18:44:06.749047312Z" level=info msg="NRI interface is disabled by configuration." Jun 25 18:44:06.749129 containerd[2091]: time="2024-06-25T18:44:06.749066785Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 18:44:06.749318 containerd[2091]: time="2024-06-25T18:44:06.749296306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 18:44:06.749360 containerd[2091]: time="2024-06-25T18:44:06.749327481Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 18:44:06.749360 containerd[2091]: time="2024-06-25T18:44:06.749350650Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 18:44:06.749428 containerd[2091]: time="2024-06-25T18:44:06.749372445Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 18:44:06.749428 containerd[2091]: time="2024-06-25T18:44:06.749395155Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749509 containerd[2091]: time="2024-06-25T18:44:06.749424761Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749509 containerd[2091]: time="2024-06-25T18:44:06.749445427Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Jun 25 18:44:06.749509 containerd[2091]: time="2024-06-25T18:44:06.749467258Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749509 containerd[2091]: time="2024-06-25T18:44:06.749490234Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749637 containerd[2091]: time="2024-06-25T18:44:06.749512608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749637 containerd[2091]: time="2024-06-25T18:44:06.749532177Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.749637 containerd[2091]: time="2024-06-25T18:44:06.749551857Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 18:44:06.749743 containerd[2091]: time="2024-06-25T18:44:06.749699181Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.752036003Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.752087044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.752933292Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.752978215Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753048729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753071406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753092749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753218540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753241341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753260330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753278124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753295349Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753310990Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Jun 25 18:44:06.755645 containerd[2091]: time="2024-06-25T18:44:06.753447441Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753462303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753474922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753488158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753508164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753521931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753535142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756244 containerd[2091]: time="2024-06-25T18:44:06.753546259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.753792648Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false 
IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.753845165Z" level=info msg="Connect containerd service" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.753883708Z" level=info msg="using legacy CRI server" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.753893519Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754020606Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754738658Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754785980Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754811295Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754826023Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 18:44:06.756505 containerd[2091]: time="2024-06-25T18:44:06.754842474Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jun 25 18:44:06.760280 containerd[2091]: time="2024-06-25T18:44:06.760233442Z" level=info msg="Start subscribing containerd event" Jun 25 18:44:06.760380 containerd[2091]: time="2024-06-25T18:44:06.760302510Z" level=info msg="Start recovering state" Jun 25 18:44:06.760423 containerd[2091]: time="2024-06-25T18:44:06.760385631Z" level=info msg="Start event monitor" Jun 25 18:44:06.760423 containerd[2091]: time="2024-06-25T18:44:06.760407999Z" level=info msg="Start snapshots syncer" Jun 25 18:44:06.760488 containerd[2091]: time="2024-06-25T18:44:06.760422579Z" level=info msg="Start cni network conf syncer for default" Jun 25 18:44:06.760488 containerd[2091]: time="2024-06-25T18:44:06.760434979Z" level=info msg="Start streaming server" Jun 25 18:44:06.761850 containerd[2091]: time="2024-06-25T18:44:06.760710636Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 18:44:06.761850 containerd[2091]: time="2024-06-25T18:44:06.760779181Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 18:44:06.761850 containerd[2091]: time="2024-06-25T18:44:06.760844034Z" level=info msg="containerd successfully booted in 0.247770s" Jun 25 18:44:06.762353 systemd[1]: Started containerd.service - containerd container runtime. 
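Editor's note: containerd comes up with its CRI plugin set to the overlayfs snapshotter, runc (SystemdCgroup:false), and CNI under /etc/cni/net.d, and it logs "no network config found in /etc/cni/net.d" because no network add-on has installed one yet. Purely to illustrate what eventually satisfies that check (in practice a CNI add-on writes it, not a hand-rolled script), a sketch that drops a minimal bridge conflist; the name and the 10.88.0.0/16 range are arbitrary examples:

```python
import json
from pathlib import Path

# Illustrative only: a minimal CNI bridge/host-local conflist of the shape the
# CRI plugin scans /etc/cni/net.d for. Every value below is an example and is
# not taken from this host; real clusters get this file from their network add-on.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "ranges": [[{"subnet": "10.88.0.0/16"}]],
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

conf_dir = Path("/etc/cni/net.d")
conf_dir.mkdir(parents=True, exist_ok=True)
(conf_dir / "10-example.conflist").write_text(json.dumps(conflist, indent=2) + "\n")
```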
Jun 25 18:44:06.816191 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jun 25 18:44:06.920762 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 18:44:07.020384 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jun 25 18:44:07.080675 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [Registrar] Starting registrar module Jun 25 18:44:07.080873 amazon-ssm-agent[2131]: 2024-06-25 18:44:06 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 18:44:07.080967 amazon-ssm-agent[2131]: 2024-06-25 18:44:07 INFO [EC2Identity] EC2 registration was successful. Jun 25 18:44:07.081044 amazon-ssm-agent[2131]: 2024-06-25 18:44:07 INFO [CredentialRefresher] credentialRefresher has started Jun 25 18:44:07.081148 amazon-ssm-agent[2131]: 2024-06-25 18:44:07 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 18:44:07.081234 amazon-ssm-agent[2131]: 2024-06-25 18:44:07 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 18:44:07.121115 amazon-ssm-agent[2131]: 2024-06-25 18:44:07 INFO [CredentialRefresher] Next credential rotation will be in 31.2083181163 minutes Jun 25 18:44:07.143156 tar[2081]: linux-amd64/LICENSE Jun 25 18:44:07.143629 tar[2081]: linux-amd64/README.md Jun 25 18:44:07.161923 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 18:44:07.486423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:07.488782 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 18:44:07.604157 systemd[1]: Startup finished in 8.667s (kernel) + 9.188s (userspace) = 17.856s. Jun 25 18:44:07.610032 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:08.098492 amazon-ssm-agent[2131]: 2024-06-25 18:44:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 18:44:08.205747 amazon-ssm-agent[2131]: 2024-06-25 18:44:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2341) started Jun 25 18:44:08.303614 amazon-ssm-agent[2131]: 2024-06-25 18:44:08 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 18:44:08.461073 kubelet[2331]: E0625 18:44:08.460890 2331 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:08.464236 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:08.464570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 18:44:11.919421 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 18:44:11.926063 systemd[1]: Started sshd@0-172.31.31.33:22-139.178.68.195:33472.service - OpenSSH per-connection server daemon (139.178.68.195:33472). Jun 25 18:44:12.248143 systemd-resolved[1971]: Clock change detected. Flushing caches. 
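Editor's note: the kubelet exit above (open /var/lib/kubelet/config.yaml: no such file or directory) is the expected pre-bootstrap state; the unit keeps restarting until cluster bootstrap writes the kubelet configuration and credentials. A small hedged check for those artifacts; /var/lib/kubelet/config.yaml is the path named in the error, while the /etc/kubernetes entries are conventional kubeadm locations assumed here rather than shown in the log:

```python
from pathlib import Path

# Files expected to exist once bootstrap has configured the kubelet.
EXPECTED = [
    Path("/var/lib/kubelet/config.yaml"),      # path named in the error above
    Path("/etc/kubernetes/kubelet.conf"),      # conventional kubeadm location (assumption)
    Path("/etc/kubernetes/bootstrap-kubelet.conf"),  # conventional kubeadm location (assumption)
]

for path in EXPECTED:
    print(f"{path}: {'present' if path.exists() else 'missing'}")
```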
Jun 25 18:44:12.353406 sshd[2354]: Accepted publickey for core from 139.178.68.195 port 33472 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:12.356088 sshd[2354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:12.374071 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 18:44:12.379742 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 18:44:12.384756 systemd-logind[2058]: New session 1 of user core. Jun 25 18:44:12.403812 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 18:44:12.416147 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 18:44:12.421606 (systemd)[2360]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:12.642949 systemd[2360]: Queued start job for default target default.target. Jun 25 18:44:12.643562 systemd[2360]: Created slice app.slice - User Application Slice. Jun 25 18:44:12.643593 systemd[2360]: Reached target paths.target - Paths. Jun 25 18:44:12.643610 systemd[2360]: Reached target timers.target - Timers. Jun 25 18:44:12.649536 systemd[2360]: Starting dbus.socket - D-Bus User Message Bus Socket... Jun 25 18:44:12.659697 systemd[2360]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jun 25 18:44:12.659790 systemd[2360]: Reached target sockets.target - Sockets. Jun 25 18:44:12.659814 systemd[2360]: Reached target basic.target - Basic System. Jun 25 18:44:12.659868 systemd[2360]: Reached target default.target - Main User Target. Jun 25 18:44:12.659910 systemd[2360]: Startup finished in 220ms. Jun 25 18:44:12.660608 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 18:44:12.676053 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 18:44:12.844276 systemd[1]: Started sshd@1-172.31.31.33:22-139.178.68.195:33486.service - OpenSSH per-connection server daemon (139.178.68.195:33486). Jun 25 18:44:13.006655 sshd[2372]: Accepted publickey for core from 139.178.68.195 port 33486 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:13.008797 sshd[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:13.014618 systemd-logind[2058]: New session 2 of user core. Jun 25 18:44:13.024727 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 18:44:13.157305 sshd[2372]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:13.164424 systemd[1]: sshd@1-172.31.31.33:22-139.178.68.195:33486.service: Deactivated successfully. Jun 25 18:44:13.182567 systemd-logind[2058]: Session 2 logged out. Waiting for processes to exit. Jun 25 18:44:13.182844 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 18:44:13.199729 systemd[1]: Started sshd@2-172.31.31.33:22-139.178.68.195:33490.service - OpenSSH per-connection server daemon (139.178.68.195:33490). Jun 25 18:44:13.202138 systemd-logind[2058]: Removed session 2. Jun 25 18:44:13.372864 sshd[2380]: Accepted publickey for core from 139.178.68.195 port 33490 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:13.373934 sshd[2380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:13.379338 systemd-logind[2058]: New session 3 of user core. Jun 25 18:44:13.386692 systemd[1]: Started session-3.scope - Session 3 of User core. 
Jun 25 18:44:13.505232 sshd[2380]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:13.512820 systemd[1]: sshd@2-172.31.31.33:22-139.178.68.195:33490.service: Deactivated successfully. Jun 25 18:44:13.520196 systemd-logind[2058]: Session 3 logged out. Waiting for processes to exit. Jun 25 18:44:13.520229 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 18:44:13.526610 systemd-logind[2058]: Removed session 3. Jun 25 18:44:13.534327 systemd[1]: Started sshd@3-172.31.31.33:22-139.178.68.195:33502.service - OpenSSH per-connection server daemon (139.178.68.195:33502). Jun 25 18:44:13.715396 sshd[2388]: Accepted publickey for core from 139.178.68.195 port 33502 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:13.717293 sshd[2388]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:13.723526 systemd-logind[2058]: New session 4 of user core. Jun 25 18:44:13.728661 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 18:44:13.852783 sshd[2388]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:13.859009 systemd[1]: sshd@3-172.31.31.33:22-139.178.68.195:33502.service: Deactivated successfully. Jun 25 18:44:13.860602 systemd-logind[2058]: Session 4 logged out. Waiting for processes to exit. Jun 25 18:44:13.863528 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 18:44:13.865153 systemd-logind[2058]: Removed session 4. Jun 25 18:44:13.878738 systemd[1]: Started sshd@4-172.31.31.33:22-139.178.68.195:33510.service - OpenSSH per-connection server daemon (139.178.68.195:33510). Jun 25 18:44:14.048413 sshd[2396]: Accepted publickey for core from 139.178.68.195 port 33510 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:14.050456 sshd[2396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:14.064783 systemd-logind[2058]: New session 5 of user core. Jun 25 18:44:14.072728 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 18:44:14.185059 sudo[2400]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 18:44:14.185433 sudo[2400]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:14.201690 sudo[2400]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:14.225897 sshd[2396]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:14.229512 systemd[1]: sshd@4-172.31.31.33:22-139.178.68.195:33510.service: Deactivated successfully. Jun 25 18:44:14.234959 systemd-logind[2058]: Session 5 logged out. Waiting for processes to exit. Jun 25 18:44:14.235852 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 18:44:14.237624 systemd-logind[2058]: Removed session 5. Jun 25 18:44:14.256749 systemd[1]: Started sshd@5-172.31.31.33:22-139.178.68.195:33512.service - OpenSSH per-connection server daemon (139.178.68.195:33512). Jun 25 18:44:14.413603 sshd[2405]: Accepted publickey for core from 139.178.68.195 port 33512 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:14.414864 sshd[2405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:14.421073 systemd-logind[2058]: New session 6 of user core. Jun 25 18:44:14.427774 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jun 25 18:44:14.532990 sudo[2410]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 18:44:14.533483 sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:14.543524 sudo[2410]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:14.558140 sudo[2409]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 18:44:14.558679 sudo[2409]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:14.588004 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:14.590317 auditctl[2413]: No rules Jun 25 18:44:14.590951 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 18:44:14.591430 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:14.602799 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 18:44:14.636389 augenrules[2432]: No rules Jun 25 18:44:14.638286 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 18:44:14.647218 sudo[2409]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:14.671159 sshd[2405]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:14.677145 systemd[1]: sshd@5-172.31.31.33:22-139.178.68.195:33512.service: Deactivated successfully. Jun 25 18:44:14.679503 systemd-logind[2058]: Session 6 logged out. Waiting for processes to exit. Jun 25 18:44:14.682463 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 18:44:14.683892 systemd-logind[2058]: Removed session 6. Jun 25 18:44:14.698755 systemd[1]: Started sshd@6-172.31.31.33:22-139.178.68.195:33520.service - OpenSSH per-connection server daemon (139.178.68.195:33520). Jun 25 18:44:14.855230 sshd[2441]: Accepted publickey for core from 139.178.68.195 port 33520 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:44:14.856748 sshd[2441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:44:14.862101 systemd-logind[2058]: New session 7 of user core. Jun 25 18:44:14.867806 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 18:44:14.965073 sudo[2445]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 18:44:14.965467 sudo[2445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 18:44:15.146743 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 18:44:15.147766 (dockerd)[2455]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jun 25 18:44:15.547496 dockerd[2455]: time="2024-06-25T18:44:15.547436282Z" level=info msg="Starting up" Jun 25 18:44:16.081677 dockerd[2455]: time="2024-06-25T18:44:16.081127884Z" level=info msg="Loading containers: start." Jun 25 18:44:16.242390 kernel: Initializing XFRM netlink socket Jun 25 18:44:16.288301 (udev-worker)[2511]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:44:16.378462 systemd-networkd[1657]: docker0: Link UP Jun 25 18:44:16.400243 dockerd[2455]: time="2024-06-25T18:44:16.400194520Z" level=info msg="Loading containers: done." Jun 25 18:44:16.513266 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3319545367-merged.mount: Deactivated successfully. 
Jun 25 18:44:16.517598 dockerd[2455]: time="2024-06-25T18:44:16.517547770Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 18:44:16.518451 dockerd[2455]: time="2024-06-25T18:44:16.518413533Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 18:44:16.518635 dockerd[2455]: time="2024-06-25T18:44:16.518611683Z" level=info msg="Daemon has completed initialization" Jun 25 18:44:16.566998 dockerd[2455]: time="2024-06-25T18:44:16.566728777Z" level=info msg="API listen on /run/docker.sock" Jun 25 18:44:16.567530 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 18:44:17.817615 containerd[2091]: time="2024-06-25T18:44:17.817340208Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 18:44:18.501978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196502965.mount: Deactivated successfully. Jun 25 18:44:18.948387 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 18:44:18.960680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:19.457803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:19.469974 (kubelet)[2632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:19.550394 kubelet[2632]: E0625 18:44:19.550322 2632 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:19.556640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:19.556900 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
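Editor's note: dockerd finishes initialization with the overlay2 graph driver and announces "API listen on /run/docker.sock". A stdlib-only sketch of checking that endpoint over the Unix socket via the Engine API's /_ping route; the route name and HTTP framing are standard Docker Engine API usage rather than something this host logs:

```python
import socket

def docker_ping(sock_path: str = "/run/docker.sock") -> str:
    # Speak plain HTTP/1.0 over the Unix socket dockerd reported above;
    # GET /_ping answers "OK" when the daemon is healthy.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            reply += chunk
    return reply.decode(errors="replace")

if __name__ == "__main__":
    print(docker_ping().splitlines()[0])  # status line, e.g. a 200 response
```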
Jun 25 18:44:20.843873 containerd[2091]: time="2024-06-25T18:44:20.843817332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.845584 containerd[2091]: time="2024-06-25T18:44:20.845386134Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=34605178" Jun 25 18:44:20.848833 containerd[2091]: time="2024-06-25T18:44:20.848595485Z" level=info msg="ImageCreate event name:\"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.852960 containerd[2091]: time="2024-06-25T18:44:20.852892738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:20.854405 containerd[2091]: time="2024-06-25T18:44:20.854153602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"34601978\" in 3.036743535s" Jun 25 18:44:20.854405 containerd[2091]: time="2024-06-25T18:44:20.854203523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:b2de212bf8c1b7b0d1b2703356ac7ddcfccaadfcdcd32c1ae914b6078d11e524\"" Jun 25 18:44:20.885825 containerd[2091]: time="2024-06-25T18:44:20.885787469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 18:44:23.276883 containerd[2091]: time="2024-06-25T18:44:23.276825342Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.279854 containerd[2091]: time="2024-06-25T18:44:23.279672572Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=31719491" Jun 25 18:44:23.283395 containerd[2091]: time="2024-06-25T18:44:23.281565318Z" level=info msg="ImageCreate event name:\"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.287209 containerd[2091]: time="2024-06-25T18:44:23.287160074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:23.288362 containerd[2091]: time="2024-06-25T18:44:23.288319559Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"33315989\" in 2.402490116s" Jun 25 18:44:23.288553 containerd[2091]: time="2024-06-25T18:44:23.288529833Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:20145ae80ad309fd0c963e2539f6ef0be795ace696539514894b290892c1884b\"" Jun 25 
18:44:23.325748 containerd[2091]: time="2024-06-25T18:44:23.325700651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 18:44:24.838065 containerd[2091]: time="2024-06-25T18:44:24.838013567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.843137 containerd[2091]: time="2024-06-25T18:44:24.842947372Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=16925505" Jun 25 18:44:24.850440 containerd[2091]: time="2024-06-25T18:44:24.849208272Z" level=info msg="ImageCreate event name:\"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.859405 containerd[2091]: time="2024-06-25T18:44:24.859231957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:24.861312 containerd[2091]: time="2024-06-25T18:44:24.861168022Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"18522021\" in 1.535422091s" Jun 25 18:44:24.861312 containerd[2091]: time="2024-06-25T18:44:24.861213913Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:12c62a5a0745d200eb8333ea6244f6d6328e64c5c3b645a4ade456cc645399b9\"" Jun 25 18:44:24.895196 containerd[2091]: time="2024-06-25T18:44:24.895160077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 18:44:26.233602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4116525001.mount: Deactivated successfully. 
Jun 25 18:44:26.911761 containerd[2091]: time="2024-06-25T18:44:26.911703584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.913706 containerd[2091]: time="2024-06-25T18:44:26.913198499Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jun 25 18:44:26.917942 containerd[2091]: time="2024-06-25T18:44:26.915326400Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.921281 containerd[2091]: time="2024-06-25T18:44:26.921230841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:26.922478 containerd[2091]: time="2024-06-25T18:44:26.922436826Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.027235014s" Jun 25 18:44:26.922646 containerd[2091]: time="2024-06-25T18:44:26.922622253Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jun 25 18:44:26.955082 containerd[2091]: time="2024-06-25T18:44:26.955046618Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 18:44:27.496723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188081924.mount: Deactivated successfully. 
Jun 25 18:44:27.507770 containerd[2091]: time="2024-06-25T18:44:27.507702591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.509583 containerd[2091]: time="2024-06-25T18:44:27.509395845Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jun 25 18:44:27.511539 containerd[2091]: time="2024-06-25T18:44:27.511468441Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.514743 containerd[2091]: time="2024-06-25T18:44:27.514701337Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:27.516505 containerd[2091]: time="2024-06-25T18:44:27.515691761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 560.602626ms" Jun 25 18:44:27.516505 containerd[2091]: time="2024-06-25T18:44:27.515736141Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jun 25 18:44:27.558049 containerd[2091]: time="2024-06-25T18:44:27.558012670Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 18:44:28.137709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount318401401.mount: Deactivated successfully. Jun 25 18:44:29.806575 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 18:44:29.817502 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:30.379660 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:30.383664 (kubelet)[2762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 25 18:44:30.513973 kubelet[2762]: E0625 18:44:30.513703 2762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 18:44:30.517564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 18:44:30.517717 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jun 25 18:44:31.227629 containerd[2091]: time="2024-06-25T18:44:31.227563438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.229204 containerd[2091]: time="2024-06-25T18:44:31.229024947Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jun 25 18:44:31.231275 containerd[2091]: time="2024-06-25T18:44:31.230799969Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.234940 containerd[2091]: time="2024-06-25T18:44:31.234901115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:31.236208 containerd[2091]: time="2024-06-25T18:44:31.236171045Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.678112602s" Jun 25 18:44:31.236351 containerd[2091]: time="2024-06-25T18:44:31.236329692Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jun 25 18:44:31.261983 containerd[2091]: time="2024-06-25T18:44:31.261946158Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 18:44:31.863326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236822362.mount: Deactivated successfully. 
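Editor's note: the PullImage/Pulled pairs above (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, plus the coredns pull that completes just below) are the control-plane image pre-pull that precedes cluster bootstrap. A hedged sketch of performing the same pulls by hand against containerd's k8s.io namespace with the ctr client; the image list is copied from the log, while the choice of ctr as the tool is an assumption, not something the log shows:

```python
import subprocess

# Image references exactly as logged above.
IMAGES = [
    "registry.k8s.io/kube-apiserver:v1.28.11",
    "registry.k8s.io/kube-controller-manager:v1.28.11",
    "registry.k8s.io/kube-scheduler:v1.28.11",
    "registry.k8s.io/kube-proxy:v1.28.11",
    "registry.k8s.io/pause:3.9",
    "registry.k8s.io/etcd:3.5.10-0",
    "registry.k8s.io/coredns/coredns:v1.10.1",
]

for image in IMAGES:
    # Pull into the same containerd namespace the CRI plugin uses for Kubernetes images.
    subprocess.run(["ctr", "--namespace", "k8s.io", "images", "pull", image], check=True)
```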
Jun 25 18:44:32.543285 containerd[2091]: time="2024-06-25T18:44:32.543228914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:32.545141 containerd[2091]: time="2024-06-25T18:44:32.544903118Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191749" Jun 25 18:44:32.548394 containerd[2091]: time="2024-06-25T18:44:32.546887360Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:32.555877 containerd[2091]: time="2024-06-25T18:44:32.555809172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:44:32.557990 containerd[2091]: time="2024-06-25T18:44:32.557947544Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.295959639s" Jun 25 18:44:32.557990 containerd[2091]: time="2024-06-25T18:44:32.557991646Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Jun 25 18:44:35.694395 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:35.706794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:35.746223 systemd[1]: Reloading requested from client PID 2852 ('systemctl') (unit session-7.scope)... Jun 25 18:44:35.746242 systemd[1]: Reloading... Jun 25 18:44:35.914430 zram_generator::config[2889]: No configuration found. Jun 25 18:44:36.097243 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:36.197316 systemd[1]: Reloading finished in 450 ms. Jun 25 18:44:36.264059 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 25 18:44:36.264404 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 25 18:44:36.265056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:36.283794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:36.406287 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 18:44:36.724894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:36.757264 (kubelet)[2967]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:36.827740 kubelet[2967]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:36.827740 kubelet[2967]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:36.827740 kubelet[2967]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:36.828271 kubelet[2967]: I0625 18:44:36.827818 2967 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:37.444084 kubelet[2967]: I0625 18:44:37.444043 2967 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:44:37.444084 kubelet[2967]: I0625 18:44:37.444081 2967 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:37.444475 kubelet[2967]: I0625 18:44:37.444452 2967 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:44:37.489401 kubelet[2967]: I0625 18:44:37.488589 2967 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:37.489401 kubelet[2967]: E0625 18:44:37.488867 2967 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.520166 kubelet[2967]: I0625 18:44:37.520131 2967 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:37.522677 kubelet[2967]: I0625 18:44:37.522586 2967 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:37.523013 kubelet[2967]: I0625 18:44:37.522884 2967 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:37.523246 kubelet[2967]: I0625 18:44:37.523018 2967 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:37.523246 kubelet[2967]: I0625 18:44:37.523038 2967 
container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:37.524247 kubelet[2967]: I0625 18:44:37.524217 2967 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:37.529818 kubelet[2967]: I0625 18:44:37.528896 2967 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:44:37.529818 kubelet[2967]: I0625 18:44:37.528979 2967 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:37.529818 kubelet[2967]: I0625 18:44:37.529022 2967 kubelet.go:309] "Adding apiserver pod source" Jun 25 18:44:37.529818 kubelet[2967]: I0625 18:44:37.529043 2967 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:37.532428 kubelet[2967]: W0625 18:44:37.531577 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.31.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-33&limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.532428 kubelet[2967]: E0625 18:44:37.531650 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-33&limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.532428 kubelet[2967]: W0625 18:44:37.531726 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.31.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.532428 kubelet[2967]: E0625 18:44:37.531762 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.533321 kubelet[2967]: I0625 18:44:37.532902 2967 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:37.540534 kubelet[2967]: W0625 18:44:37.539979 2967 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
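Every reflector, certificate request, and event write in this section fails with `dial tcp 172.31.31.33:6443: connect: connection refused`, which only means the kube-apiserver static pod has not been started yet; its sandbox is created at 18:44:38 further down. A minimal probe for that endpoint is sketched below, using the address from these messages; the per-attempt timeout and retry cadence are assumptions.

```python
# Illustrative probe for the endpoint the kubelet keeps failing to reach above.
# Address is taken from the "connection refused" entries; timing values are assumed.
import socket
import time

HOST, PORT = "172.31.31.33", 6443

def wait_for_apiserver(attempts: int = 30, delay: float = 1.0) -> bool:
    for i in range(attempts):
        try:
            with socket.create_connection((HOST, PORT), timeout=5):
                print(f"attempt {i + 1}: {HOST}:{PORT} is accepting connections")
                return True
        except OSError as exc:   # e.g. ConnectionRefusedError while the apiserver is down
            print(f"attempt {i + 1}: {exc}")
            time.sleep(delay)
    return False

if __name__ == "__main__":
    wait_for_apiserver()
```

A successful TCP connect only shows the port is open; a real readiness check would query the API server's health endpoint over TLS.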
Jun 25 18:44:37.547482 kubelet[2967]: I0625 18:44:37.540712 2967 server.go:1232] "Started kubelet" Jun 25 18:44:37.547482 kubelet[2967]: I0625 18:44:37.542244 2967 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:37.547482 kubelet[2967]: I0625 18:44:37.545319 2967 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:44:37.547482 kubelet[2967]: I0625 18:44:37.546960 2967 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:44:37.547687 kubelet[2967]: I0625 18:44:37.547659 2967 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:37.550512 kubelet[2967]: E0625 18:44:37.548768 2967 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-31-33.17dc5399be8cd2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-31-33", UID:"ip-172-31-31-33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-31-33"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 44, 37, 540688607, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 44, 37, 540688607, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-31-33"}': 'Post "https://172.31.31.33:6443/api/v1/namespaces/default/events": dial tcp 172.31.31.33:6443: connect: connection refused'(may retry after sleeping) Jun 25 18:44:37.551288 kubelet[2967]: E0625 18:44:37.551268 2967 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:44:37.551426 kubelet[2967]: E0625 18:44:37.551417 2967 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:37.557622 kubelet[2967]: I0625 18:44:37.557581 2967 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:37.560303 kubelet[2967]: I0625 18:44:37.559908 2967 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:37.566517 kubelet[2967]: I0625 18:44:37.566486 2967 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:37.567067 kubelet[2967]: I0625 18:44:37.567048 2967 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:37.568613 kubelet[2967]: W0625 18:44:37.568562 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.31.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.568762 kubelet[2967]: E0625 18:44:37.568748 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.570383 kubelet[2967]: E0625 18:44:37.569048 2967 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-33?timeout=10s\": dial tcp 172.31.31.33:6443: connect: connection refused" interval="200ms" Jun 25 18:44:37.628727 kubelet[2967]: I0625 18:44:37.628694 2967 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:37.633178 kubelet[2967]: I0625 18:44:37.633144 2967 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:37.633178 kubelet[2967]: I0625 18:44:37.633181 2967 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:37.633652 kubelet[2967]: I0625 18:44:37.633237 2967 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:44:37.633652 kubelet[2967]: E0625 18:44:37.633328 2967 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:37.644401 kubelet[2967]: W0625 18:44:37.644222 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.31.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.644986 kubelet[2967]: E0625 18:44:37.644922 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:37.646236 kubelet[2967]: I0625 18:44:37.646212 2967 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:37.646236 kubelet[2967]: I0625 18:44:37.646233 2967 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:37.646358 kubelet[2967]: I0625 18:44:37.646251 2967 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:37.648897 kubelet[2967]: I0625 18:44:37.648873 2967 policy_none.go:49] "None policy: Start" Jun 25 18:44:37.649968 kubelet[2967]: I0625 18:44:37.649706 2967 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:44:37.649968 kubelet[2967]: I0625 18:44:37.649730 2967 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:37.656247 kubelet[2967]: I0625 18:44:37.656221 2967 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:37.658631 kubelet[2967]: I0625 18:44:37.657009 2967 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:37.659280 kubelet[2967]: E0625 18:44:37.659265 2967 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-31-33\" not found" Jun 25 18:44:37.663728 kubelet[2967]: I0625 18:44:37.663713 2967 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:37.664344 kubelet[2967]: E0625 18:44:37.664333 2967 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.33:6443/api/v1/nodes\": dial tcp 172.31.31.33:6443: connect: connection refused" node="ip-172-31-31-33" Jun 25 18:44:37.734794 kubelet[2967]: I0625 18:44:37.734661 2967 topology_manager.go:215] "Topology Admit Handler" podUID="9c8b9507ee89d2c821d1df136b428547" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-33" Jun 25 18:44:37.741338 kubelet[2967]: I0625 18:44:37.741305 2967 topology_manager.go:215] "Topology Admit Handler" podUID="523c6b0acecc52d13b2076802df2fdf4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.745410 kubelet[2967]: I0625 18:44:37.744239 2967 topology_manager.go:215] "Topology Admit Handler" podUID="b5f52023375fb391d2c3281dd8ed7969" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-33" Jun 25 18:44:37.769309 kubelet[2967]: I0625 
18:44:37.769279 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.769621 kubelet[2967]: I0625 18:44:37.769605 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.769721 kubelet[2967]: I0625 18:44:37.769712 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.769807 kubelet[2967]: I0625 18:44:37.769800 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-ca-certs\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:37.769896 kubelet[2967]: I0625 18:44:37.769888 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:37.770144 kubelet[2967]: I0625 18:44:37.769970 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:37.770144 kubelet[2967]: I0625 18:44:37.769999 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.770144 kubelet[2967]: I0625 18:44:37.770027 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:37.770144 kubelet[2967]: I0625 18:44:37.770056 2967 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5f52023375fb391d2c3281dd8ed7969-kubeconfig\") pod 
\"kube-scheduler-ip-172-31-31-33\" (UID: \"b5f52023375fb391d2c3281dd8ed7969\") " pod="kube-system/kube-scheduler-ip-172-31-31-33" Jun 25 18:44:37.770315 kubelet[2967]: E0625 18:44:37.770166 2967 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-33?timeout=10s\": dial tcp 172.31.31.33:6443: connect: connection refused" interval="400ms" Jun 25 18:44:37.866619 kubelet[2967]: I0625 18:44:37.866588 2967 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:37.867422 kubelet[2967]: E0625 18:44:37.867396 2967 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.33:6443/api/v1/nodes\": dial tcp 172.31.31.33:6443: connect: connection refused" node="ip-172-31-31-33" Jun 25 18:44:38.051324 containerd[2091]: time="2024-06-25T18:44:38.051199548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-33,Uid:9c8b9507ee89d2c821d1df136b428547,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:38.066397 containerd[2091]: time="2024-06-25T18:44:38.065784156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-33,Uid:523c6b0acecc52d13b2076802df2fdf4,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:38.067634 containerd[2091]: time="2024-06-25T18:44:38.067596236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-33,Uid:b5f52023375fb391d2c3281dd8ed7969,Namespace:kube-system,Attempt:0,}" Jun 25 18:44:38.174201 kubelet[2967]: E0625 18:44:38.174157 2967 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-33?timeout=10s\": dial tcp 172.31.31.33:6443: connect: connection refused" interval="800ms" Jun 25 18:44:38.270078 kubelet[2967]: I0625 18:44:38.270046 2967 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:38.270602 kubelet[2967]: E0625 18:44:38.270576 2967 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.33:6443/api/v1/nodes\": dial tcp 172.31.31.33:6443: connect: connection refused" node="ip-172-31-31-33" Jun 25 18:44:38.478448 kubelet[2967]: W0625 18:44:38.478275 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.31.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.478448 kubelet[2967]: E0625 18:44:38.478346 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.31.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.586669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2276599194.mount: Deactivated successfully. 
Jun 25 18:44:38.599258 containerd[2091]: time="2024-06-25T18:44:38.599205729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:38.600634 containerd[2091]: time="2024-06-25T18:44:38.600582121Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jun 25 18:44:38.603604 containerd[2091]: time="2024-06-25T18:44:38.603563827Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:38.605012 containerd[2091]: time="2024-06-25T18:44:38.604975651Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:38.606325 containerd[2091]: time="2024-06-25T18:44:38.606276043Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:38.608297 containerd[2091]: time="2024-06-25T18:44:38.608257855Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:38.609598 containerd[2091]: time="2024-06-25T18:44:38.609539153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 18:44:38.612864 containerd[2091]: time="2024-06-25T18:44:38.612807842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 18:44:38.615411 containerd[2091]: time="2024-06-25T18:44:38.613757516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.057642ms" Jun 25 18:44:38.616032 containerd[2091]: time="2024-06-25T18:44:38.615862258Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.957745ms" Jun 25 18:44:38.618546 containerd[2091]: time="2024-06-25T18:44:38.618504865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 564.756625ms" Jun 25 18:44:38.732118 kubelet[2967]: W0625 18:44:38.731924 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.31.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.732118 kubelet[2967]: E0625 
18:44:38.731998 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.31.33:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.849393 containerd[2091]: time="2024-06-25T18:44:38.849140636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:38.849393 containerd[2091]: time="2024-06-25T18:44:38.849250375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.849393 containerd[2091]: time="2024-06-25T18:44:38.849285907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:38.849393 containerd[2091]: time="2024-06-25T18:44:38.849322359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.859391 containerd[2091]: time="2024-06-25T18:44:38.858604980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:38.859391 containerd[2091]: time="2024-06-25T18:44:38.858743851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.859391 containerd[2091]: time="2024-06-25T18:44:38.858897721Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:38.859391 containerd[2091]: time="2024-06-25T18:44:38.858980507Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.860342 containerd[2091]: time="2024-06-25T18:44:38.860136790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:44:38.860342 containerd[2091]: time="2024-06-25T18:44:38.860210638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.860342 containerd[2091]: time="2024-06-25T18:44:38.860242294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:44:38.860342 containerd[2091]: time="2024-06-25T18:44:38.860264073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:44:38.909301 kubelet[2967]: W0625 18:44:38.909242 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.31.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-33&limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.909301 kubelet[2967]: E0625 18:44:38.909306 2967 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.31.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-31-33&limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:38.976556 kubelet[2967]: E0625 18:44:38.975429 2967 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.31.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-31-33?timeout=10s\": dial tcp 172.31.31.33:6443: connect: connection refused" interval="1.6s" Jun 25 18:44:38.986934 containerd[2091]: time="2024-06-25T18:44:38.986787389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-31-33,Uid:9c8b9507ee89d2c821d1df136b428547,Namespace:kube-system,Attempt:0,} returns sandbox id \"60907337818e9fe126456e3fe4ab5318395968607bb188fc3b1974401e3c3611\"" Jun 25 18:44:38.994797 containerd[2091]: time="2024-06-25T18:44:38.994524027Z" level=info msg="CreateContainer within sandbox \"60907337818e9fe126456e3fe4ab5318395968607bb188fc3b1974401e3c3611\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 18:44:39.033531 containerd[2091]: time="2024-06-25T18:44:39.033479829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-31-33,Uid:b5f52023375fb391d2c3281dd8ed7969,Namespace:kube-system,Attempt:0,} returns sandbox id \"11d6932ea5bc9b1b19eebcc6f4d35b5d2d0b32eda4bbc3cf7604cb0e6a0f64e0\"" Jun 25 18:44:39.036679 containerd[2091]: time="2024-06-25T18:44:39.036635300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-31-33,Uid:523c6b0acecc52d13b2076802df2fdf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fbcffd607ef155d37df9a083df51d2ee5c073f420c6ea37ecaedef3d5c4a8fc\"" Jun 25 18:44:39.038122 containerd[2091]: time="2024-06-25T18:44:39.037823354Z" level=info msg="CreateContainer within sandbox \"11d6932ea5bc9b1b19eebcc6f4d35b5d2d0b32eda4bbc3cf7604cb0e6a0f64e0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 18:44:39.038336 containerd[2091]: time="2024-06-25T18:44:39.037958474Z" level=info msg="CreateContainer within sandbox \"60907337818e9fe126456e3fe4ab5318395968607bb188fc3b1974401e3c3611\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4d5eb2b42cfbe487d707f0a440827d9cd7e4d3447e25390040c6fe82d9364b99\"" Jun 25 18:44:39.038909 containerd[2091]: time="2024-06-25T18:44:39.038885638Z" level=info msg="StartContainer for \"4d5eb2b42cfbe487d707f0a440827d9cd7e4d3447e25390040c6fe82d9364b99\"" Jun 25 18:44:39.045804 kubelet[2967]: W0625 18:44:39.045724 2967 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.31.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:39.046035 kubelet[2967]: E0625 18:44:39.046012 2967 reflector.go:147] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.31.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:39.051388 containerd[2091]: time="2024-06-25T18:44:39.051189083Z" level=info msg="CreateContainer within sandbox \"2fbcffd607ef155d37df9a083df51d2ee5c073f420c6ea37ecaedef3d5c4a8fc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 18:44:39.063529 containerd[2091]: time="2024-06-25T18:44:39.063052362Z" level=info msg="CreateContainer within sandbox \"11d6932ea5bc9b1b19eebcc6f4d35b5d2d0b32eda4bbc3cf7604cb0e6a0f64e0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7ef608aa1a91b0aecb1d304aff692e83fb337c1f7d9bfa05b8e3049579e29ab\"" Jun 25 18:44:39.064558 containerd[2091]: time="2024-06-25T18:44:39.064408582Z" level=info msg="StartContainer for \"a7ef608aa1a91b0aecb1d304aff692e83fb337c1f7d9bfa05b8e3049579e29ab\"" Jun 25 18:44:39.073210 kubelet[2967]: I0625 18:44:39.072815 2967 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:39.073210 kubelet[2967]: E0625 18:44:39.073185 2967 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.31.33:6443/api/v1/nodes\": dial tcp 172.31.31.33:6443: connect: connection refused" node="ip-172-31-31-33" Jun 25 18:44:39.085028 containerd[2091]: time="2024-06-25T18:44:39.084974574Z" level=info msg="CreateContainer within sandbox \"2fbcffd607ef155d37df9a083df51d2ee5c073f420c6ea37ecaedef3d5c4a8fc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"719fa788181770be2b5a4b5aedfd90467a7793d8813df9261dd74883a2827194\"" Jun 25 18:44:39.086081 containerd[2091]: time="2024-06-25T18:44:39.086012117Z" level=info msg="StartContainer for \"719fa788181770be2b5a4b5aedfd90467a7793d8813df9261dd74883a2827194\"" Jun 25 18:44:39.174264 containerd[2091]: time="2024-06-25T18:44:39.174026261Z" level=info msg="StartContainer for \"4d5eb2b42cfbe487d707f0a440827d9cd7e4d3447e25390040c6fe82d9364b99\" returns successfully" Jun 25 18:44:39.237995 containerd[2091]: time="2024-06-25T18:44:39.237883558Z" level=info msg="StartContainer for \"719fa788181770be2b5a4b5aedfd90467a7793d8813df9261dd74883a2827194\" returns successfully" Jun 25 18:44:39.244468 containerd[2091]: time="2024-06-25T18:44:39.244420752Z" level=info msg="StartContainer for \"a7ef608aa1a91b0aecb1d304aff692e83fb337c1f7d9bfa05b8e3049579e29ab\" returns successfully" Jun 25 18:44:39.512949 kubelet[2967]: E0625 18:44:39.512841 2967 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.31.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.31.33:6443: connect: connection refused Jun 25 18:44:40.676726 kubelet[2967]: I0625 18:44:40.676694 2967 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:42.138103 kubelet[2967]: E0625 18:44:42.138053 2967 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-31-33\" not found" node="ip-172-31-31-33" Jun 25 18:44:42.170472 kubelet[2967]: I0625 18:44:42.170419 2967 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-31-33" Jun 25 18:44:42.241674 
kubelet[2967]: E0625 18:44:42.241565 2967 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-31-33.17dc5399be8cd2df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-31-33", UID:"ip-172-31-31-33", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-31-33"}, FirstTimestamp:time.Date(2024, time.June, 25, 18, 44, 37, 540688607, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 18, 44, 37, 540688607, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-31-33"}': 'namespaces "default" not found' (will not retry!) Jun 25 18:44:42.534233 kubelet[2967]: I0625 18:44:42.534006 2967 apiserver.go:52] "Watching apiserver" Jun 25 18:44:42.567927 kubelet[2967]: I0625 18:44:42.567881 2967 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:44.904822 systemd[1]: Reloading requested from client PID 3238 ('systemctl') (unit session-7.scope)... Jun 25 18:44:44.904840 systemd[1]: Reloading... Jun 25 18:44:45.115397 zram_generator::config[3276]: No configuration found. Jun 25 18:44:45.308132 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 18:44:45.482644 systemd[1]: Reloading finished in 577 ms. Jun 25 18:44:45.536687 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:45.549925 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 18:44:45.550383 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:45.557340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 18:44:45.963099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 18:44:45.981949 (kubelet)[3343]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 25 18:44:46.113811 kubelet[3343]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 18:44:46.113811 kubelet[3343]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 18:44:46.113811 kubelet[3343]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 18:44:46.113811 kubelet[3343]: I0625 18:44:46.113323 3343 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 18:44:46.119905 kubelet[3343]: I0625 18:44:46.119201 3343 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 18:44:46.119905 kubelet[3343]: I0625 18:44:46.119230 3343 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 18:44:46.119905 kubelet[3343]: I0625 18:44:46.119525 3343 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 18:44:46.123054 kubelet[3343]: I0625 18:44:46.122982 3343 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 18:44:46.130551 kubelet[3343]: I0625 18:44:46.130304 3343 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 18:44:46.147322 kubelet[3343]: I0625 18:44:46.147284 3343 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 18:44:46.148335 kubelet[3343]: I0625 18:44:46.148011 3343 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 18:44:46.149289 kubelet[3343]: I0625 18:44:46.148886 3343 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150116 3343 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150146 3343 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150190 3343 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150291 3343 kubelet.go:393] "Attempting to sync node with API server" Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150307 3343 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 18:44:46.150884 kubelet[3343]: I0625 18:44:46.150339 3343 kubelet.go:309] "Adding apiserver pod source" Jun 25 
18:44:46.150884 kubelet[3343]: I0625 18:44:46.150359 3343 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 18:44:46.153576 kubelet[3343]: I0625 18:44:46.151402 3343 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Jun 25 18:44:46.153576 kubelet[3343]: I0625 18:44:46.152008 3343 server.go:1232] "Started kubelet" Jun 25 18:44:46.159118 kubelet[3343]: I0625 18:44:46.158773 3343 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 18:44:46.172810 kubelet[3343]: I0625 18:44:46.172740 3343 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 18:44:46.180945 kubelet[3343]: I0625 18:44:46.179224 3343 server.go:462] "Adding debug handlers to kubelet server" Jun 25 18:44:46.181356 kubelet[3343]: I0625 18:44:46.181265 3343 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 18:44:46.181549 kubelet[3343]: I0625 18:44:46.181492 3343 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 18:44:46.182542 kubelet[3343]: E0625 18:44:46.182439 3343 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 18:44:46.182542 kubelet[3343]: E0625 18:44:46.182489 3343 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 18:44:46.184162 kubelet[3343]: I0625 18:44:46.184040 3343 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 18:44:46.193940 kubelet[3343]: I0625 18:44:46.193443 3343 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 18:44:46.193940 kubelet[3343]: I0625 18:44:46.193625 3343 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 18:44:46.220244 kubelet[3343]: I0625 18:44:46.219810 3343 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 18:44:46.228846 kubelet[3343]: I0625 18:44:46.228810 3343 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 18:44:46.228846 kubelet[3343]: I0625 18:44:46.228841 3343 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 18:44:46.228846 kubelet[3343]: I0625 18:44:46.228912 3343 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 18:44:46.229168 kubelet[3343]: E0625 18:44:46.229040 3343 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 18:44:46.288852 kubelet[3343]: I0625 18:44:46.288823 3343 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-31-33" Jun 25 18:44:46.305882 kubelet[3343]: I0625 18:44:46.305432 3343 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-31-33" Jun 25 18:44:46.305882 kubelet[3343]: I0625 18:44:46.305509 3343 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-31-33" Jun 25 18:44:46.331587 kubelet[3343]: E0625 18:44:46.331270 3343 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 18:44:46.404626 kubelet[3343]: I0625 18:44:46.404593 3343 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 18:44:46.404774 kubelet[3343]: I0625 18:44:46.404687 3343 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 18:44:46.404774 kubelet[3343]: I0625 18:44:46.404709 3343 state_mem.go:36] "Initialized new in-memory state store" Jun 25 18:44:46.406560 kubelet[3343]: I0625 18:44:46.405026 3343 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 18:44:46.406560 kubelet[3343]: I0625 18:44:46.405059 3343 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 18:44:46.406560 kubelet[3343]: I0625 18:44:46.405069 3343 policy_none.go:49] "None policy: Start" Jun 25 18:44:46.407030 kubelet[3343]: I0625 18:44:46.407005 3343 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 18:44:46.407119 kubelet[3343]: I0625 18:44:46.407038 3343 state_mem.go:35] "Initializing new in-memory state store" Jun 25 18:44:46.407251 kubelet[3343]: I0625 18:44:46.407232 3343 state_mem.go:75] "Updated machine memory state" Jun 25 18:44:46.408598 kubelet[3343]: I0625 18:44:46.408566 3343 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 18:44:46.410727 kubelet[3343]: I0625 18:44:46.410055 3343 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 18:44:46.532629 kubelet[3343]: I0625 18:44:46.532315 3343 topology_manager.go:215] "Topology Admit Handler" podUID="9c8b9507ee89d2c821d1df136b428547" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-31-33" Jun 25 18:44:46.533324 kubelet[3343]: I0625 18:44:46.533286 3343 topology_manager.go:215] "Topology Admit Handler" podUID="523c6b0acecc52d13b2076802df2fdf4" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.535157 kubelet[3343]: I0625 18:44:46.535090 3343 topology_manager.go:215] "Topology Admit Handler" podUID="b5f52023375fb391d2c3281dd8ed7969" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-31-33" Jun 25 18:44:46.544788 kubelet[3343]: E0625 18:44:46.544750 3343 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-31-33\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.545797 kubelet[3343]: E0625 18:44:46.545720 3343 kubelet.go:1890] "Failed creating a mirror pod for" 
err="pods \"kube-apiserver-ip-172-31-31-33\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:46.603258 kubelet[3343]: I0625 18:44:46.603212 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.603437 kubelet[3343]: I0625 18:44:46.603271 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.603437 kubelet[3343]: I0625 18:44:46.603300 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.603437 kubelet[3343]: I0625 18:44:46.603329 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.603437 kubelet[3343]: I0625 18:44:46.603361 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-ca-certs\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:46.603437 kubelet[3343]: I0625 18:44:46.603404 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/523c6b0acecc52d13b2076802df2fdf4-ca-certs\") pod \"kube-controller-manager-ip-172-31-31-33\" (UID: \"523c6b0acecc52d13b2076802df2fdf4\") " pod="kube-system/kube-controller-manager-ip-172-31-31-33" Jun 25 18:44:46.603651 kubelet[3343]: I0625 18:44:46.603433 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5f52023375fb391d2c3281dd8ed7969-kubeconfig\") pod \"kube-scheduler-ip-172-31-31-33\" (UID: \"b5f52023375fb391d2c3281dd8ed7969\") " pod="kube-system/kube-scheduler-ip-172-31-31-33" Jun 25 18:44:46.603651 kubelet[3343]: I0625 18:44:46.603461 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-k8s-certs\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:46.603651 kubelet[3343]: I0625 18:44:46.603519 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9c8b9507ee89d2c821d1df136b428547-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-31-33\" (UID: \"9c8b9507ee89d2c821d1df136b428547\") " pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:47.165280 kubelet[3343]: I0625 18:44:47.165231 3343 apiserver.go:52] "Watching apiserver" Jun 25 18:44:47.193792 kubelet[3343]: I0625 18:44:47.193695 3343 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 18:44:47.314820 kubelet[3343]: E0625 18:44:47.314765 3343 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-31-33\" already exists" pod="kube-system/kube-apiserver-ip-172-31-31-33" Jun 25 18:44:47.412502 kubelet[3343]: I0625 18:44:47.412463 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-31-33" podStartSLOduration=2.412409429 podCreationTimestamp="2024-06-25 18:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:47.392302461 +0000 UTC m=+1.379669398" watchObservedRunningTime="2024-06-25 18:44:47.412409429 +0000 UTC m=+1.399776367" Jun 25 18:44:47.454462 kubelet[3343]: I0625 18:44:47.452787 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-31-33" podStartSLOduration=1.452738289 podCreationTimestamp="2024-06-25 18:44:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:47.414676114 +0000 UTC m=+1.402043051" watchObservedRunningTime="2024-06-25 18:44:47.452738289 +0000 UTC m=+1.440105231" Jun 25 18:44:47.491353 kubelet[3343]: I0625 18:44:47.491123 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-31-33" podStartSLOduration=2.491071523 podCreationTimestamp="2024-06-25 18:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:44:47.454750685 +0000 UTC m=+1.442117622" watchObservedRunningTime="2024-06-25 18:44:47.491071523 +0000 UTC m=+1.478438462" Jun 25 18:44:50.995908 update_engine[2063]: I0625 18:44:50.995864 2063 update_attempter.cc:509] Updating boot flags... Jun 25 18:44:51.390269 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3413) Jun 25 18:44:52.643630 sudo[2445]: pam_unix(sudo:session): session closed for user root Jun 25 18:44:52.667044 sshd[2441]: pam_unix(sshd:session): session closed for user core Jun 25 18:44:52.672195 systemd[1]: sshd@6-172.31.31.33:22-139.178.68.195:33520.service: Deactivated successfully. Jun 25 18:44:52.677690 systemd-logind[2058]: Session 7 logged out. Waiting for processes to exit. Jun 25 18:44:52.678120 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 18:44:52.680967 systemd-logind[2058]: Removed session 7. Jun 25 18:44:58.782350 kubelet[3343]: I0625 18:44:58.782317 3343 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 18:44:58.785432 containerd[2091]: time="2024-06-25T18:44:58.784983958Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jun 25 18:44:58.785908 kubelet[3343]: I0625 18:44:58.785245 3343 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 18:44:59.644753 kubelet[3343]: I0625 18:44:59.644710 3343 topology_manager.go:215] "Topology Admit Handler" podUID="e5f9adbe-d556-4478-bbf3-abaf6a24e579" podNamespace="kube-system" podName="kube-proxy-5xcff" Jun 25 18:44:59.833583 kubelet[3343]: I0625 18:44:59.833423 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5f9adbe-d556-4478-bbf3-abaf6a24e579-lib-modules\") pod \"kube-proxy-5xcff\" (UID: \"e5f9adbe-d556-4478-bbf3-abaf6a24e579\") " pod="kube-system/kube-proxy-5xcff" Jun 25 18:44:59.833583 kubelet[3343]: I0625 18:44:59.833477 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5f9adbe-d556-4478-bbf3-abaf6a24e579-kube-proxy\") pod \"kube-proxy-5xcff\" (UID: \"e5f9adbe-d556-4478-bbf3-abaf6a24e579\") " pod="kube-system/kube-proxy-5xcff" Jun 25 18:44:59.833583 kubelet[3343]: I0625 18:44:59.833514 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mdcw\" (UniqueName: \"kubernetes.io/projected/e5f9adbe-d556-4478-bbf3-abaf6a24e579-kube-api-access-9mdcw\") pod \"kube-proxy-5xcff\" (UID: \"e5f9adbe-d556-4478-bbf3-abaf6a24e579\") " pod="kube-system/kube-proxy-5xcff" Jun 25 18:44:59.833583 kubelet[3343]: I0625 18:44:59.833536 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5f9adbe-d556-4478-bbf3-abaf6a24e579-xtables-lock\") pod \"kube-proxy-5xcff\" (UID: \"e5f9adbe-d556-4478-bbf3-abaf6a24e579\") " pod="kube-system/kube-proxy-5xcff" Jun 25 18:44:59.839348 kubelet[3343]: I0625 18:44:59.839245 3343 topology_manager.go:215] "Topology Admit Handler" podUID="6c24c90d-985b-4ce1-bb55-9604d217fcb4" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-l8c9r" Jun 25 18:44:59.935271 kubelet[3343]: I0625 18:44:59.934819 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtg8l\" (UniqueName: \"kubernetes.io/projected/6c24c90d-985b-4ce1-bb55-9604d217fcb4-kube-api-access-vtg8l\") pod \"tigera-operator-76c4974c85-l8c9r\" (UID: \"6c24c90d-985b-4ce1-bb55-9604d217fcb4\") " pod="tigera-operator/tigera-operator-76c4974c85-l8c9r" Jun 25 18:44:59.935271 kubelet[3343]: I0625 18:44:59.934980 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6c24c90d-985b-4ce1-bb55-9604d217fcb4-var-lib-calico\") pod \"tigera-operator-76c4974c85-l8c9r\" (UID: \"6c24c90d-985b-4ce1-bb55-9604d217fcb4\") " pod="tigera-operator/tigera-operator-76c4974c85-l8c9r" Jun 25 18:45:00.146190 containerd[2091]: time="2024-06-25T18:45:00.145757872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-l8c9r,Uid:6c24c90d-985b-4ce1-bb55-9604d217fcb4,Namespace:tigera-operator,Attempt:0,}" Jun 25 18:45:00.187111 containerd[2091]: time="2024-06-25T18:45:00.186852741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:00.187778 containerd[2091]: time="2024-06-25T18:45:00.186934148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:00.187778 containerd[2091]: time="2024-06-25T18:45:00.187614199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:00.188992 containerd[2091]: time="2024-06-25T18:45:00.187734637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:00.258063 containerd[2091]: time="2024-06-25T18:45:00.257600597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xcff,Uid:e5f9adbe-d556-4478-bbf3-abaf6a24e579,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:00.280131 containerd[2091]: time="2024-06-25T18:45:00.280069715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-l8c9r,Uid:6c24c90d-985b-4ce1-bb55-9604d217fcb4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c85e7c8c8342f936c469b4a113da7e98a7605ea47908f0a438b099b71529997b\"" Jun 25 18:45:00.282165 containerd[2091]: time="2024-06-25T18:45:00.282016856Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 18:45:00.326391 containerd[2091]: time="2024-06-25T18:45:00.326218717Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:00.326567 containerd[2091]: time="2024-06-25T18:45:00.326296984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:00.326567 containerd[2091]: time="2024-06-25T18:45:00.326525277Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:00.326864 containerd[2091]: time="2024-06-25T18:45:00.326559120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:00.389945 containerd[2091]: time="2024-06-25T18:45:00.389898367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5xcff,Uid:e5f9adbe-d556-4478-bbf3-abaf6a24e579,Namespace:kube-system,Attempt:0,} returns sandbox id \"c74e830e13a04b9e65d223de90eb3b90021c3d27f5393629efb2810a099d5dfb\"" Jun 25 18:45:00.393133 containerd[2091]: time="2024-06-25T18:45:00.392848267Z" level=info msg="CreateContainer within sandbox \"c74e830e13a04b9e65d223de90eb3b90021c3d27f5393629efb2810a099d5dfb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 18:45:00.423325 containerd[2091]: time="2024-06-25T18:45:00.423193031Z" level=info msg="CreateContainer within sandbox \"c74e830e13a04b9e65d223de90eb3b90021c3d27f5393629efb2810a099d5dfb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3984093c70f12408b644fe99fe6b3acfd7600c88494ce4cab071e0cfbf75db73\"" Jun 25 18:45:00.425427 containerd[2091]: time="2024-06-25T18:45:00.424099703Z" level=info msg="StartContainer for \"3984093c70f12408b644fe99fe6b3acfd7600c88494ce4cab071e0cfbf75db73\"" Jun 25 18:45:00.506855 containerd[2091]: time="2024-06-25T18:45:00.506689331Z" level=info msg="StartContainer for \"3984093c70f12408b644fe99fe6b3acfd7600c88494ce4cab071e0cfbf75db73\" returns successfully" Jun 25 18:45:00.971351 systemd[1]: run-containerd-runc-k8s.io-c85e7c8c8342f936c469b4a113da7e98a7605ea47908f0a438b099b71529997b-runc.n5McXm.mount: Deactivated successfully. Jun 25 18:45:01.508514 kubelet[3343]: I0625 18:45:01.508459 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5xcff" podStartSLOduration=2.504851323 podCreationTimestamp="2024-06-25 18:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:01.504702542 +0000 UTC m=+15.492069493" watchObservedRunningTime="2024-06-25 18:45:01.504851323 +0000 UTC m=+15.492218254" Jun 25 18:45:02.313939 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431548934.mount: Deactivated successfully. 
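The systemd entries above (and again a few entries below) name transient mount units such as var-lib-containerd-tmpmounts-containerd\x2dmount2431548934.mount, where "/" is encoded as "-" and a literal dash becomes "\x2d". Purely as a decoding aid, the sketch below approximates what `systemd-escape --unescape --path` does; the helper name is ours, not a systemd API, and it only covers the escaping seen in this log.

import re

def unescape_unit_path(unit: str) -> str:
    """Recover the filesystem path behind a systemd path-based unit name."""
    name = unit.rsplit(".", 1)[0]                 # drop the ".mount" suffix
    name = name.replace("-", "/")                 # systemd encodes "/" as "-"
    return "/" + re.sub(r"\\x([0-9a-fA-F]{2})",   # "\x2d" etc. are escaped literal bytes
                        lambda m: chr(int(m.group(1), 16)), name)

print(unescape_unit_path(r"var-lib-containerd-tmpmounts-containerd\x2dmount2431548934.mount"))
# /var/lib/containerd/tmpmounts/containerd-mount2431548934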
Jun 25 18:45:03.536843 containerd[2091]: time="2024-06-25T18:45:03.536723932Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:03.538921 containerd[2091]: time="2024-06-25T18:45:03.538799764Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076068" Jun 25 18:45:03.544430 containerd[2091]: time="2024-06-25T18:45:03.544336723Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:03.549754 containerd[2091]: time="2024-06-25T18:45:03.549707530Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:03.551780 containerd[2091]: time="2024-06-25T18:45:03.550770181Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 3.268596189s" Jun 25 18:45:03.551780 containerd[2091]: time="2024-06-25T18:45:03.550816467Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jun 25 18:45:03.556916 containerd[2091]: time="2024-06-25T18:45:03.556868053Z" level=info msg="CreateContainer within sandbox \"c85e7c8c8342f936c469b4a113da7e98a7605ea47908f0a438b099b71529997b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 18:45:03.583829 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3348283338.mount: Deactivated successfully. Jun 25 18:45:03.595685 containerd[2091]: time="2024-06-25T18:45:03.595627459Z" level=info msg="CreateContainer within sandbox \"c85e7c8c8342f936c469b4a113da7e98a7605ea47908f0a438b099b71529997b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfd7e9e137dd15df8230aa7684491f0a9e90fdbcf5fb4a98ac75f9131dd53239\"" Jun 25 18:45:03.596811 containerd[2091]: time="2024-06-25T18:45:03.596747059Z" level=info msg="StartContainer for \"dfd7e9e137dd15df8230aa7684491f0a9e90fdbcf5fb4a98ac75f9131dd53239\"" Jun 25 18:45:03.674514 systemd[1]: run-containerd-runc-k8s.io-dfd7e9e137dd15df8230aa7684491f0a9e90fdbcf5fb4a98ac75f9131dd53239-runc.Mslzaf.mount: Deactivated successfully. 
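The "Pulled image" entry above reports both the size containerd resolved for quay.io/tigera/operator:v1.34.0 (22070263 bytes) and the wall-clock pull time (3.268596189s). A quick back-of-the-envelope estimate of the effective pull rate, using only those two numbers from the log (this is a reading aid, not a metric containerd itself exposes):

size_bytes = 22_070_263        # size reported in the "Pulled image" entry
duration_s = 3.268596189       # "in 3.268596189s"
rate = size_bytes / duration_s
print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")   # ~6.75 MB/s (6.44 MiB/s)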
Jun 25 18:45:03.767362 containerd[2091]: time="2024-06-25T18:45:03.767308218Z" level=info msg="StartContainer for \"dfd7e9e137dd15df8230aa7684491f0a9e90fdbcf5fb4a98ac75f9131dd53239\" returns successfully" Jun 25 18:45:06.266392 kubelet[3343]: I0625 18:45:06.265130 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-l8c9r" podStartSLOduration=3.994210169 podCreationTimestamp="2024-06-25 18:44:59 +0000 UTC" firstStartedPulling="2024-06-25 18:45:00.281315422 +0000 UTC m=+14.268682339" lastFinishedPulling="2024-06-25 18:45:03.552182857 +0000 UTC m=+17.539549786" observedRunningTime="2024-06-25 18:45:04.463842585 +0000 UTC m=+18.451209728" watchObservedRunningTime="2024-06-25 18:45:06.265077616 +0000 UTC m=+20.252444555" Jun 25 18:45:07.109089 kubelet[3343]: I0625 18:45:07.108294 3343 topology_manager.go:215] "Topology Admit Handler" podUID="0586f3bf-5e81-48c7-824b-2c0b671b2b4f" podNamespace="calico-system" podName="calico-typha-7dd4478bf-sl96f" Jun 25 18:45:07.161163 kubelet[3343]: I0625 18:45:07.161036 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9n5t\" (UniqueName: \"kubernetes.io/projected/0586f3bf-5e81-48c7-824b-2c0b671b2b4f-kube-api-access-x9n5t\") pod \"calico-typha-7dd4478bf-sl96f\" (UID: \"0586f3bf-5e81-48c7-824b-2c0b671b2b4f\") " pod="calico-system/calico-typha-7dd4478bf-sl96f" Jun 25 18:45:07.161870 kubelet[3343]: I0625 18:45:07.161323 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0586f3bf-5e81-48c7-824b-2c0b671b2b4f-tigera-ca-bundle\") pod \"calico-typha-7dd4478bf-sl96f\" (UID: \"0586f3bf-5e81-48c7-824b-2c0b671b2b4f\") " pod="calico-system/calico-typha-7dd4478bf-sl96f" Jun 25 18:45:07.162265 kubelet[3343]: I0625 18:45:07.161983 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0586f3bf-5e81-48c7-824b-2c0b671b2b4f-typha-certs\") pod \"calico-typha-7dd4478bf-sl96f\" (UID: \"0586f3bf-5e81-48c7-824b-2c0b671b2b4f\") " pod="calico-system/calico-typha-7dd4478bf-sl96f" Jun 25 18:45:07.254630 kubelet[3343]: I0625 18:45:07.249894 3343 topology_manager.go:215] "Topology Admit Handler" podUID="76c1fad6-9c8d-4a23-9154-18bcda02dfa5" podNamespace="calico-system" podName="calico-node-67nff" Jun 25 18:45:07.262204 kubelet[3343]: I0625 18:45:07.262177 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-lib-modules\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.262470 kubelet[3343]: I0625 18:45:07.262444 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-flexvol-driver-host\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.264028 kubelet[3343]: I0625 18:45:07.263949 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-var-lib-calico\") pod \"calico-node-67nff\" 
(UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.264812 kubelet[3343]: I0625 18:45:07.264739 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-cni-log-dir\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.265102 kubelet[3343]: I0625 18:45:07.265090 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-xtables-lock\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.266417 kubelet[3343]: I0625 18:45:07.265441 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-policysync\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.266417 kubelet[3343]: I0625 18:45:07.265485 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-tigera-ca-bundle\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.266417 kubelet[3343]: I0625 18:45:07.265520 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-var-run-calico\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.266417 kubelet[3343]: I0625 18:45:07.265555 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-node-certs\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.266417 kubelet[3343]: I0625 18:45:07.265585 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-cni-net-dir\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.267261 kubelet[3343]: I0625 18:45:07.265617 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-cni-bin-dir\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 18:45:07.267261 kubelet[3343]: I0625 18:45:07.265649 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdqt2\" (UniqueName: \"kubernetes.io/projected/76c1fad6-9c8d-4a23-9154-18bcda02dfa5-kube-api-access-fdqt2\") pod \"calico-node-67nff\" (UID: \"76c1fad6-9c8d-4a23-9154-18bcda02dfa5\") " pod="calico-system/calico-node-67nff" Jun 25 
18:45:07.383460 kubelet[3343]: E0625 18:45:07.380682 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.383460 kubelet[3343]: W0625 18:45:07.380709 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.383460 kubelet[3343]: E0625 18:45:07.380830 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.386587 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.388408 kubelet[3343]: W0625 18:45:07.386608 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.386635 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.387187 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.388408 kubelet[3343]: W0625 18:45:07.387199 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.387219 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.387438 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.388408 kubelet[3343]: W0625 18:45:07.387448 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.387465 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.388408 kubelet[3343]: E0625 18:45:07.387660 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399157 kubelet[3343]: W0625 18:45:07.387669 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.387685 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.387940 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399157 kubelet[3343]: W0625 18:45:07.387950 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.387966 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.388317 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399157 kubelet[3343]: W0625 18:45:07.388327 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.388343 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.399157 kubelet[3343]: E0625 18:45:07.388834 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399157 kubelet[3343]: W0625 18:45:07.388846 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399913 kubelet[3343]: E0625 18:45:07.388863 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.399913 kubelet[3343]: E0625 18:45:07.389096 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399913 kubelet[3343]: W0625 18:45:07.389109 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399913 kubelet[3343]: E0625 18:45:07.389124 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.399913 kubelet[3343]: E0625 18:45:07.389580 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.399913 kubelet[3343]: W0625 18:45:07.389590 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.399913 kubelet[3343]: E0625 18:45:07.389648 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.404535 kubelet[3343]: E0625 18:45:07.401706 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.404535 kubelet[3343]: W0625 18:45:07.401727 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.404535 kubelet[3343]: E0625 18:45:07.401758 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.404535 kubelet[3343]: E0625 18:45:07.402876 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.404535 kubelet[3343]: W0625 18:45:07.402898 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.404535 kubelet[3343]: E0625 18:45:07.402941 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.404535 kubelet[3343]: I0625 18:45:07.402906 3343 topology_manager.go:215] "Topology Admit Handler" podUID="11739506-f4c5-4731-8420-c05072fd4c97" podNamespace="calico-system" podName="csi-node-driver-86wj5" Jun 25 18:45:07.404535 kubelet[3343]: E0625 18:45:07.404096 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:07.406751 kubelet[3343]: E0625 18:45:07.406698 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.406751 kubelet[3343]: W0625 18:45:07.406750 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.407060 kubelet[3343]: E0625 18:45:07.406776 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.409391 kubelet[3343]: E0625 18:45:07.407723 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.409391 kubelet[3343]: W0625 18:45:07.407739 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.409391 kubelet[3343]: E0625 18:45:07.407762 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.427349 containerd[2091]: time="2024-06-25T18:45:07.426332819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd4478bf-sl96f,Uid:0586f3bf-5e81-48c7-824b-2c0b671b2b4f,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:07.468793 kubelet[3343]: E0625 18:45:07.468562 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.469867 kubelet[3343]: W0625 18:45:07.468587 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.469867 kubelet[3343]: E0625 18:45:07.469111 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.472312 kubelet[3343]: E0625 18:45:07.471944 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.472312 kubelet[3343]: W0625 18:45:07.471964 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.472312 kubelet[3343]: E0625 18:45:07.472043 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.473558 kubelet[3343]: E0625 18:45:07.472907 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.473558 kubelet[3343]: W0625 18:45:07.472931 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.473558 kubelet[3343]: E0625 18:45:07.472953 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.475393 kubelet[3343]: E0625 18:45:07.475127 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.475799 kubelet[3343]: W0625 18:45:07.475647 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.476204 kubelet[3343]: E0625 18:45:07.476058 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.481395 kubelet[3343]: E0625 18:45:07.480027 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.481395 kubelet[3343]: W0625 18:45:07.480048 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.481395 kubelet[3343]: E0625 18:45:07.480080 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.483858 kubelet[3343]: E0625 18:45:07.482645 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.483858 kubelet[3343]: W0625 18:45:07.482665 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.483858 kubelet[3343]: E0625 18:45:07.482882 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.483858 kubelet[3343]: E0625 18:45:07.483742 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.483858 kubelet[3343]: W0625 18:45:07.483756 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.491431 kubelet[3343]: I0625 18:45:07.488400 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/11739506-f4c5-4731-8420-c05072fd4c97-varrun\") pod \"csi-node-driver-86wj5\" (UID: \"11739506-f4c5-4731-8420-c05072fd4c97\") " pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:07.491431 kubelet[3343]: E0625 18:45:07.488496 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.491431 kubelet[3343]: E0625 18:45:07.488633 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.491431 kubelet[3343]: W0625 18:45:07.488656 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.497857 kubelet[3343]: E0625 18:45:07.490728 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.497857 kubelet[3343]: E0625 18:45:07.495533 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.497857 kubelet[3343]: W0625 18:45:07.495551 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.497857 kubelet[3343]: E0625 18:45:07.496411 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.497857 kubelet[3343]: E0625 18:45:07.496819 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.497857 kubelet[3343]: W0625 18:45:07.496833 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.497857 kubelet[3343]: E0625 18:45:07.497585 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.504816 kubelet[3343]: E0625 18:45:07.500400 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.504816 kubelet[3343]: W0625 18:45:07.500424 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.504816 kubelet[3343]: E0625 18:45:07.500483 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.509806 kubelet[3343]: E0625 18:45:07.505415 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.509806 kubelet[3343]: W0625 18:45:07.505558 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.509806 kubelet[3343]: E0625 18:45:07.506972 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.512549 kubelet[3343]: E0625 18:45:07.512516 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.512723 kubelet[3343]: W0625 18:45:07.512706 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.512836 kubelet[3343]: E0625 18:45:07.512824 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.514356 kubelet[3343]: E0625 18:45:07.514339 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.514523 kubelet[3343]: W0625 18:45:07.514506 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.514633 kubelet[3343]: E0625 18:45:07.514622 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.515110 kubelet[3343]: E0625 18:45:07.515096 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.515326 kubelet[3343]: W0625 18:45:07.515312 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.515980 kubelet[3343]: E0625 18:45:07.515734 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.516335 kubelet[3343]: E0625 18:45:07.516323 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.516494 kubelet[3343]: W0625 18:45:07.516479 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.517192 kubelet[3343]: E0625 18:45:07.516619 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.518211 kubelet[3343]: E0625 18:45:07.517983 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.518211 kubelet[3343]: W0625 18:45:07.517998 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.518211 kubelet[3343]: E0625 18:45:07.518017 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.520311 kubelet[3343]: E0625 18:45:07.519588 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.520311 kubelet[3343]: W0625 18:45:07.519603 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.520311 kubelet[3343]: E0625 18:45:07.519621 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.520311 kubelet[3343]: E0625 18:45:07.519832 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.520311 kubelet[3343]: W0625 18:45:07.519841 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.520311 kubelet[3343]: E0625 18:45:07.519857 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.521034 kubelet[3343]: E0625 18:45:07.520748 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.521034 kubelet[3343]: W0625 18:45:07.520761 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.521034 kubelet[3343]: E0625 18:45:07.520778 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.522027 kubelet[3343]: E0625 18:45:07.521646 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.522027 kubelet[3343]: W0625 18:45:07.521659 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.522027 kubelet[3343]: E0625 18:45:07.521676 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.522655 kubelet[3343]: E0625 18:45:07.522321 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.522655 kubelet[3343]: W0625 18:45:07.522334 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.522655 kubelet[3343]: E0625 18:45:07.522352 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.523718 kubelet[3343]: E0625 18:45:07.523174 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.523718 kubelet[3343]: W0625 18:45:07.523187 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.523718 kubelet[3343]: E0625 18:45:07.523293 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.524825 kubelet[3343]: E0625 18:45:07.524543 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.524825 kubelet[3343]: W0625 18:45:07.524557 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.524825 kubelet[3343]: E0625 18:45:07.524574 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.525793 kubelet[3343]: E0625 18:45:07.525519 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.525793 kubelet[3343]: W0625 18:45:07.525533 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.525793 kubelet[3343]: E0625 18:45:07.525558 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.547418 containerd[2091]: time="2024-06-25T18:45:07.547245963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:07.547418 containerd[2091]: time="2024-06-25T18:45:07.547333298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:07.552433 containerd[2091]: time="2024-06-25T18:45:07.549654089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:07.552433 containerd[2091]: time="2024-06-25T18:45:07.549701261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:07.569562 containerd[2091]: time="2024-06-25T18:45:07.567160265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-67nff,Uid:76c1fad6-9c8d-4a23-9154-18bcda02dfa5,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:07.614727 kubelet[3343]: E0625 18:45:07.614668 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.614727 kubelet[3343]: W0625 18:45:07.614694 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.614727 kubelet[3343]: E0625 18:45:07.614720 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.614969 kubelet[3343]: I0625 18:45:07.614758 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgstx\" (UniqueName: \"kubernetes.io/projected/11739506-f4c5-4731-8420-c05072fd4c97-kube-api-access-qgstx\") pod \"csi-node-driver-86wj5\" (UID: \"11739506-f4c5-4731-8420-c05072fd4c97\") " pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:07.624619 kubelet[3343]: E0625 18:45:07.624588 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.624619 kubelet[3343]: W0625 18:45:07.624616 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.624821 kubelet[3343]: E0625 18:45:07.624648 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.624821 kubelet[3343]: I0625 18:45:07.624703 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11739506-f4c5-4731-8420-c05072fd4c97-kubelet-dir\") pod \"csi-node-driver-86wj5\" (UID: \"11739506-f4c5-4731-8420-c05072fd4c97\") " pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:07.625459 kubelet[3343]: E0625 18:45:07.625439 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.625611 kubelet[3343]: W0625 18:45:07.625595 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.627699 kubelet[3343]: E0625 18:45:07.627674 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.637892 kubelet[3343]: E0625 18:45:07.634763 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.637892 kubelet[3343]: W0625 18:45:07.634787 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.637892 kubelet[3343]: E0625 18:45:07.635773 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.637892 kubelet[3343]: W0625 18:45:07.635790 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.637892 kubelet[3343]: E0625 18:45:07.637327 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.637892 kubelet[3343]: W0625 18:45:07.637341 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.643724 kubelet[3343]: E0625 18:45:07.638009 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.643724 kubelet[3343]: W0625 18:45:07.638021 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.643724 kubelet[3343]: E0625 18:45:07.638045 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.645717 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.645762 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.645775 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.648574 kubelet[3343]: I0625 18:45:07.645816 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11739506-f4c5-4731-8420-c05072fd4c97-registration-dir\") pod \"csi-node-driver-86wj5\" (UID: \"11739506-f4c5-4731-8420-c05072fd4c97\") " pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.645998 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.648574 kubelet[3343]: W0625 18:45:07.646018 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.646043 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.648574 kubelet[3343]: E0625 18:45:07.648084 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.648574 kubelet[3343]: W0625 18:45:07.648100 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.653240 kubelet[3343]: E0625 18:45:07.648418 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.653240 kubelet[3343]: E0625 18:45:07.650587 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.662675 kubelet[3343]: W0625 18:45:07.654261 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.666758 kubelet[3343]: E0625 18:45:07.664136 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.667292 kubelet[3343]: E0625 18:45:07.666903 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.667292 kubelet[3343]: W0625 18:45:07.666922 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.667292 kubelet[3343]: E0625 18:45:07.666954 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.668133 kubelet[3343]: E0625 18:45:07.667774 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.668133 kubelet[3343]: W0625 18:45:07.667789 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.668133 kubelet[3343]: E0625 18:45:07.667875 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.675686 kubelet[3343]: E0625 18:45:07.671154 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.675686 kubelet[3343]: W0625 18:45:07.671278 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.675686 kubelet[3343]: E0625 18:45:07.671441 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.675686 kubelet[3343]: I0625 18:45:07.671486 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11739506-f4c5-4731-8420-c05072fd4c97-socket-dir\") pod \"csi-node-driver-86wj5\" (UID: \"11739506-f4c5-4731-8420-c05072fd4c97\") " pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:07.675686 kubelet[3343]: E0625 18:45:07.675551 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.675686 kubelet[3343]: W0625 18:45:07.675639 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.675686 kubelet[3343]: E0625 18:45:07.675682 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.679031 kubelet[3343]: E0625 18:45:07.677520 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.679031 kubelet[3343]: W0625 18:45:07.677540 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.679031 kubelet[3343]: E0625 18:45:07.677571 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.679031 kubelet[3343]: E0625 18:45:07.678508 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.679031 kubelet[3343]: W0625 18:45:07.678523 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.679031 kubelet[3343]: E0625 18:45:07.678661 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.682295 kubelet[3343]: E0625 18:45:07.679713 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.682295 kubelet[3343]: W0625 18:45:07.679739 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.682295 kubelet[3343]: E0625 18:45:07.679759 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.701815 containerd[2091]: time="2024-06-25T18:45:07.699210306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:07.704167 containerd[2091]: time="2024-06-25T18:45:07.703102348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:07.704930 containerd[2091]: time="2024-06-25T18:45:07.704836775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:07.705526 containerd[2091]: time="2024-06-25T18:45:07.705095676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:07.783447 kubelet[3343]: E0625 18:45:07.783333 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.784137 kubelet[3343]: W0625 18:45:07.783359 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.784137 kubelet[3343]: E0625 18:45:07.783790 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.787710 kubelet[3343]: E0625 18:45:07.787245 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.787710 kubelet[3343]: W0625 18:45:07.787267 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.787710 kubelet[3343]: E0625 18:45:07.787304 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.788784 kubelet[3343]: E0625 18:45:07.788461 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.788784 kubelet[3343]: W0625 18:45:07.788478 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.789540 kubelet[3343]: E0625 18:45:07.789066 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.789540 kubelet[3343]: E0625 18:45:07.789187 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.789540 kubelet[3343]: W0625 18:45:07.789197 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.790083 kubelet[3343]: E0625 18:45:07.789853 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.790397 kubelet[3343]: E0625 18:45:07.790311 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.791064 kubelet[3343]: W0625 18:45:07.790568 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.791064 kubelet[3343]: E0625 18:45:07.790604 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.791612 kubelet[3343]: E0625 18:45:07.791460 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.791612 kubelet[3343]: W0625 18:45:07.791474 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.792475 kubelet[3343]: E0625 18:45:07.792273 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.792796 kubelet[3343]: E0625 18:45:07.792348 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.792796 kubelet[3343]: W0625 18:45:07.792649 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.792796 kubelet[3343]: E0625 18:45:07.792668 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.794402 kubelet[3343]: E0625 18:45:07.794008 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.794402 kubelet[3343]: W0625 18:45:07.794053 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.794402 kubelet[3343]: E0625 18:45:07.794256 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.796656 kubelet[3343]: E0625 18:45:07.795945 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.796656 kubelet[3343]: W0625 18:45:07.795959 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.796656 kubelet[3343]: E0625 18:45:07.796075 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.798092 kubelet[3343]: E0625 18:45:07.797703 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.798092 kubelet[3343]: W0625 18:45:07.797813 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.798637 kubelet[3343]: E0625 18:45:07.798489 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.798968 kubelet[3343]: E0625 18:45:07.798938 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.798968 kubelet[3343]: W0625 18:45:07.798951 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.799399 kubelet[3343]: E0625 18:45:07.799172 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.799671 kubelet[3343]: E0625 18:45:07.799594 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.799671 kubelet[3343]: W0625 18:45:07.799608 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.801514 kubelet[3343]: E0625 18:45:07.801327 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.801891 kubelet[3343]: E0625 18:45:07.801779 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.802126 kubelet[3343]: W0625 18:45:07.801987 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.802682 kubelet[3343]: E0625 18:45:07.802600 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.806959 kubelet[3343]: E0625 18:45:07.806550 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.806959 kubelet[3343]: W0625 18:45:07.806570 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.809505 kubelet[3343]: E0625 18:45:07.809191 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.811488 kubelet[3343]: W0625 18:45:07.809658 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.814079 kubelet[3343]: E0625 18:45:07.814061 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.828246 kubelet[3343]: E0625 18:45:07.815407 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.830688 kubelet[3343]: E0625 18:45:07.828835 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.830688 kubelet[3343]: W0625 18:45:07.828856 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.830688 kubelet[3343]: E0625 18:45:07.828887 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.841397 kubelet[3343]: E0625 18:45:07.841167 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.843712 kubelet[3343]: W0625 18:45:07.843675 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.844211 kubelet[3343]: E0625 18:45:07.844193 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:07.844481 kubelet[3343]: E0625 18:45:07.844429 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.844481 kubelet[3343]: W0625 18:45:07.844444 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.845123 kubelet[3343]: E0625 18:45:07.844722 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.846803 kubelet[3343]: E0625 18:45:07.846380 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.846803 kubelet[3343]: W0625 18:45:07.846397 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.846803 kubelet[3343]: E0625 18:45:07.846443 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.847152 kubelet[3343]: E0625 18:45:07.847133 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.847349 kubelet[3343]: W0625 18:45:07.847332 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.847556 kubelet[3343]: E0625 18:45:07.847424 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:07.923602 kubelet[3343]: E0625 18:45:07.921312 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:07.923602 kubelet[3343]: W0625 18:45:07.921350 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:07.923602 kubelet[3343]: E0625 18:45:07.921403 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:08.000131 containerd[2091]: time="2024-06-25T18:45:07.999900694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7dd4478bf-sl96f,Uid:0586f3bf-5e81-48c7-824b-2c0b671b2b4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e1ba4c6cced940eb1c4cc3c561cea410fd6523342ab428e6ff53b282e455456\"" Jun 25 18:45:08.004136 containerd[2091]: time="2024-06-25T18:45:08.003936847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 18:45:08.032809 containerd[2091]: time="2024-06-25T18:45:08.032638843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-67nff,Uid:76c1fad6-9c8d-4a23-9154-18bcda02dfa5,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\"" Jun 25 18:45:09.230062 kubelet[3343]: E0625 18:45:09.229780 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:11.231202 kubelet[3343]: E0625 18:45:11.229775 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:11.475127 containerd[2091]: time="2024-06-25T18:45:11.475075578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:11.480976 containerd[2091]: time="2024-06-25T18:45:11.480480388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jun 25 18:45:11.483053 containerd[2091]: time="2024-06-25T18:45:11.481946586Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:11.488130 containerd[2091]: time="2024-06-25T18:45:11.487686427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:11.490681 containerd[2091]: time="2024-06-25T18:45:11.489503876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.485505777s" Jun 25 18:45:11.490681 containerd[2091]: time="2024-06-25T18:45:11.489613780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jun 25 18:45:11.493195 containerd[2091]: time="2024-06-25T18:45:11.492267918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 18:45:11.530583 containerd[2091]: time="2024-06-25T18:45:11.530527545Z" level=info msg="CreateContainer within sandbox 
\"2e1ba4c6cced940eb1c4cc3c561cea410fd6523342ab428e6ff53b282e455456\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 18:45:11.565550 containerd[2091]: time="2024-06-25T18:45:11.565360963Z" level=info msg="CreateContainer within sandbox \"2e1ba4c6cced940eb1c4cc3c561cea410fd6523342ab428e6ff53b282e455456\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"28e9d554d347a6717be7fc7ae4a05b837b8727a458b5e335980eaeefe7972f67\"" Jun 25 18:45:11.566913 containerd[2091]: time="2024-06-25T18:45:11.566775518Z" level=info msg="StartContainer for \"28e9d554d347a6717be7fc7ae4a05b837b8727a458b5e335980eaeefe7972f67\"" Jun 25 18:45:11.864474 containerd[2091]: time="2024-06-25T18:45:11.864302292Z" level=info msg="StartContainer for \"28e9d554d347a6717be7fc7ae4a05b837b8727a458b5e335980eaeefe7972f67\" returns successfully" Jun 25 18:45:12.503387 kubelet[3343]: E0625 18:45:12.502063 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.503387 kubelet[3343]: W0625 18:45:12.502088 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.503387 kubelet[3343]: E0625 18:45:12.502129 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.503387 kubelet[3343]: E0625 18:45:12.502900 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.503387 kubelet[3343]: W0625 18:45:12.502914 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.503387 kubelet[3343]: E0625 18:45:12.502937 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.503451 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.504149 kubelet[3343]: W0625 18:45:12.503463 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.503505 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.503751 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.504149 kubelet[3343]: W0625 18:45:12.503760 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.503774 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.504109 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.504149 kubelet[3343]: W0625 18:45:12.504120 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.504149 kubelet[3343]: E0625 18:45:12.504137 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.504744 kubelet[3343]: E0625 18:45:12.504335 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.504744 kubelet[3343]: W0625 18:45:12.504343 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.504744 kubelet[3343]: E0625 18:45:12.504361 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.513412 kubelet[3343]: E0625 18:45:12.504799 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.513412 kubelet[3343]: W0625 18:45:12.507379 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.513412 kubelet[3343]: E0625 18:45:12.507404 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.518341 kubelet[3343]: E0625 18:45:12.516864 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.518341 kubelet[3343]: W0625 18:45:12.517203 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.518341 kubelet[3343]: E0625 18:45:12.517241 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.518961 kubelet[3343]: E0625 18:45:12.518406 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.518961 kubelet[3343]: W0625 18:45:12.518420 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.518961 kubelet[3343]: E0625 18:45:12.518442 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.523157 kubelet[3343]: E0625 18:45:12.522721 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.523157 kubelet[3343]: W0625 18:45:12.522835 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.523157 kubelet[3343]: E0625 18:45:12.522864 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.523712 kubelet[3343]: E0625 18:45:12.523569 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.523712 kubelet[3343]: W0625 18:45:12.523582 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.523712 kubelet[3343]: E0625 18:45:12.523602 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.525847 kubelet[3343]: E0625 18:45:12.525474 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.525847 kubelet[3343]: W0625 18:45:12.525492 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.525847 kubelet[3343]: E0625 18:45:12.525525 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.527428 kubelet[3343]: E0625 18:45:12.527347 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.528437 kubelet[3343]: W0625 18:45:12.527665 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.528437 kubelet[3343]: E0625 18:45:12.527696 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.531395 kubelet[3343]: E0625 18:45:12.529754 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.531395 kubelet[3343]: W0625 18:45:12.529803 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.531395 kubelet[3343]: E0625 18:45:12.529824 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.533655 kubelet[3343]: E0625 18:45:12.531502 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.533655 kubelet[3343]: W0625 18:45:12.531514 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.533655 kubelet[3343]: E0625 18:45:12.531533 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.543744 kubelet[3343]: I0625 18:45:12.543706 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7dd4478bf-sl96f" podStartSLOduration=2.054693348 podCreationTimestamp="2024-06-25 18:45:07 +0000 UTC" firstStartedPulling="2024-06-25 18:45:08.002241224 +0000 UTC m=+21.989608156" lastFinishedPulling="2024-06-25 18:45:11.491202864 +0000 UTC m=+25.478569798" observedRunningTime="2024-06-25 18:45:12.521087722 +0000 UTC m=+26.508454660" watchObservedRunningTime="2024-06-25 18:45:12.54365499 +0000 UTC m=+26.531021955" Jun 25 18:45:12.567474 kubelet[3343]: E0625 18:45:12.567430 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.567474 kubelet[3343]: W0625 18:45:12.567470 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.567685 kubelet[3343]: E0625 18:45:12.567501 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.568749 kubelet[3343]: E0625 18:45:12.568714 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.568749 kubelet[3343]: W0625 18:45:12.568738 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.568918 kubelet[3343]: E0625 18:45:12.568777 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.570408 kubelet[3343]: E0625 18:45:12.570143 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.570408 kubelet[3343]: W0625 18:45:12.570161 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.570408 kubelet[3343]: E0625 18:45:12.570200 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.572755 kubelet[3343]: E0625 18:45:12.572730 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.572755 kubelet[3343]: W0625 18:45:12.572753 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.577869 kubelet[3343]: E0625 18:45:12.577838 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.579619 kubelet[3343]: E0625 18:45:12.579595 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.579619 kubelet[3343]: W0625 18:45:12.579618 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.579797 kubelet[3343]: E0625 18:45:12.579651 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.581439 kubelet[3343]: E0625 18:45:12.581415 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.581439 kubelet[3343]: W0625 18:45:12.581437 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.581591 kubelet[3343]: E0625 18:45:12.581557 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.581828 kubelet[3343]: E0625 18:45:12.581802 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.582087 kubelet[3343]: W0625 18:45:12.582002 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.582301 kubelet[3343]: E0625 18:45:12.582174 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.582725 kubelet[3343]: E0625 18:45:12.582632 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.583011 kubelet[3343]: W0625 18:45:12.582996 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.583405 kubelet[3343]: E0625 18:45:12.583313 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.584028 kubelet[3343]: E0625 18:45:12.583852 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.584028 kubelet[3343]: W0625 18:45:12.583865 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.584596 kubelet[3343]: E0625 18:45:12.584425 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.584596 kubelet[3343]: E0625 18:45:12.584493 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.584596 kubelet[3343]: W0625 18:45:12.584502 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.585298 kubelet[3343]: E0625 18:45:12.585205 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.585298 kubelet[3343]: E0625 18:45:12.585274 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.585298 kubelet[3343]: W0625 18:45:12.585283 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.586643 kubelet[3343]: E0625 18:45:12.585705 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.587196 kubelet[3343]: E0625 18:45:12.587096 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.587307 kubelet[3343]: W0625 18:45:12.587293 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.587612 kubelet[3343]: E0625 18:45:12.587588 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.588098 kubelet[3343]: E0625 18:45:12.588085 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.588274 kubelet[3343]: W0625 18:45:12.588261 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.588553 kubelet[3343]: E0625 18:45:12.588540 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:12.589287 kubelet[3343]: E0625 18:45:12.589274 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.589507 kubelet[3343]: W0625 18:45:12.589492 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.589945 kubelet[3343]: E0625 18:45:12.589759 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.591685 kubelet[3343]: E0625 18:45:12.591671 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.591960 kubelet[3343]: W0625 18:45:12.591783 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.592323 kubelet[3343]: E0625 18:45:12.592063 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.592669 kubelet[3343]: E0625 18:45:12.592655 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.592771 kubelet[3343]: W0625 18:45:12.592760 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.592955 kubelet[3343]: E0625 18:45:12.592944 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.593703 kubelet[3343]: E0625 18:45:12.593691 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.594018 kubelet[3343]: W0625 18:45:12.593801 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.594396 kubelet[3343]: E0625 18:45:12.594286 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 18:45:12.594850 kubelet[3343]: E0625 18:45:12.594514 3343 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 18:45:12.594850 kubelet[3343]: W0625 18:45:12.594526 3343 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 18:45:12.594850 kubelet[3343]: E0625 18:45:12.594543 3343 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 18:45:13.014403 containerd[2091]: time="2024-06-25T18:45:13.012110130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:13.019692 containerd[2091]: time="2024-06-25T18:45:13.019631356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jun 25 18:45:13.022549 containerd[2091]: time="2024-06-25T18:45:13.022303564Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:13.032392 containerd[2091]: time="2024-06-25T18:45:13.031343598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:13.035983 containerd[2091]: time="2024-06-25T18:45:13.033213691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.54022642s" Jun 25 18:45:13.036294 containerd[2091]: time="2024-06-25T18:45:13.036168955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jun 25 18:45:13.045400 containerd[2091]: time="2024-06-25T18:45:13.043162866Z" level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 18:45:13.079600 containerd[2091]: time="2024-06-25T18:45:13.079176920Z" level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d\"" Jun 25 18:45:13.083719 containerd[2091]: time="2024-06-25T18:45:13.082762997Z" level=info msg="StartContainer for \"03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d\"" Jun 25 18:45:13.230271 kubelet[3343]: E0625 18:45:13.230227 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:13.278252 containerd[2091]: time="2024-06-25T18:45:13.278134916Z" level=info msg="StartContainer for \"03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d\" returns successfully" Jun 25 18:45:13.394651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d-rootfs.mount: Deactivated successfully. 
Jun 25 18:45:13.532853 containerd[2091]: time="2024-06-25T18:45:13.525691189Z" level=info msg="shim disconnected" id=03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d namespace=k8s.io Jun 25 18:45:13.532853 containerd[2091]: time="2024-06-25T18:45:13.532767025Z" level=warning msg="cleaning up after shim disconnected" id=03bebbc5de41b9c8610022f4dd76a234b419c68f52482deb3911f21047a5951d namespace=k8s.io Jun 25 18:45:13.532853 containerd[2091]: time="2024-06-25T18:45:13.532786835Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:14.500633 containerd[2091]: time="2024-06-25T18:45:14.500559919Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 18:45:15.229803 kubelet[3343]: E0625 18:45:15.229755 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:17.230598 kubelet[3343]: E0625 18:45:17.230546 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:19.229933 kubelet[3343]: E0625 18:45:19.229895 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:19.780664 containerd[2091]: time="2024-06-25T18:45:19.780565742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.785277 containerd[2091]: time="2024-06-25T18:45:19.785130575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jun 25 18:45:19.789755 containerd[2091]: time="2024-06-25T18:45:19.789663953Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.794930 containerd[2091]: time="2024-06-25T18:45:19.794880593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:19.796079 containerd[2091]: time="2024-06-25T18:45:19.795883852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.295269354s" Jun 25 18:45:19.796079 containerd[2091]: time="2024-06-25T18:45:19.795928550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jun 25 18:45:19.802122 containerd[2091]: time="2024-06-25T18:45:19.802041496Z" 
level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 18:45:19.923217 containerd[2091]: time="2024-06-25T18:45:19.923159426Z" level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac\"" Jun 25 18:45:19.925884 containerd[2091]: time="2024-06-25T18:45:19.923866797Z" level=info msg="StartContainer for \"1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac\"" Jun 25 18:45:20.041003 systemd[1]: run-containerd-runc-k8s.io-1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac-runc.O9O0CF.mount: Deactivated successfully. Jun 25 18:45:20.070641 containerd[2091]: time="2024-06-25T18:45:20.070597090Z" level=info msg="StartContainer for \"1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac\" returns successfully" Jun 25 18:45:21.133016 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac-rootfs.mount: Deactivated successfully. Jun 25 18:45:21.136635 containerd[2091]: time="2024-06-25T18:45:21.136293462Z" level=info msg="shim disconnected" id=1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac namespace=k8s.io Jun 25 18:45:21.140137 containerd[2091]: time="2024-06-25T18:45:21.138799263Z" level=warning msg="cleaning up after shim disconnected" id=1532199f51cb9ec63b79db7b692570b02687e22a7b31664edfa2d72d279a80ac namespace=k8s.io Jun 25 18:45:21.140137 containerd[2091]: time="2024-06-25T18:45:21.138837836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 18:45:21.168458 kubelet[3343]: I0625 18:45:21.168329 3343 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 18:45:21.194731 containerd[2091]: time="2024-06-25T18:45:21.193452778Z" level=warning msg="cleanup warnings time=\"2024-06-25T18:45:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jun 25 18:45:21.225018 kubelet[3343]: I0625 18:45:21.224430 3343 topology_manager.go:215] "Topology Admit Handler" podUID="8434a9e8-834b-4b31-ad53-9bb8bd09d9ce" podNamespace="kube-system" podName="coredns-5dd5756b68-qllp6" Jun 25 18:45:21.249935 containerd[2091]: time="2024-06-25T18:45:21.244830018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86wj5,Uid:11739506-f4c5-4731-8420-c05072fd4c97,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:21.250091 kubelet[3343]: I0625 18:45:21.248996 3343 topology_manager.go:215] "Topology Admit Handler" podUID="aa3cf42f-c171-493a-bb2e-92d413f86d05" podNamespace="calico-system" podName="calico-kube-controllers-86cfbfdcf-nvcqf" Jun 25 18:45:21.250091 kubelet[3343]: I0625 18:45:21.249892 3343 topology_manager.go:215] "Topology Admit Handler" podUID="8b7a1d64-49a7-49eb-97e6-0860978799bc" podNamespace="kube-system" podName="coredns-5dd5756b68-z8v6z" Jun 25 18:45:21.294477 kubelet[3343]: I0625 18:45:21.290155 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8434a9e8-834b-4b31-ad53-9bb8bd09d9ce-config-volume\") pod \"coredns-5dd5756b68-qllp6\" (UID: 
\"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce\") " pod="kube-system/coredns-5dd5756b68-qllp6" Jun 25 18:45:21.302100 kubelet[3343]: I0625 18:45:21.301551 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4rrf\" (UniqueName: \"kubernetes.io/projected/8434a9e8-834b-4b31-ad53-9bb8bd09d9ce-kube-api-access-w4rrf\") pod \"coredns-5dd5756b68-qllp6\" (UID: \"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce\") " pod="kube-system/coredns-5dd5756b68-qllp6" Jun 25 18:45:21.405812 kubelet[3343]: I0625 18:45:21.405362 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb67c\" (UniqueName: \"kubernetes.io/projected/aa3cf42f-c171-493a-bb2e-92d413f86d05-kube-api-access-mb67c\") pod \"calico-kube-controllers-86cfbfdcf-nvcqf\" (UID: \"aa3cf42f-c171-493a-bb2e-92d413f86d05\") " pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" Jun 25 18:45:21.405812 kubelet[3343]: I0625 18:45:21.405490 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa3cf42f-c171-493a-bb2e-92d413f86d05-tigera-ca-bundle\") pod \"calico-kube-controllers-86cfbfdcf-nvcqf\" (UID: \"aa3cf42f-c171-493a-bb2e-92d413f86d05\") " pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" Jun 25 18:45:21.405812 kubelet[3343]: I0625 18:45:21.405540 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7228c\" (UniqueName: \"kubernetes.io/projected/8b7a1d64-49a7-49eb-97e6-0860978799bc-kube-api-access-7228c\") pod \"coredns-5dd5756b68-z8v6z\" (UID: \"8b7a1d64-49a7-49eb-97e6-0860978799bc\") " pod="kube-system/coredns-5dd5756b68-z8v6z" Jun 25 18:45:21.405812 kubelet[3343]: I0625 18:45:21.405593 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b7a1d64-49a7-49eb-97e6-0860978799bc-config-volume\") pod \"coredns-5dd5756b68-z8v6z\" (UID: \"8b7a1d64-49a7-49eb-97e6-0860978799bc\") " pod="kube-system/coredns-5dd5756b68-z8v6z" Jun 25 18:45:21.540948 containerd[2091]: time="2024-06-25T18:45:21.540898237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qllp6,Uid:8434a9e8-834b-4b31-ad53-9bb8bd09d9ce,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:21.569305 containerd[2091]: time="2024-06-25T18:45:21.569258424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 18:45:21.573218 containerd[2091]: time="2024-06-25T18:45:21.571837903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z8v6z,Uid:8b7a1d64-49a7-49eb-97e6-0860978799bc,Namespace:kube-system,Attempt:0,}" Jun 25 18:45:21.586811 containerd[2091]: time="2024-06-25T18:45:21.586694725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cfbfdcf-nvcqf,Uid:aa3cf42f-c171-493a-bb2e-92d413f86d05,Namespace:calico-system,Attempt:0,}" Jun 25 18:45:21.625616 containerd[2091]: time="2024-06-25T18:45:21.625506602Z" level=error msg="Failed to destroy network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.640786 containerd[2091]: time="2024-06-25T18:45:21.640700844Z" level=error 
msg="encountered an error cleaning up failed sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.673236 containerd[2091]: time="2024-06-25T18:45:21.672191736Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86wj5,Uid:11739506-f4c5-4731-8420-c05072fd4c97,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.681933 kubelet[3343]: E0625 18:45:21.681903 3343 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.682086 kubelet[3343]: E0625 18:45:21.681995 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:21.682086 kubelet[3343]: E0625 18:45:21.682041 3343 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-86wj5" Jun 25 18:45:21.682384 kubelet[3343]: E0625 18:45:21.682350 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-86wj5_calico-system(11739506-f4c5-4731-8420-c05072fd4c97)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-86wj5_calico-system(11739506-f4c5-4731-8420-c05072fd4c97)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:21.777735 containerd[2091]: time="2024-06-25T18:45:21.777681323Z" level=error msg="Failed to destroy network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.779647 containerd[2091]: 
time="2024-06-25T18:45:21.779597716Z" level=error msg="encountered an error cleaning up failed sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.779937 containerd[2091]: time="2024-06-25T18:45:21.779827670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qllp6,Uid:8434a9e8-834b-4b31-ad53-9bb8bd09d9ce,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.780245 kubelet[3343]: E0625 18:45:21.780197 3343 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.780387 kubelet[3343]: E0625 18:45:21.780262 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qllp6" Jun 25 18:45:21.780387 kubelet[3343]: E0625 18:45:21.780289 3343 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-qllp6" Jun 25 18:45:21.780387 kubelet[3343]: E0625 18:45:21.780352 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-qllp6_kube-system(8434a9e8-834b-4b31-ad53-9bb8bd09d9ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-qllp6_kube-system(8434a9e8-834b-4b31-ad53-9bb8bd09d9ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qllp6" podUID="8434a9e8-834b-4b31-ad53-9bb8bd09d9ce" Jun 25 18:45:21.846215 containerd[2091]: time="2024-06-25T18:45:21.846157081Z" level=error msg="Failed to destroy network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jun 25 18:45:21.846763 containerd[2091]: time="2024-06-25T18:45:21.846670907Z" level=error msg="encountered an error cleaning up failed sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.846873 containerd[2091]: time="2024-06-25T18:45:21.846784801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cfbfdcf-nvcqf,Uid:aa3cf42f-c171-493a-bb2e-92d413f86d05,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.847832 kubelet[3343]: E0625 18:45:21.847724 3343 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.848111 kubelet[3343]: E0625 18:45:21.847995 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" Jun 25 18:45:21.848111 kubelet[3343]: E0625 18:45:21.848053 3343 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" Jun 25 18:45:21.848441 kubelet[3343]: E0625 18:45:21.848248 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86cfbfdcf-nvcqf_calico-system(aa3cf42f-c171-493a-bb2e-92d413f86d05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86cfbfdcf-nvcqf_calico-system(aa3cf42f-c171-493a-bb2e-92d413f86d05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" podUID="aa3cf42f-c171-493a-bb2e-92d413f86d05" Jun 25 18:45:21.851662 containerd[2091]: time="2024-06-25T18:45:21.851501672Z" level=error msg="Failed to destroy network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.852136 containerd[2091]: time="2024-06-25T18:45:21.851989102Z" level=error msg="encountered an error cleaning up failed sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.853607 containerd[2091]: time="2024-06-25T18:45:21.852126269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z8v6z,Uid:8b7a1d64-49a7-49eb-97e6-0860978799bc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.854272 kubelet[3343]: E0625 18:45:21.852436 3343 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:21.854272 kubelet[3343]: E0625 18:45:21.853340 3343 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-z8v6z" Jun 25 18:45:21.854272 kubelet[3343]: E0625 18:45:21.853506 3343 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-z8v6z" Jun 25 18:45:21.856052 kubelet[3343]: E0625 18:45:21.853600 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-z8v6z_kube-system(8b7a1d64-49a7-49eb-97e6-0860978799bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-z8v6z_kube-system(8b7a1d64-49a7-49eb-97e6-0860978799bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-z8v6z" podUID="8b7a1d64-49a7-49eb-97e6-0860978799bc" Jun 25 18:45:22.126048 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162-shm.mount: Deactivated successfully. Jun 25 18:45:22.556622 kubelet[3343]: I0625 18:45:22.556575 3343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:22.558667 containerd[2091]: time="2024-06-25T18:45:22.558322022Z" level=info msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" Jun 25 18:45:22.562771 kubelet[3343]: I0625 18:45:22.562344 3343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:22.563097 containerd[2091]: time="2024-06-25T18:45:22.562409182Z" level=info msg="Ensure that sandbox 4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63 in task-service has been cleanup successfully" Jun 25 18:45:22.564670 containerd[2091]: time="2024-06-25T18:45:22.563347601Z" level=info msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" Jun 25 18:45:22.564670 containerd[2091]: time="2024-06-25T18:45:22.563771472Z" level=info msg="Ensure that sandbox 37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2 in task-service has been cleanup successfully" Jun 25 18:45:22.569220 kubelet[3343]: I0625 18:45:22.568614 3343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:22.576332 containerd[2091]: time="2024-06-25T18:45:22.576177212Z" level=info msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" Jun 25 18:45:22.576591 containerd[2091]: time="2024-06-25T18:45:22.576564451Z" level=info msg="Ensure that sandbox 7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f in task-service has been cleanup successfully" Jun 25 18:45:22.600775 kubelet[3343]: I0625 18:45:22.596096 3343 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:22.605600 containerd[2091]: time="2024-06-25T18:45:22.605562802Z" level=info msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" Jun 25 18:45:22.606106 containerd[2091]: time="2024-06-25T18:45:22.606076309Z" level=info msg="Ensure that sandbox 4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162 in task-service has been cleanup successfully" Jun 25 18:45:22.685750 containerd[2091]: time="2024-06-25T18:45:22.685671637Z" level=error msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" failed" error="failed to destroy network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:22.686058 kubelet[3343]: E0625 18:45:22.686034 3343 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:22.686147 kubelet[3343]: E0625 18:45:22.686125 3343 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63"} Jun 25 18:45:22.686197 kubelet[3343]: E0625 18:45:22.686176 3343 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b7a1d64-49a7-49eb-97e6-0860978799bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:22.686289 kubelet[3343]: E0625 18:45:22.686219 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b7a1d64-49a7-49eb-97e6-0860978799bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-z8v6z" podUID="8b7a1d64-49a7-49eb-97e6-0860978799bc" Jun 25 18:45:22.690286 containerd[2091]: time="2024-06-25T18:45:22.690004651Z" level=error msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" failed" error="failed to destroy network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:22.691326 kubelet[3343]: E0625 18:45:22.691226 3343 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:22.691532 kubelet[3343]: E0625 18:45:22.691359 3343 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f"} Jun 25 18:45:22.691667 kubelet[3343]: E0625 18:45:22.691617 3343 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa3cf42f-c171-493a-bb2e-92d413f86d05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:22.692149 kubelet[3343]: E0625 18:45:22.692061 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"aa3cf42f-c171-493a-bb2e-92d413f86d05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" podUID="aa3cf42f-c171-493a-bb2e-92d413f86d05" Jun 25 18:45:22.714589 containerd[2091]: time="2024-06-25T18:45:22.714397228Z" level=error msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" failed" error="failed to destroy network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:22.714729 kubelet[3343]: E0625 18:45:22.714664 3343 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:22.714729 kubelet[3343]: E0625 18:45:22.714712 3343 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2"} Jun 25 18:45:22.714990 kubelet[3343]: E0625 18:45:22.714842 3343 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:22.714990 kubelet[3343]: E0625 18:45:22.714913 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-qllp6" podUID="8434a9e8-834b-4b31-ad53-9bb8bd09d9ce" Jun 25 18:45:22.717460 containerd[2091]: time="2024-06-25T18:45:22.716991970Z" level=error msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" failed" error="failed to destroy network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 18:45:22.717576 kubelet[3343]: E0625 18:45:22.717273 3343 remote_runtime.go:222] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:22.717576 kubelet[3343]: E0625 18:45:22.717314 3343 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162"} Jun 25 18:45:22.717576 kubelet[3343]: E0625 18:45:22.717387 3343 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11739506-f4c5-4731-8420-c05072fd4c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 18:45:22.717576 kubelet[3343]: E0625 18:45:22.717429 3343 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11739506-f4c5-4731-8420-c05072fd4c97\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-86wj5" podUID="11739506-f4c5-4731-8420-c05072fd4c97" Jun 25 18:45:28.877540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984963262.mount: Deactivated successfully. 
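The repeated RunPodSandbox and StopPodSandbox failures above all report the same underlying condition: the CNI plugin stats /var/lib/calico/nodename and the file does not exist yet, because the calico/node container has not started and mounted /var/lib/calico/. As a minimal illustration (not Calico's own code; only the path is taken from the error message), the readiness condition the error asks you to check can be probed like this:

```python
#!/usr/bin/env python3
"""Minimal sketch of the check the error message asks for: does the file
that calico/node writes on startup exist yet? Only the path is taken from
the log; the script itself is illustrative, not part of Calico."""
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

def calico_node_ready(path: str = NODENAME_FILE) -> bool:
    # Mirrors the stat() that fails in the sandbox errors above.
    return os.path.isfile(path)

if __name__ == "__main__":
    ready = calico_node_ready()
    print(f"{NODENAME_FILE} present: {ready}")
    sys.exit(0 if ready else 1)
```

Once the calico-node container starts a few entries below, that condition clears, which is what lets the kubelet's later Attempt:1 retries of these sandboxes succeed.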
Jun 25 18:45:28.938398 containerd[2091]: time="2024-06-25T18:45:28.938301700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:28.940630 containerd[2091]: time="2024-06-25T18:45:28.940569141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jun 25 18:45:28.941941 containerd[2091]: time="2024-06-25T18:45:28.941732558Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:28.957671 containerd[2091]: time="2024-06-25T18:45:28.957618385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:28.959950 containerd[2091]: time="2024-06-25T18:45:28.959685193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.389945683s" Jun 25 18:45:28.959950 containerd[2091]: time="2024-06-25T18:45:28.959776482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jun 25 18:45:29.113030 containerd[2091]: time="2024-06-25T18:45:29.096107198Z" level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 18:45:29.144789 containerd[2091]: time="2024-06-25T18:45:29.144297419Z" level=info msg="CreateContainer within sandbox \"ccb42afc44b11ff9d3ea74d328b718b092ce92b8becd52fd4c90a2153c197369\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d5a424af0f520a351019a62fe972da84af9e4a01dab7fd0dc7730df49eafc050\"" Jun 25 18:45:29.164613 containerd[2091]: time="2024-06-25T18:45:29.164534225Z" level=info msg="StartContainer for \"d5a424af0f520a351019a62fe972da84af9e4a01dab7fd0dc7730df49eafc050\"" Jun 25 18:45:29.266550 containerd[2091]: time="2024-06-25T18:45:29.266457930Z" level=info msg="StartContainer for \"d5a424af0f520a351019a62fe972da84af9e4a01dab7fd0dc7730df49eafc050\" returns successfully" Jun 25 18:45:29.421415 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 18:45:29.421715 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 18:45:29.764054 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:29.763831 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:29.763918 systemd-resolved[1971]: Flushed all caches. Jun 25 18:45:30.631174 kubelet[3343]: I0625 18:45:30.630995 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:31.803342 (udev-worker)[4618]: Network interface NamePolicy= disabled on kernel command line. 
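containerd reports the ghcr.io/flatcar/calico/node:v3.28.0 pull as taking 7.389945683s, which lines up with the gap between the PullImage request logged at 18:45:21.569258424Z and the "Pulled image" entry at 18:45:28.959685193Z. A quick check of that arithmetic, with both timestamps copied from the log (the remaining sub-millisecond difference is expected, since the duration is measured inside containerd before the log line is emitted):

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # The journal carries nanoseconds; trim to microseconds so strptime can parse it.
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

pull_requested = parse("2024-06-25T18:45:21.569258424Z")  # PullImage entry
pull_finished = parse("2024-06-25T18:45:28.959685193Z")   # "Pulled image ... in 7.389945683s"

print(pull_finished - pull_requested)  # 0:00:07.390427, within half a millisecond of the reported value
```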
Jun 25 18:45:31.812348 systemd-networkd[1657]: vxlan.calico: Link UP Jun 25 18:45:31.812356 systemd-networkd[1657]: vxlan.calico: Gained carrier Jun 25 18:45:31.865725 (udev-worker)[4629]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:45:31.868932 (udev-worker)[4449]: Network interface NamePolicy= disabled on kernel command line. Jun 25 18:45:33.230500 containerd[2091]: time="2024-06-25T18:45:33.230455837Z" level=info msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" Jun 25 18:45:33.423714 kubelet[3343]: I0625 18:45:33.423426 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-67nff" podStartSLOduration=5.417997285 podCreationTimestamp="2024-06-25 18:45:07 +0000 UTC" firstStartedPulling="2024-06-25 18:45:08.035641847 +0000 UTC m=+22.023008765" lastFinishedPulling="2024-06-25 18:45:28.962527186 +0000 UTC m=+42.949894120" observedRunningTime="2024-06-25 18:45:29.671389584 +0000 UTC m=+43.658756522" watchObservedRunningTime="2024-06-25 18:45:33.34488264 +0000 UTC m=+47.332249590" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.351 [INFO][4682] k8s.go 608: Cleaning up netns ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.352 [INFO][4682] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" iface="eth0" netns="/var/run/netns/cni-4bcd1938-ef68-daf7-0ca0-37e87ea26d2b" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.353 [INFO][4682] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" iface="eth0" netns="/var/run/netns/cni-4bcd1938-ef68-daf7-0ca0-37e87ea26d2b" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.353 [INFO][4682] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" iface="eth0" netns="/var/run/netns/cni-4bcd1938-ef68-daf7-0ca0-37e87ea26d2b" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.353 [INFO][4682] k8s.go 615: Releasing IP address(es) ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.354 [INFO][4682] utils.go 188: Calico CNI releasing IP address ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.624 [INFO][4689] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.627 [INFO][4689] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.628 [INFO][4689] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.650 [WARNING][4689] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.650 [INFO][4689] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.655 [INFO][4689] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:33.663850 containerd[2091]: 2024-06-25 18:45:33.659 [INFO][4682] k8s.go 621: Teardown processing complete. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:33.666712 containerd[2091]: time="2024-06-25T18:45:33.665071684Z" level=info msg="TearDown network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" successfully" Jun 25 18:45:33.666712 containerd[2091]: time="2024-06-25T18:45:33.665114618Z" level=info msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" returns successfully" Jun 25 18:45:33.675717 systemd[1]: run-netns-cni\x2d4bcd1938\x2def68\x2ddaf7\x2d0ca0\x2d37e87ea26d2b.mount: Deactivated successfully. Jun 25 18:45:33.700558 containerd[2091]: time="2024-06-25T18:45:33.700490585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z8v6z,Uid:8b7a1d64-49a7-49eb-97e6-0860978799bc,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:33.854624 systemd-networkd[1657]: vxlan.calico: Gained IPv6LL Jun 25 18:45:33.930807 systemd-networkd[1657]: calied2834182b2: Link UP Jun 25 18:45:33.931720 systemd-networkd[1657]: calied2834182b2: Gained carrier Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.822 [INFO][4695] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0 coredns-5dd5756b68- kube-system 8b7a1d64-49a7-49eb-97e6-0860978799bc 718 0 2024-06-25 18:44:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-33 coredns-5dd5756b68-z8v6z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied2834182b2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.822 [INFO][4695] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.859 [INFO][4706] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" HandleID="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" 
Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.878 [INFO][4706] ipam_plugin.go 264: Auto assigning IP ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" HandleID="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f0140), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-33", "pod":"coredns-5dd5756b68-z8v6z", "timestamp":"2024-06-25 18:45:33.859181491 +0000 UTC"}, Hostname:"ip-172-31-31-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.878 [INFO][4706] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.879 [INFO][4706] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.879 [INFO][4706] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-33' Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.885 [INFO][4706] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.896 [INFO][4706] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.901 [INFO][4706] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.903 [INFO][4706] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.905 [INFO][4706] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.905 [INFO][4706] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.908 [INFO][4706] ipam.go 1685: Creating new handle: k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5 Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.915 [INFO][4706] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.922 [INFO][4706] ipam.go 1216: Successfully claimed IPs: [192.168.32.129/26] block=192.168.32.128/26 handle="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.922 [INFO][4706] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.129/26] handle="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" host="ip-172-31-31-33" Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.922 [INFO][4706] 
ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:33.958202 containerd[2091]: 2024-06-25 18:45:33.922 [INFO][4706] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.129/26] IPv6=[] ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" HandleID="k8s-pod-network.95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.926 [INFO][4695] k8s.go 386: Populated endpoint ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8b7a1d64-49a7-49eb-97e6-0860978799bc", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"", Pod:"coredns-5dd5756b68-z8v6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2834182b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.926 [INFO][4695] k8s.go 387: Calico CNI using IPs: [192.168.32.129/32] ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.926 [INFO][4695] dataplane_linux.go 68: Setting the host side veth name to calied2834182b2 ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.929 [INFO][4695] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" 
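The WorkloadEndpoint dump above prints its port numbers as Go hex literals (Port:0x35 and Port:0x23c1). Decoding them recovers the plain values named at the start of the same entry for the coredns endpoint, dns/UDP 53, dns-tcp/TCP 53 and metrics/TCP 9153. A trivial check:

```python
# Hex port values copied from the WorkloadEndpoint spec above.
for name, port in (("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23C1)):
    print(f"{name}: {port}")  # dns: 53, dns-tcp: 53, metrics: 9153
```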
Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.931 [INFO][4695] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8b7a1d64-49a7-49eb-97e6-0860978799bc", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5", Pod:"coredns-5dd5756b68-z8v6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2834182b2", MAC:"b6:44:bd:b2:37:b0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:33.964236 containerd[2091]: 2024-06-25 18:45:33.952 [INFO][4695] k8s.go 500: Wrote updated endpoint to datastore ContainerID="95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5" Namespace="kube-system" Pod="coredns-5dd5756b68-z8v6z" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:34.037987 containerd[2091]: time="2024-06-25T18:45:34.037686496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:34.037987 containerd[2091]: time="2024-06-25T18:45:34.037740470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:34.037987 containerd[2091]: time="2024-06-25T18:45:34.037754560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:34.037987 containerd[2091]: time="2024-06-25T18:45:34.037767249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:34.142496 containerd[2091]: time="2024-06-25T18:45:34.142161536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-z8v6z,Uid:8b7a1d64-49a7-49eb-97e6-0860978799bc,Namespace:kube-system,Attempt:1,} returns sandbox id \"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5\"" Jun 25 18:45:34.164428 containerd[2091]: time="2024-06-25T18:45:34.164245683Z" level=info msg="CreateContainer within sandbox \"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:34.223025 containerd[2091]: time="2024-06-25T18:45:34.222508791Z" level=info msg="CreateContainer within sandbox \"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"204992d6062237dad285f09d44b974a5129cbbf444db96f14e1399268f5710fc\"" Jun 25 18:45:34.225150 containerd[2091]: time="2024-06-25T18:45:34.223602003Z" level=info msg="StartContainer for \"204992d6062237dad285f09d44b974a5129cbbf444db96f14e1399268f5710fc\"" Jun 25 18:45:34.297135 containerd[2091]: time="2024-06-25T18:45:34.297081691Z" level=info msg="StartContainer for \"204992d6062237dad285f09d44b974a5129cbbf444db96f14e1399268f5710fc\" returns successfully" Jun 25 18:45:34.746474 kubelet[3343]: I0625 18:45:34.746423 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-z8v6z" podStartSLOduration=35.746139467 podCreationTimestamp="2024-06-25 18:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:34.743307219 +0000 UTC m=+48.730674163" watchObservedRunningTime="2024-06-25 18:45:34.746139467 +0000 UTC m=+48.733506405" Jun 25 18:45:35.074104 systemd-networkd[1657]: calied2834182b2: Gained IPv6LL Jun 25 18:45:35.265817 systemd[1]: Started sshd@7-172.31.31.33:22-139.178.68.195:42326.service - OpenSSH per-connection server daemon (139.178.68.195:42326). Jun 25 18:45:35.488687 sshd[4810]: Accepted publickey for core from 139.178.68.195 port 42326 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:35.492963 sshd[4810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:35.523833 systemd-logind[2058]: New session 8 of user core. Jun 25 18:45:35.534790 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 25 18:45:35.718566 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:35.714218 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:35.714407 systemd-resolved[1971]: Flushed all caches. Jun 25 18:45:35.900712 sshd[4810]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:35.904949 systemd[1]: sshd@7-172.31.31.33:22-139.178.68.195:42326.service: Deactivated successfully. Jun 25 18:45:35.914015 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 18:45:35.914074 systemd-logind[2058]: Session 8 logged out. Waiting for processes to exit. Jun 25 18:45:35.917183 systemd-logind[2058]: Removed session 8. 
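The pod_startup_latency_tracker entry for coredns-5dd5756b68-z8v6z above is internally consistent: no image pull is recorded (both pull timestamps are the zero time), and podStartSLOduration=35.746139467 equals watchObservedRunningTime minus podCreationTimestamp. The check below uses timestamps copied from that entry; the exact formula kubelet applies is inferred from these numbers rather than quoted from its source. For comparison, the earlier calico-node entry (podStartSLOduration=5.417997285) only works out once the roughly 20.9 s between firstStartedPulling and lastFinishedPulling is excluded, which suggests the tracker subtracts image-pull time.

```python
from datetime import datetime

fmt = "%Y-%m-%d %H:%M:%S.%f %z"
created = datetime.strptime("2024-06-25 18:44:59.000000 +0000", fmt)   # podCreationTimestamp
observed = datetime.strptime("2024-06-25 18:45:34.746139 +0000", fmt)  # watchObservedRunningTime, trimmed to microseconds

print((observed - created).total_seconds())  # 35.746139, matching podStartSLOduration
```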
Jun 25 18:45:36.234200 containerd[2091]: time="2024-06-25T18:45:36.233753751Z" level=info msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.331 [INFO][4841] k8s.go 608: Cleaning up netns ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.331 [INFO][4841] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" iface="eth0" netns="/var/run/netns/cni-02baf554-d149-97a7-f0f3-0cf3b7eb0bf0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.332 [INFO][4841] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" iface="eth0" netns="/var/run/netns/cni-02baf554-d149-97a7-f0f3-0cf3b7eb0bf0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.332 [INFO][4841] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" iface="eth0" netns="/var/run/netns/cni-02baf554-d149-97a7-f0f3-0cf3b7eb0bf0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.332 [INFO][4841] k8s.go 615: Releasing IP address(es) ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.332 [INFO][4841] utils.go 188: Calico CNI releasing IP address ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.443 [INFO][4848] ipam_plugin.go 411: Releasing address using handleID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.443 [INFO][4848] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.444 [INFO][4848] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.453 [WARNING][4848] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.453 [INFO][4848] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.455 [INFO][4848] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:36.465881 containerd[2091]: 2024-06-25 18:45:36.458 [INFO][4841] k8s.go 621: Teardown processing complete. 
ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:36.469809 containerd[2091]: time="2024-06-25T18:45:36.466473934Z" level=info msg="TearDown network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" successfully" Jun 25 18:45:36.469809 containerd[2091]: time="2024-06-25T18:45:36.466517052Z" level=info msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" returns successfully" Jun 25 18:45:36.469809 containerd[2091]: time="2024-06-25T18:45:36.467703371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86wj5,Uid:11739506-f4c5-4731-8420-c05072fd4c97,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:36.472695 systemd[1]: run-netns-cni\x2d02baf554\x2dd149\x2d97a7\x2df0f3\x2d0cf3b7eb0bf0.mount: Deactivated successfully. Jun 25 18:45:36.795570 systemd-networkd[1657]: cali1328c744502: Link UP Jun 25 18:45:36.795846 systemd-networkd[1657]: cali1328c744502: Gained carrier Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.623 [INFO][4854] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0 csi-node-driver- calico-system 11739506-f4c5-4731-8420-c05072fd4c97 773 0 2024-06-25 18:45:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-31-33 csi-node-driver-86wj5 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1328c744502 [] []}} ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.625 [INFO][4854] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.682 [INFO][4866] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" HandleID="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.710 [INFO][4866] ipam_plugin.go 264: Auto assigning IP ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" HandleID="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002927d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-33", "pod":"csi-node-driver-86wj5", "timestamp":"2024-06-25 18:45:36.682748519 +0000 UTC"}, Hostname:"ip-172-31-31-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 
18:45:36.711 [INFO][4866] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.711 [INFO][4866] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.711 [INFO][4866] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-33' Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.723 [INFO][4866] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.739 [INFO][4866] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.752 [INFO][4866] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.755 [INFO][4866] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.760 [INFO][4866] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.760 [INFO][4866] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.764 [INFO][4866] ipam.go 1685: Creating new handle: k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.774 [INFO][4866] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.784 [INFO][4866] ipam.go 1216: Successfully claimed IPs: [192.168.32.130/26] block=192.168.32.128/26 handle="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.784 [INFO][4866] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.130/26] handle="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" host="ip-172-31-31-33" Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.784 [INFO][4866] ipam_plugin.go 373: Released host-wide IPAM lock. 
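The containerd-captured Calico output above walks through one IPAM assignment: confirm this host's affinity for block 192.168.32.128/26, take the host-wide IPAM lock, claim the first free address (192.168.32.130 here), write the block back, and release the lock. As a rough sketch of the "first free address in the block" idea only, not Calico's actual allocator (which records allocations by ordinal in a block document in its datastore), something like the following; `nextFree` and the `allocated` set are illustrative stand-ins:

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first address in the block that is not already taken.
// Purely illustrative: Calico's IPAM works against block documents in its
// datastore rather than an in-memory set like this.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.32.128/26")
	// Hypothetical prior state: .128 and .129 already in use on this host.
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.32.128"): true,
		netip.MustParseAddr("192.168.32.129"): true,
	}
	if a, ok := nextFree(block, allocated); ok {
		fmt.Println(a) // 192.168.32.130, the address claimed in the log above
	}
}
```

The same block later hands out 192.168.32.131 and 192.168.32.132 to the coredns and calico-kube-controllers pods in this log, consistent with that first-free ordering.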
Jun 25 18:45:36.825361 containerd[2091]: 2024-06-25 18:45:36.784 [INFO][4866] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.130/26] IPv6=[] ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" HandleID="k8s-pod-network.c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.789 [INFO][4854] k8s.go 386: Populated endpoint ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11739506-f4c5-4731-8420-c05072fd4c97", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"", Pod:"csi-node-driver-86wj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1328c744502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.790 [INFO][4854] k8s.go 387: Calico CNI using IPs: [192.168.32.130/32] ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.790 [INFO][4854] dataplane_linux.go 68: Setting the host side veth name to cali1328c744502 ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.792 [INFO][4854] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.792 [INFO][4854] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11739506-f4c5-4731-8420-c05072fd4c97", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad", Pod:"csi-node-driver-86wj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1328c744502", MAC:"e2:25:83:7c:9c:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:36.832538 containerd[2091]: 2024-06-25 18:45:36.817 [INFO][4854] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad" Namespace="calico-system" Pod="csi-node-driver-86wj5" WorkloadEndpoint="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:36.874426 containerd[2091]: time="2024-06-25T18:45:36.874284775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:36.874426 containerd[2091]: time="2024-06-25T18:45:36.874350246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:36.874893 containerd[2091]: time="2024-06-25T18:45:36.874514619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:36.875319 containerd[2091]: time="2024-06-25T18:45:36.875035578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:36.983719 containerd[2091]: time="2024-06-25T18:45:36.983669988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-86wj5,Uid:11739506-f4c5-4731-8420-c05072fd4c97,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad\"" Jun 25 18:45:36.987578 containerd[2091]: time="2024-06-25T18:45:36.987519415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 18:45:37.230834 containerd[2091]: time="2024-06-25T18:45:37.230099839Z" level=info msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.297 [INFO][4946] k8s.go 608: Cleaning up netns ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.298 [INFO][4946] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" iface="eth0" netns="/var/run/netns/cni-029a7317-148e-c0f8-6794-718bcba5ccfc" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.298 [INFO][4946] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" iface="eth0" netns="/var/run/netns/cni-029a7317-148e-c0f8-6794-718bcba5ccfc" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.298 [INFO][4946] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" iface="eth0" netns="/var/run/netns/cni-029a7317-148e-c0f8-6794-718bcba5ccfc" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.298 [INFO][4946] k8s.go 615: Releasing IP address(es) ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.298 [INFO][4946] utils.go 188: Calico CNI releasing IP address ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.329 [INFO][4952] ipam_plugin.go 411: Releasing address using handleID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.329 [INFO][4952] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.329 [INFO][4952] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.348 [WARNING][4952] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.348 [INFO][4952] ipam_plugin.go 439: Releasing address using workloadID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.354 [INFO][4952] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:37.361580 containerd[2091]: 2024-06-25 18:45:37.359 [INFO][4946] k8s.go 621: Teardown processing complete. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:37.365470 containerd[2091]: time="2024-06-25T18:45:37.361841088Z" level=info msg="TearDown network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" successfully" Jun 25 18:45:37.365470 containerd[2091]: time="2024-06-25T18:45:37.361874384Z" level=info msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" returns successfully" Jun 25 18:45:37.365470 containerd[2091]: time="2024-06-25T18:45:37.364765328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qllp6,Uid:8434a9e8-834b-4b31-ad53-9bb8bd09d9ce,Namespace:kube-system,Attempt:1,}" Jun 25 18:45:37.486121 systemd[1]: run-netns-cni\x2d029a7317\x2d148e\x2dc0f8\x2d6794\x2d718bcba5ccfc.mount: Deactivated successfully. Jun 25 18:45:37.509044 kubelet[3343]: I0625 18:45:37.509002 3343 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 18:45:37.761999 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:37.760264 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:37.760300 systemd-resolved[1971]: Flushed all caches. 
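The run-netns-cni\x2d029a7317… mount unit name above uses systemd's unit-name escaping, in which a literal '-' inside the netns name is written as \x2d. A minimal sketch of decoding just the \xHH sequences (the `unescapeUnit` helper is ours, and this is not a complete systemd-escape implementation):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit decodes \xHH sequences as used in systemd unit names,
// e.g. `run-netns-cni\x2d02baf554...` -> `run-netns-cni-02baf554...`.
// It only handles the \xHH form, nothing else from systemd-escape.
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeUnit(`run-netns-cni\x2d02baf554\x2dd149\x2d97a7\x2df0f3\x2d0cf3b7eb0bf0.mount`))
	// Output: run-netns-cni-02baf554-d149-97a7-f0f3-0cf3b7eb0bf0.mount
}
```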
Jun 25 18:45:37.841339 systemd-networkd[1657]: calie5d941f1684: Link UP Jun 25 18:45:37.843176 systemd-networkd[1657]: calie5d941f1684: Gained carrier Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.444 [INFO][4959] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0 coredns-5dd5756b68- kube-system 8434a9e8-834b-4b31-ad53-9bb8bd09d9ce 782 0 2024-06-25 18:44:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-31-33 coredns-5dd5756b68-qllp6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5d941f1684 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.445 [INFO][4959] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.523 [INFO][4970] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" HandleID="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.630 [INFO][4970] ipam_plugin.go 264: Auto assigning IP ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" HandleID="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003594e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-31-33", "pod":"coredns-5dd5756b68-qllp6", "timestamp":"2024-06-25 18:45:37.52373758 +0000 UTC"}, Hostname:"ip-172-31-31-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.636 [INFO][4970] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.642 [INFO][4970] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.644 [INFO][4970] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-33' Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.665 [INFO][4970] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.697 [INFO][4970] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.728 [INFO][4970] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.736 [INFO][4970] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.746 [INFO][4970] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.747 [INFO][4970] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.753 [INFO][4970] ipam.go 1685: Creating new handle: k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.765 [INFO][4970] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.787 [INFO][4970] ipam.go 1216: Successfully claimed IPs: [192.168.32.131/26] block=192.168.32.128/26 handle="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.787 [INFO][4970] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.131/26] handle="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" host="ip-172-31-31-33" Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.787 [INFO][4970] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:37.893587 containerd[2091]: 2024-06-25 18:45:37.787 [INFO][4970] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.131/26] IPv6=[] ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" HandleID="k8s-pod-network.6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 18:45:37.829 [INFO][4959] k8s.go 386: Populated endpoint ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"", Pod:"coredns-5dd5756b68-qllp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5d941f1684", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 18:45:37.829 [INFO][4959] k8s.go 387: Calico CNI using IPs: [192.168.32.131/32] ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 18:45:37.829 [INFO][4959] dataplane_linux.go 68: Setting the host side veth name to calie5d941f1684 ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 18:45:37.847 [INFO][4959] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 
18:45:37.858 [INFO][4959] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a", Pod:"coredns-5dd5756b68-qllp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5d941f1684", MAC:"82:db:77:d7:60:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:37.898673 containerd[2091]: 2024-06-25 18:45:37.881 [INFO][4959] k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a" Namespace="kube-system" Pod="coredns-5dd5756b68-qllp6" WorkloadEndpoint="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:37.983605 containerd[2091]: time="2024-06-25T18:45:37.981133382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:37.983605 containerd[2091]: time="2024-06-25T18:45:37.981220256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:37.983605 containerd[2091]: time="2024-06-25T18:45:37.981252563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:37.983605 containerd[2091]: time="2024-06-25T18:45:37.981271593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:38.226715 containerd[2091]: time="2024-06-25T18:45:38.226670655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-qllp6,Uid:8434a9e8-834b-4b31-ad53-9bb8bd09d9ce,Namespace:kube-system,Attempt:1,} returns sandbox id \"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a\"" Jun 25 18:45:38.243388 containerd[2091]: time="2024-06-25T18:45:38.241812403Z" level=info msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" Jun 25 18:45:38.258389 containerd[2091]: time="2024-06-25T18:45:38.255266694Z" level=info msg="CreateContainer within sandbox \"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 18:45:38.299395 containerd[2091]: time="2024-06-25T18:45:38.297161769Z" level=info msg="CreateContainer within sandbox \"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b9a38cfe53112a32969409a34bfdaee527e54bc1d3d7656f2dd42e415781003f\"" Jun 25 18:45:38.299395 containerd[2091]: time="2024-06-25T18:45:38.298505634Z" level=info msg="StartContainer for \"b9a38cfe53112a32969409a34bfdaee527e54bc1d3d7656f2dd42e415781003f\"" Jun 25 18:45:38.514158 containerd[2091]: time="2024-06-25T18:45:38.510870790Z" level=info msg="StartContainer for \"b9a38cfe53112a32969409a34bfdaee527e54bc1d3d7656f2dd42e415781003f\" returns successfully" Jun 25 18:45:38.591493 systemd-networkd[1657]: cali1328c744502: Gained IPv6LL Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.426 [INFO][5084] k8s.go 608: Cleaning up netns ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.426 [INFO][5084] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" iface="eth0" netns="/var/run/netns/cni-0e06bcf1-cee3-cb05-095a-455781dfc1b7" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.429 [INFO][5084] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" iface="eth0" netns="/var/run/netns/cni-0e06bcf1-cee3-cb05-095a-455781dfc1b7" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.430 [INFO][5084] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" iface="eth0" netns="/var/run/netns/cni-0e06bcf1-cee3-cb05-095a-455781dfc1b7" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.431 [INFO][5084] k8s.go 615: Releasing IP address(es) ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.431 [INFO][5084] utils.go 188: Calico CNI releasing IP address ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.554 [INFO][5124] ipam_plugin.go 411: Releasing address using handleID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.555 [INFO][5124] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.556 [INFO][5124] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.576 [WARNING][5124] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.576 [INFO][5124] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.580 [INFO][5124] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:38.617711 containerd[2091]: 2024-06-25 18:45:38.589 [INFO][5084] k8s.go 621: Teardown processing complete. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:38.624210 containerd[2091]: time="2024-06-25T18:45:38.617892931Z" level=info msg="TearDown network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" successfully" Jun 25 18:45:38.624210 containerd[2091]: time="2024-06-25T18:45:38.617924534Z" level=info msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" returns successfully" Jun 25 18:45:38.624210 containerd[2091]: time="2024-06-25T18:45:38.624027649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cfbfdcf-nvcqf,Uid:aa3cf42f-c171-493a-bb2e-92d413f86d05,Namespace:calico-system,Attempt:1,}" Jun 25 18:45:38.636441 systemd[1]: run-netns-cni\x2d0e06bcf1\x2dcee3\x2dcb05\x2d095a\x2d455781dfc1b7.mount: Deactivated successfully. 
Jun 25 18:45:38.864628 kubelet[3343]: I0625 18:45:38.864589 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-qllp6" podStartSLOduration=39.864534403 podCreationTimestamp="2024-06-25 18:44:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 18:45:38.863318415 +0000 UTC m=+52.850685377" watchObservedRunningTime="2024-06-25 18:45:38.864534403 +0000 UTC m=+52.851901343" Jun 25 18:45:39.347251 systemd-networkd[1657]: calicf34e9c29dc: Link UP Jun 25 18:45:39.347645 systemd-networkd[1657]: calicf34e9c29dc: Gained carrier Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:38.971 [INFO][5147] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0 calico-kube-controllers-86cfbfdcf- calico-system aa3cf42f-c171-493a-bb2e-92d413f86d05 795 0 2024-06-25 18:45:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86cfbfdcf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-31-33 calico-kube-controllers-86cfbfdcf-nvcqf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicf34e9c29dc [] []}} ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:38.974 [INFO][5147] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.156 [INFO][5161] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" HandleID="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.197 [INFO][5161] ipam_plugin.go 264: Auto assigning IP ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" HandleID="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-31-33", "pod":"calico-kube-controllers-86cfbfdcf-nvcqf", "timestamp":"2024-06-25 18:45:39.156206324 +0000 UTC"}, Hostname:"ip-172-31-31-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.197 [INFO][5161] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
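The kubelet pod_startup_latency_tracker entry above reports podStartSLOduration=39.864534403 for coredns-5dd5756b68-qllp6. With both image-pull timestamps zeroed out (0001-01-01), that figure lines up with watchObservedRunningTime minus podCreationTimestamp; a quick check of the arithmetic using the timestamps from the entry (a reading aid, not kubelet's implementation):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the Go time values printed in the kubelet entry.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-06-25 18:44:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2024-06-25 18:45:38.864534403 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 39.864534403s, matching podStartSLOduration in the entry above.
	fmt.Println(observed.Sub(created))
}
```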
Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.198 [INFO][5161] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.198 [INFO][5161] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-33' Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.202 [INFO][5161] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.212 [INFO][5161] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.230 [INFO][5161] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.247 [INFO][5161] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.257 [INFO][5161] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.262 [INFO][5161] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.272 [INFO][5161] ipam.go 1685: Creating new handle: k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60 Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.288 [INFO][5161] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.304 [INFO][5161] ipam.go 1216: Successfully claimed IPs: [192.168.32.132/26] block=192.168.32.128/26 handle="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.304 [INFO][5161] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.132/26] handle="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" host="ip-172-31-31-33" Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.304 [INFO][5161] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 18:45:39.380359 containerd[2091]: 2024-06-25 18:45:39.304 [INFO][5161] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.132/26] IPv6=[] ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" HandleID="k8s-pod-network.0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.324 [INFO][5147] k8s.go 386: Populated endpoint ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0", GenerateName:"calico-kube-controllers-86cfbfdcf-", Namespace:"calico-system", SelfLink:"", UID:"aa3cf42f-c171-493a-bb2e-92d413f86d05", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cfbfdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"", Pod:"calico-kube-controllers-86cfbfdcf-nvcqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf34e9c29dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.328 [INFO][5147] k8s.go 387: Calico CNI using IPs: [192.168.32.132/32] ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.328 [INFO][5147] dataplane_linux.go 68: Setting the host side veth name to calicf34e9c29dc ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.340 [INFO][5147] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.341 [INFO][5147] k8s.go 414: Added Mac, interface name, and active container ID to 
endpoint ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0", GenerateName:"calico-kube-controllers-86cfbfdcf-", Namespace:"calico-system", SelfLink:"", UID:"aa3cf42f-c171-493a-bb2e-92d413f86d05", ResourceVersion:"795", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cfbfdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60", Pod:"calico-kube-controllers-86cfbfdcf-nvcqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf34e9c29dc", MAC:"e2:77:e0:31:7a:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:39.384224 containerd[2091]: 2024-06-25 18:45:39.371 [INFO][5147] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60" Namespace="calico-system" Pod="calico-kube-controllers-86cfbfdcf-nvcqf" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:39.641173 containerd[2091]: time="2024-06-25T18:45:39.637892679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:45:39.645503 containerd[2091]: time="2024-06-25T18:45:39.643498071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:39.646307 containerd[2091]: time="2024-06-25T18:45:39.646248925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:45:39.646659 containerd[2091]: time="2024-06-25T18:45:39.646321614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:45:39.728674 containerd[2091]: time="2024-06-25T18:45:39.728561469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:39.737930 containerd[2091]: time="2024-06-25T18:45:39.737064632Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jun 25 18:45:39.739288 containerd[2091]: time="2024-06-25T18:45:39.738328058Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:39.763239 containerd[2091]: time="2024-06-25T18:45:39.763179468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:39.771380 containerd[2091]: time="2024-06-25T18:45:39.771253896Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.783663676s" Jun 25 18:45:39.772581 containerd[2091]: time="2024-06-25T18:45:39.771310023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jun 25 18:45:39.799894 containerd[2091]: time="2024-06-25T18:45:39.799832918Z" level=info msg="CreateContainer within sandbox \"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 18:45:39.807440 systemd-networkd[1657]: calie5d941f1684: Gained IPv6LL Jun 25 18:45:39.990878 containerd[2091]: time="2024-06-25T18:45:39.990833828Z" level=info msg="CreateContainer within sandbox \"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"62305cec3b4a1a6772970a2af82864dad55d45555ad8994d0532732e9eb27389\"" Jun 25 18:45:39.993558 containerd[2091]: time="2024-06-25T18:45:39.992317017Z" level=info msg="StartContainer for \"62305cec3b4a1a6772970a2af82864dad55d45555ad8994d0532732e9eb27389\"" Jun 25 18:45:40.136246 containerd[2091]: time="2024-06-25T18:45:40.136184241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86cfbfdcf-nvcqf,Uid:aa3cf42f-c171-493a-bb2e-92d413f86d05,Namespace:calico-system,Attempt:1,} returns sandbox id \"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60\"" Jun 25 18:45:40.142402 containerd[2091]: time="2024-06-25T18:45:40.140164612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 18:45:40.315292 containerd[2091]: time="2024-06-25T18:45:40.315237687Z" level=info msg="StartContainer for \"62305cec3b4a1a6772970a2af82864dad55d45555ad8994d0532732e9eb27389\" returns successfully" Jun 25 18:45:40.709508 systemd-networkd[1657]: calicf34e9c29dc: Gained IPv6LL Jun 25 18:45:40.933697 systemd[1]: Started sshd@8-172.31.31.33:22-139.178.68.195:48026.service - OpenSSH per-connection server daemon (139.178.68.195:48026). 
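The calico/csi pull above is reported as taking 2.783663676s, which is close to the gap between the PullImage request logged at 18:45:36.987519415Z and the Pulled message at 18:45:39.771253896Z (the journal timestamps only approximate containerd's internal measurement). A small sketch of reading such gaps out of the RFC 3339 timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err := time.Parse(time.RFC3339Nano, "2024-06-25T18:45:36.987519415Z")
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2024-06-25T18:45:39.771253896Z")
	if err != nil {
		panic(err)
	}
	// ~2.78s between the PullImage and Pulled entries, close to the
	// 2.783663676s that containerd reports for the pull itself.
	fmt.Println(end.Sub(start))
}
```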
Jun 25 18:45:41.175906 sshd[5260]: Accepted publickey for core from 139.178.68.195 port 48026 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:41.179423 sshd[5260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:41.189350 systemd-logind[2058]: New session 9 of user core. Jun 25 18:45:41.196739 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 18:45:41.730471 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:41.728450 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:41.728529 systemd-resolved[1971]: Flushed all caches. Jun 25 18:45:41.828766 sshd[5260]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:41.842173 systemd[1]: sshd@8-172.31.31.33:22-139.178.68.195:48026.service: Deactivated successfully. Jun 25 18:45:41.863320 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 18:45:41.880335 systemd-logind[2058]: Session 9 logged out. Waiting for processes to exit. Jun 25 18:45:41.899254 systemd-logind[2058]: Removed session 9. Jun 25 18:45:43.248797 ntpd[2047]: Listen normally on 6 vxlan.calico 192.168.32.128:123 Jun 25 18:45:43.248887 ntpd[2047]: Listen normally on 7 vxlan.calico [fe80::64a7:f6ff:fed0:ef32%4]:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 6 vxlan.calico 192.168.32.128:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 7 vxlan.calico [fe80::64a7:f6ff:fed0:ef32%4]:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 8 calied2834182b2 [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 9 cali1328c744502 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 10 calie5d941f1684 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:45:43.252327 ntpd[2047]: 25 Jun 18:45:43 ntpd[2047]: Listen normally on 11 calicf34e9c29dc [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:45:43.249276 ntpd[2047]: Listen normally on 8 calied2834182b2 [fe80::ecee:eeff:feee:eeee%7]:123 Jun 25 18:45:43.249403 ntpd[2047]: Listen normally on 9 cali1328c744502 [fe80::ecee:eeff:feee:eeee%8]:123 Jun 25 18:45:43.249450 ntpd[2047]: Listen normally on 10 calie5d941f1684 [fe80::ecee:eeff:feee:eeee%9]:123 Jun 25 18:45:43.249491 ntpd[2047]: Listen normally on 11 calicf34e9c29dc [fe80::ecee:eeff:feee:eeee%10]:123 Jun 25 18:45:43.775437 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:43.775382 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:43.775393 systemd-resolved[1971]: Flushed all caches. 
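Every cali* interface in the ntpd entries above listens on the same link-local address, fe80::ecee:eeff:feee:eeee. That is the modified EUI-64 address derived from the MAC ee:ee:ee:ee:ee:ee, which Calico conventionally sets on the host side of each workload veth (the MAC itself is not printed in this log, so treat that as an assumption). A minimal sketch of the RFC 4291 derivation; `linkLocalFromMAC` is our own helper name:

```go
package main

import (
	"fmt"
	"net"
)

// linkLocalFromMAC builds the modified EUI-64 link-local address for a
// 48-bit MAC: flip the universal/local bit of the first byte and insert
// ff:fe in the middle, under the fe80::/64 prefix.
func linkLocalFromMAC(mac net.HardwareAddr) net.IP {
	ip := make(net.IP, net.IPv6len)
	ip[0], ip[1] = 0xfe, 0x80
	ip[8] = mac[0] ^ 0x02
	ip[9], ip[10], ip[11] = mac[1], mac[2], 0xff
	ip[12], ip[13], ip[14], ip[15] = 0xfe, mac[3], mac[4], mac[5]
	return ip
}

func main() {
	mac, err := net.ParseMAC("ee:ee:ee:ee:ee:ee")
	if err != nil {
		panic(err)
	}
	// Prints fe80::ecee:eeff:feee:eeee, matching the ntpd listen entries above.
	fmt.Println(linkLocalFromMAC(mac))
}
```

Flipping the universal/local bit turns the leading 0xee into 0xec, which is why the address begins ecee rather than eeee.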
Jun 25 18:45:43.850053 containerd[2091]: time="2024-06-25T18:45:43.850005636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:43.854138 containerd[2091]: time="2024-06-25T18:45:43.853099985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jun 25 18:45:43.856538 containerd[2091]: time="2024-06-25T18:45:43.855647775Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:43.868565 containerd[2091]: time="2024-06-25T18:45:43.867885523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:43.876419 containerd[2091]: time="2024-06-25T18:45:43.876359386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.736045262s" Jun 25 18:45:43.877128 containerd[2091]: time="2024-06-25T18:45:43.876424585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jun 25 18:45:43.880104 containerd[2091]: time="2024-06-25T18:45:43.877203062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 18:45:43.946603 containerd[2091]: time="2024-06-25T18:45:43.946552961Z" level=info msg="CreateContainer within sandbox \"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 18:45:43.978868 containerd[2091]: time="2024-06-25T18:45:43.978716477Z" level=info msg="CreateContainer within sandbox \"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5964ccbef60860d43b8a38e9231f0c9ce359f91b85a4f3935922e13b8ad65000\"" Jun 25 18:45:43.983336 containerd[2091]: time="2024-06-25T18:45:43.980667662Z" level=info msg="StartContainer for \"5964ccbef60860d43b8a38e9231f0c9ce359f91b85a4f3935922e13b8ad65000\"" Jun 25 18:45:44.186149 containerd[2091]: time="2024-06-25T18:45:44.186104525Z" level=info msg="StartContainer for \"5964ccbef60860d43b8a38e9231f0c9ce359f91b85a4f3935922e13b8ad65000\" returns successfully" Jun 25 18:45:44.946191 kubelet[3343]: I0625 18:45:44.943873 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86cfbfdcf-nvcqf" podStartSLOduration=34.206122324 podCreationTimestamp="2024-06-25 18:45:07 +0000 UTC" firstStartedPulling="2024-06-25 18:45:40.139016857 +0000 UTC m=+54.126383779" lastFinishedPulling="2024-06-25 18:45:43.876711862 +0000 UTC m=+57.864078781" observedRunningTime="2024-06-25 18:45:44.939712028 +0000 UTC m=+58.927078966" watchObservedRunningTime="2024-06-25 18:45:44.943817326 +0000 UTC m=+58.931184264" Jun 25 18:45:44.993954 systemd[1]: 
run-containerd-runc-k8s.io-5964ccbef60860d43b8a38e9231f0c9ce359f91b85a4f3935922e13b8ad65000-runc.e7WWJY.mount: Deactivated successfully. Jun 25 18:45:45.600754 containerd[2091]: time="2024-06-25T18:45:45.600703573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:45.602984 containerd[2091]: time="2024-06-25T18:45:45.602222017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jun 25 18:45:45.606245 containerd[2091]: time="2024-06-25T18:45:45.605427944Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:45.621040 containerd[2091]: time="2024-06-25T18:45:45.620901876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:45:45.621719 containerd[2091]: time="2024-06-25T18:45:45.621673961Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.744424943s" Jun 25 18:45:45.622914 containerd[2091]: time="2024-06-25T18:45:45.621725741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jun 25 18:45:45.624889 containerd[2091]: time="2024-06-25T18:45:45.624855930Z" level=info msg="CreateContainer within sandbox \"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 18:45:45.650031 containerd[2091]: time="2024-06-25T18:45:45.649979495Z" level=info msg="CreateContainer within sandbox \"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"acd778661ca3f67f6a9f01db4ed042ff7b3620a045c6b33e2c8c1a9b04dbfac0\"" Jun 25 18:45:45.655434 containerd[2091]: time="2024-06-25T18:45:45.652669338Z" level=info msg="StartContainer for \"acd778661ca3f67f6a9f01db4ed042ff7b3620a045c6b33e2c8c1a9b04dbfac0\"" Jun 25 18:45:45.792581 containerd[2091]: time="2024-06-25T18:45:45.792531741Z" level=info msg="StartContainer for \"acd778661ca3f67f6a9f01db4ed042ff7b3620a045c6b33e2c8c1a9b04dbfac0\" returns successfully" Jun 25 18:45:45.955326 kubelet[3343]: I0625 18:45:45.950472 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-86wj5" podStartSLOduration=30.315488514 podCreationTimestamp="2024-06-25 18:45:07 +0000 UTC" firstStartedPulling="2024-06-25 18:45:36.987141897 +0000 UTC m=+50.974508814" lastFinishedPulling="2024-06-25 18:45:45.622070695 +0000 UTC m=+59.609437625" observedRunningTime="2024-06-25 18:45:45.949320625 +0000 UTC m=+59.936687562" watchObservedRunningTime="2024-06-25 18:45:45.950417325 +0000 UTC m=+59.937784265" Jun 25 18:45:46.278710 containerd[2091]: 
time="2024-06-25T18:45:46.278314147Z" level=info msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.376 [WARNING][5404] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11739506-f4c5-4731-8420-c05072fd4c97", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad", Pod:"csi-node-driver-86wj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1328c744502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.377 [INFO][5404] k8s.go 608: Cleaning up netns ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.377 [INFO][5404] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" iface="eth0" netns="" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.377 [INFO][5404] k8s.go 615: Releasing IP address(es) ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.377 [INFO][5404] utils.go 188: Calico CNI releasing IP address ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.456 [INFO][5410] ipam_plugin.go 411: Releasing address using handleID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.456 [INFO][5410] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.456 [INFO][5410] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.470 [WARNING][5410] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.471 [INFO][5410] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.474 [INFO][5410] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:46.480728 containerd[2091]: 2024-06-25 18:45:46.477 [INFO][5404] k8s.go 621: Teardown processing complete. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.480728 containerd[2091]: time="2024-06-25T18:45:46.480694064Z" level=info msg="TearDown network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" successfully" Jun 25 18:45:46.481881 containerd[2091]: time="2024-06-25T18:45:46.480748493Z" level=info msg="StopPodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" returns successfully" Jun 25 18:45:46.492098 containerd[2091]: time="2024-06-25T18:45:46.492043661Z" level=info msg="RemovePodSandbox for \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" Jun 25 18:45:46.492098 containerd[2091]: time="2024-06-25T18:45:46.492088705Z" level=info msg="Forcibly stopping sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\"" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.616 [WARNING][5428] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11739506-f4c5-4731-8420-c05072fd4c97", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"c8ce3bda601a5e8d2ae1e2db0eaad1fa6cc0fc3af6449070ac532be3db34a7ad", Pod:"csi-node-driver-86wj5", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1328c744502", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.619 [INFO][5428] k8s.go 608: Cleaning up netns ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.619 [INFO][5428] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" iface="eth0" netns="" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.619 [INFO][5428] k8s.go 615: Releasing IP address(es) ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.619 [INFO][5428] utils.go 188: Calico CNI releasing IP address ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.731 [INFO][5434] ipam_plugin.go 411: Releasing address using handleID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.731 [INFO][5434] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.731 [INFO][5434] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.741 [WARNING][5434] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.741 [INFO][5434] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" HandleID="k8s-pod-network.4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Workload="ip--172--31--31--33-k8s-csi--node--driver--86wj5-eth0" Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.746 [INFO][5434] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:46.757424 containerd[2091]: 2024-06-25 18:45:46.752 [INFO][5428] k8s.go 621: Teardown processing complete. ContainerID="4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162" Jun 25 18:45:46.758969 containerd[2091]: time="2024-06-25T18:45:46.757452775Z" level=info msg="TearDown network for sandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" successfully" Jun 25 18:45:46.771618 containerd[2091]: time="2024-06-25T18:45:46.771568114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:46.772478 containerd[2091]: time="2024-06-25T18:45:46.772437388Z" level=info msg="RemovePodSandbox \"4314bd7c36ceedeb4854927655cae4244eb8f2db9643f9b57200fc1489654162\" returns successfully" Jun 25 18:45:46.774701 containerd[2091]: time="2024-06-25T18:45:46.774167328Z" level=info msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" Jun 25 18:45:46.798005 kubelet[3343]: I0625 18:45:46.797866 3343 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 18:45:46.798005 kubelet[3343]: I0625 18:45:46.797935 3343 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 18:45:46.870621 systemd[1]: Started sshd@9-172.31.31.33:22-139.178.68.195:48038.service - OpenSSH per-connection server daemon (139.178.68.195:48038). Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.008 [WARNING][5452] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0", GenerateName:"calico-kube-controllers-86cfbfdcf-", Namespace:"calico-system", SelfLink:"", UID:"aa3cf42f-c171-493a-bb2e-92d413f86d05", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cfbfdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60", Pod:"calico-kube-controllers-86cfbfdcf-nvcqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf34e9c29dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.008 [INFO][5452] k8s.go 608: Cleaning up netns ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.008 [INFO][5452] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" iface="eth0" netns="" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.008 [INFO][5452] k8s.go 615: Releasing IP address(es) ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.008 [INFO][5452] utils.go 188: Calico CNI releasing IP address ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.084 [INFO][5461] ipam_plugin.go 411: Releasing address using handleID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.086 [INFO][5461] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.086 [INFO][5461] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.094 [WARNING][5461] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.094 [INFO][5461] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.097 [INFO][5461] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.103029 containerd[2091]: 2024-06-25 18:45:47.100 [INFO][5452] k8s.go 621: Teardown processing complete. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.104790 containerd[2091]: time="2024-06-25T18:45:47.103077871Z" level=info msg="TearDown network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" successfully" Jun 25 18:45:47.104790 containerd[2091]: time="2024-06-25T18:45:47.103108268Z" level=info msg="StopPodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" returns successfully" Jun 25 18:45:47.104790 containerd[2091]: time="2024-06-25T18:45:47.104110098Z" level=info msg="RemovePodSandbox for \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" Jun 25 18:45:47.104790 containerd[2091]: time="2024-06-25T18:45:47.104146486Z" level=info msg="Forcibly stopping sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\"" Jun 25 18:45:47.149273 sshd[5457]: Accepted publickey for core from 139.178.68.195 port 48038 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:47.151220 sshd[5457]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:47.161722 systemd-logind[2058]: New session 10 of user core. Jun 25 18:45:47.167451 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.181 [WARNING][5479] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0", GenerateName:"calico-kube-controllers-86cfbfdcf-", Namespace:"calico-system", SelfLink:"", UID:"aa3cf42f-c171-493a-bb2e-92d413f86d05", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86cfbfdcf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"0c95f0d7ad7f107a694272a17f795458e3c0b2bae2d3bbfbd64c3701f990bd60", Pod:"calico-kube-controllers-86cfbfdcf-nvcqf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicf34e9c29dc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.191 [INFO][5479] k8s.go 608: Cleaning up netns ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.193 [INFO][5479] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" iface="eth0" netns="" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.193 [INFO][5479] k8s.go 615: Releasing IP address(es) ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.194 [INFO][5479] utils.go 188: Calico CNI releasing IP address ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.235 [INFO][5488] ipam_plugin.go 411: Releasing address using handleID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.235 [INFO][5488] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.235 [INFO][5488] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.244 [WARNING][5488] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.244 [INFO][5488] ipam_plugin.go 439: Releasing address using workloadID ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" HandleID="k8s-pod-network.7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Workload="ip--172--31--31--33-k8s-calico--kube--controllers--86cfbfdcf--nvcqf-eth0" Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.245 [INFO][5488] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.249811 containerd[2091]: 2024-06-25 18:45:47.247 [INFO][5479] k8s.go 621: Teardown processing complete. ContainerID="7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f" Jun 25 18:45:47.250631 containerd[2091]: time="2024-06-25T18:45:47.249860197Z" level=info msg="TearDown network for sandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" successfully" Jun 25 18:45:47.254851 containerd[2091]: time="2024-06-25T18:45:47.254801918Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:47.254971 containerd[2091]: time="2024-06-25T18:45:47.254877511Z" level=info msg="RemovePodSandbox \"7babfd157faa1e2771e2674ec898ea65fa098814459f76984038d5bb3892ea5f\" returns successfully" Jun 25 18:45:47.255539 containerd[2091]: time="2024-06-25T18:45:47.255439319Z" level=info msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.325 [WARNING][5507] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8b7a1d64-49a7-49eb-97e6-0860978799bc", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5", Pod:"coredns-5dd5756b68-z8v6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2834182b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.325 [INFO][5507] k8s.go 608: Cleaning up netns ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.325 [INFO][5507] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" iface="eth0" netns="" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.325 [INFO][5507] k8s.go 615: Releasing IP address(es) ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.325 [INFO][5507] utils.go 188: Calico CNI releasing IP address ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.373 [INFO][5518] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.373 [INFO][5518] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.373 [INFO][5518] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.382 [WARNING][5518] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.382 [INFO][5518] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.385 [INFO][5518] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.391207 containerd[2091]: 2024-06-25 18:45:47.388 [INFO][5507] k8s.go 621: Teardown processing complete. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.391207 containerd[2091]: time="2024-06-25T18:45:47.390485908Z" level=info msg="TearDown network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" successfully" Jun 25 18:45:47.391207 containerd[2091]: time="2024-06-25T18:45:47.390519974Z" level=info msg="StopPodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" returns successfully" Jun 25 18:45:47.392564 containerd[2091]: time="2024-06-25T18:45:47.391309298Z" level=info msg="RemovePodSandbox for \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" Jun 25 18:45:47.392564 containerd[2091]: time="2024-06-25T18:45:47.391344984Z" level=info msg="Forcibly stopping sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\"" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.480 [WARNING][5536] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8b7a1d64-49a7-49eb-97e6-0860978799bc", ResourceVersion:"764", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"95539f4fe616bf9537ef760948456e3b5537ecb3e5c682d71e91ca8786a30fd5", Pod:"coredns-5dd5756b68-z8v6z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied2834182b2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.480 [INFO][5536] k8s.go 608: Cleaning up netns ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.480 [INFO][5536] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" iface="eth0" netns="" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.480 [INFO][5536] k8s.go 615: Releasing IP address(es) ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.480 [INFO][5536] utils.go 188: Calico CNI releasing IP address ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.520 [INFO][5542] ipam_plugin.go 411: Releasing address using handleID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.522 [INFO][5542] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.522 [INFO][5542] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.531 [WARNING][5542] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.531 [INFO][5542] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" HandleID="k8s-pod-network.4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--z8v6z-eth0" Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.533 [INFO][5542] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.537839 containerd[2091]: 2024-06-25 18:45:47.535 [INFO][5536] k8s.go 621: Teardown processing complete. ContainerID="4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63" Jun 25 18:45:47.538761 containerd[2091]: time="2024-06-25T18:45:47.537899824Z" level=info msg="TearDown network for sandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" successfully" Jun 25 18:45:47.553651 containerd[2091]: time="2024-06-25T18:45:47.553251037Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:47.553651 containerd[2091]: time="2024-06-25T18:45:47.553535642Z" level=info msg="RemovePodSandbox \"4e80f3fd2333469ed1f39e3f955f187a151a12b2d20c867f4b9797611e9f1e63\" returns successfully" Jun 25 18:45:47.554979 containerd[2091]: time="2024-06-25T18:45:47.554947298Z" level=info msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.653 [WARNING][5564] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a", Pod:"coredns-5dd5756b68-qllp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5d941f1684", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.654 [INFO][5564] k8s.go 608: Cleaning up netns ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.654 [INFO][5564] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" iface="eth0" netns="" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.654 [INFO][5564] k8s.go 615: Releasing IP address(es) ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.654 [INFO][5564] utils.go 188: Calico CNI releasing IP address ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.697 [INFO][5571] ipam_plugin.go 411: Releasing address using handleID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.697 [INFO][5571] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.697 [INFO][5571] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.708 [WARNING][5571] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.708 [INFO][5571] ipam_plugin.go 439: Releasing address using workloadID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.714 [INFO][5571] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.720218 containerd[2091]: 2024-06-25 18:45:47.717 [INFO][5564] k8s.go 621: Teardown processing complete. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.720218 containerd[2091]: time="2024-06-25T18:45:47.720182311Z" level=info msg="TearDown network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" successfully" Jun 25 18:45:47.720218 containerd[2091]: time="2024-06-25T18:45:47.720215841Z" level=info msg="StopPodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" returns successfully" Jun 25 18:45:47.723889 containerd[2091]: time="2024-06-25T18:45:47.722348371Z" level=info msg="RemovePodSandbox for \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" Jun 25 18:45:47.723889 containerd[2091]: time="2024-06-25T18:45:47.723803280Z" level=info msg="Forcibly stopping sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\"" Jun 25 18:45:47.872456 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:45:47.870859 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:45:47.870959 systemd-resolved[1971]: Flushed all caches. Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.821 [WARNING][5590] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"8434a9e8-834b-4b31-ad53-9bb8bd09d9ce", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 44, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"6ca365cb9aa4bcbf68fb13a18a3c200ce7b37c2588f6eba45fb88100d1f1215a", Pod:"coredns-5dd5756b68-qllp6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5d941f1684", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.822 [INFO][5590] k8s.go 608: Cleaning up netns ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.822 [INFO][5590] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" iface="eth0" netns="" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.822 [INFO][5590] k8s.go 615: Releasing IP address(es) ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.822 [INFO][5590] utils.go 188: Calico CNI releasing IP address ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.868 [INFO][5596] ipam_plugin.go 411: Releasing address using handleID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.868 [INFO][5596] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.868 [INFO][5596] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.877 [WARNING][5596] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.877 [INFO][5596] ipam_plugin.go 439: Releasing address using workloadID ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" HandleID="k8s-pod-network.37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Workload="ip--172--31--31--33-k8s-coredns--5dd5756b68--qllp6-eth0" Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.879 [INFO][5596] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 18:45:47.893003 containerd[2091]: 2024-06-25 18:45:47.882 [INFO][5590] k8s.go 621: Teardown processing complete. ContainerID="37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2" Jun 25 18:45:47.893003 containerd[2091]: time="2024-06-25T18:45:47.889204584Z" level=info msg="TearDown network for sandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" successfully" Jun 25 18:45:47.899354 containerd[2091]: time="2024-06-25T18:45:47.898580089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 18:45:47.899354 containerd[2091]: time="2024-06-25T18:45:47.898688709Z" level=info msg="RemovePodSandbox \"37916717a38b79cab081a22ef36cba8681649510dc05453153dfaddaf1952ba2\" returns successfully" Jun 25 18:45:47.974176 sshd[5457]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:47.987743 systemd-logind[2058]: Session 10 logged out. Waiting for processes to exit. Jun 25 18:45:47.992595 systemd[1]: sshd@9-172.31.31.33:22-139.178.68.195:48038.service: Deactivated successfully. Jun 25 18:45:47.997855 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 18:45:48.013972 systemd[1]: Started sshd@10-172.31.31.33:22-139.178.68.195:48054.service - OpenSSH per-connection server daemon (139.178.68.195:48054). Jun 25 18:45:48.019106 systemd-logind[2058]: Removed session 10. Jun 25 18:45:48.198087 sshd[5606]: Accepted publickey for core from 139.178.68.195 port 48054 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:48.203955 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:48.211994 systemd-logind[2058]: New session 11 of user core. Jun 25 18:45:48.217838 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 18:45:48.727728 sshd[5606]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:48.740495 systemd[1]: sshd@10-172.31.31.33:22-139.178.68.195:48054.service: Deactivated successfully. Jun 25 18:45:48.759144 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 18:45:48.767089 systemd-logind[2058]: Session 11 logged out. Waiting for processes to exit. Jun 25 18:45:48.788572 systemd[1]: Started sshd@11-172.31.31.33:22-139.178.68.195:49630.service - OpenSSH per-connection server daemon (139.178.68.195:49630). Jun 25 18:45:48.789711 systemd-logind[2058]: Removed session 11. 
Jun 25 18:45:48.977556 sshd[5619]: Accepted publickey for core from 139.178.68.195 port 49630 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:48.980986 sshd[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:48.988870 systemd-logind[2058]: New session 12 of user core. Jun 25 18:45:48.994698 systemd[1]: Started session-12.scope - Session 12 of User core. Jun 25 18:45:49.268030 sshd[5619]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:49.279497 systemd-logind[2058]: Session 12 logged out. Waiting for processes to exit. Jun 25 18:45:49.279981 systemd[1]: sshd@11-172.31.31.33:22-139.178.68.195:49630.service: Deactivated successfully. Jun 25 18:45:49.303838 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 18:45:49.309221 systemd-logind[2058]: Removed session 12. Jun 25 18:45:54.299767 systemd[1]: Started sshd@12-172.31.31.33:22-139.178.68.195:49644.service - OpenSSH per-connection server daemon (139.178.68.195:49644). Jun 25 18:45:54.472388 sshd[5665]: Accepted publickey for core from 139.178.68.195 port 49644 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:54.474223 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:54.480741 systemd-logind[2058]: New session 13 of user core. Jun 25 18:45:54.488961 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 18:45:54.738194 sshd[5665]: pam_unix(sshd:session): session closed for user core Jun 25 18:45:54.745121 systemd-logind[2058]: Session 13 logged out. Waiting for processes to exit. Jun 25 18:45:54.746499 systemd[1]: sshd@12-172.31.31.33:22-139.178.68.195:49644.service: Deactivated successfully. Jun 25 18:45:54.761720 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 18:45:54.765093 systemd-logind[2058]: Removed session 13. Jun 25 18:45:59.768931 systemd[1]: Started sshd@13-172.31.31.33:22-139.178.68.195:32956.service - OpenSSH per-connection server daemon (139.178.68.195:32956). Jun 25 18:45:59.935670 sshd[5686]: Accepted publickey for core from 139.178.68.195 port 32956 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:45:59.938165 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:45:59.944308 systemd-logind[2058]: New session 14 of user core. Jun 25 18:45:59.949964 systemd[1]: Started session-14.scope - Session 14 of User core. Jun 25 18:46:00.156737 sshd[5686]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:00.165128 systemd-logind[2058]: Session 14 logged out. Waiting for processes to exit. Jun 25 18:46:00.165683 systemd[1]: sshd@13-172.31.31.33:22-139.178.68.195:32956.service: Deactivated successfully. Jun 25 18:46:00.182812 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 18:46:00.185763 systemd-logind[2058]: Removed session 14. Jun 25 18:46:05.186205 systemd[1]: Started sshd@14-172.31.31.33:22-139.178.68.195:32960.service - OpenSSH per-connection server daemon (139.178.68.195:32960). Jun 25 18:46:05.358552 sshd[5707]: Accepted publickey for core from 139.178.68.195 port 32960 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:05.360987 sshd[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:05.367500 systemd-logind[2058]: New session 15 of user core. Jun 25 18:46:05.371761 systemd[1]: Started session-15.scope - Session 15 of User core. 
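Each SSH connection above follows the same lifecycle: sshd accepts the public key, pam_unix opens the session, systemd-logind allocates "session N", a per-connection sshd@... service plus a session-N.scope run for the duration of the login, and both are torn down when the session closes. A hedged Go sketch of turning those "New session N" / "Removed session N" timestamps into session durations; the two timestamp pairs are copied from sessions 10 and 12 above, and the year-less journald timestamp layout is an assumption.

// Compute rough session durations from journald-style timestamps in the log above.
package main

import (
	"fmt"
	"time"
)

const layout = "Jan 2 15:04:05.000000" // journald short timestamps carry no year

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	sessions := map[string][2]string{
		// session ID -> {"New session" time, "Removed session" time}
		"10": {"Jun 25 18:45:47.161722", "Jun 25 18:45:48.019106"},
		"12": {"Jun 25 18:45:48.988870", "Jun 25 18:45:49.309221"},
	}
	for id, ts := range sessions {
		d := mustParse(ts[1]).Sub(mustParse(ts[0]))
		fmt.Printf("session %s lasted about %v\n", id, d)
	}
}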
Jun 25 18:46:05.637027 sshd[5707]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:05.641885 systemd-logind[2058]: Session 15 logged out. Waiting for processes to exit. Jun 25 18:46:05.642593 systemd[1]: sshd@14-172.31.31.33:22-139.178.68.195:32960.service: Deactivated successfully. Jun 25 18:46:05.650750 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 18:46:05.652510 systemd-logind[2058]: Removed session 15. Jun 25 18:46:07.542024 systemd[1]: run-containerd-runc-k8s.io-d5a424af0f520a351019a62fe972da84af9e4a01dab7fd0dc7730df49eafc050-runc.ufKXuL.mount: Deactivated successfully. Jun 25 18:46:10.673750 systemd[1]: Started sshd@15-172.31.31.33:22-139.178.68.195:47502.service - OpenSSH per-connection server daemon (139.178.68.195:47502). Jun 25 18:46:10.877333 sshd[5745]: Accepted publickey for core from 139.178.68.195 port 47502 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:10.878921 sshd[5745]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:10.894226 systemd-logind[2058]: New session 16 of user core. Jun 25 18:46:10.899760 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 18:46:11.242931 sshd[5745]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:11.254626 systemd[1]: sshd@15-172.31.31.33:22-139.178.68.195:47502.service: Deactivated successfully. Jun 25 18:46:11.261153 systemd-logind[2058]: Session 16 logged out. Waiting for processes to exit. Jun 25 18:46:11.275872 systemd[1]: Started sshd@16-172.31.31.33:22-139.178.68.195:47510.service - OpenSSH per-connection server daemon (139.178.68.195:47510). Jun 25 18:46:11.277683 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 18:46:11.285270 systemd-logind[2058]: Removed session 16. Jun 25 18:46:11.518171 sshd[5759]: Accepted publickey for core from 139.178.68.195 port 47510 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:11.534653 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:11.556460 systemd-logind[2058]: New session 17 of user core. Jun 25 18:46:11.564873 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 18:46:12.395854 sshd[5759]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:12.407801 systemd[1]: sshd@16-172.31.31.33:22-139.178.68.195:47510.service: Deactivated successfully. Jun 25 18:46:12.441150 systemd-logind[2058]: Session 17 logged out. Waiting for processes to exit. Jun 25 18:46:12.447333 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 18:46:12.459395 systemd[1]: Started sshd@17-172.31.31.33:22-139.178.68.195:47514.service - OpenSSH per-connection server daemon (139.178.68.195:47514). Jun 25 18:46:12.464701 systemd-logind[2058]: Removed session 17. Jun 25 18:46:12.706542 sshd[5777]: Accepted publickey for core from 139.178.68.195 port 47514 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:12.709074 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:12.717062 systemd-logind[2058]: New session 18 of user core. Jun 25 18:46:12.723285 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 18:46:14.625263 sshd[5777]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:14.638310 systemd[1]: sshd@17-172.31.31.33:22-139.178.68.195:47514.service: Deactivated successfully. Jun 25 18:46:14.650551 systemd-logind[2058]: Session 18 logged out. 
Waiting for processes to exit. Jun 25 18:46:14.667294 systemd[1]: Started sshd@18-172.31.31.33:22-139.178.68.195:47516.service - OpenSSH per-connection server daemon (139.178.68.195:47516). Jun 25 18:46:14.669162 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 18:46:14.672239 systemd-logind[2058]: Removed session 18. Jun 25 18:46:14.898689 sshd[5821]: Accepted publickey for core from 139.178.68.195 port 47516 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:14.900774 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:14.910853 systemd-logind[2058]: New session 19 of user core. Jun 25 18:46:14.917486 systemd[1]: Started session-19.scope - Session 19 of User core. Jun 25 18:46:15.721856 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:46:15.714154 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:46:15.714199 systemd-resolved[1971]: Flushed all caches. Jun 25 18:46:15.963146 sshd[5821]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:15.974209 systemd[1]: sshd@18-172.31.31.33:22-139.178.68.195:47516.service: Deactivated successfully. Jun 25 18:46:15.991539 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 18:46:15.997398 systemd-logind[2058]: Session 19 logged out. Waiting for processes to exit. Jun 25 18:46:16.008488 systemd[1]: Started sshd@19-172.31.31.33:22-139.178.68.195:47518.service - OpenSSH per-connection server daemon (139.178.68.195:47518). Jun 25 18:46:16.009896 systemd-logind[2058]: Removed session 19. Jun 25 18:46:16.187126 sshd[5836]: Accepted publickey for core from 139.178.68.195 port 47518 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:16.189331 sshd[5836]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:16.194729 systemd-logind[2058]: New session 20 of user core. Jun 25 18:46:16.207342 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 18:46:16.410698 sshd[5836]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:16.416602 systemd[1]: sshd@19-172.31.31.33:22-139.178.68.195:47518.service: Deactivated successfully. Jun 25 18:46:16.424039 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 18:46:16.426558 systemd-logind[2058]: Session 20 logged out. Waiting for processes to exit. Jun 25 18:46:16.428666 systemd-logind[2058]: Removed session 20. Jun 25 18:46:21.450406 systemd[1]: Started sshd@20-172.31.31.33:22-139.178.68.195:58104.service - OpenSSH per-connection server daemon (139.178.68.195:58104). Jun 25 18:46:21.633711 sshd[5852]: Accepted publickey for core from 139.178.68.195 port 58104 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:21.638566 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:21.652783 systemd-logind[2058]: New session 21 of user core. Jun 25 18:46:21.657899 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 18:46:21.948264 kubelet[3343]: I0625 18:46:21.948214 3343 topology_manager.go:215] "Topology Admit Handler" podUID="256690f1-cbcc-4b5c-8f02-eb9f5567e318" podNamespace="calico-apiserver" podName="calico-apiserver-77c76c464b-knfqm" Jun 25 18:46:22.072942 sshd[5852]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:22.078455 systemd[1]: sshd@20-172.31.31.33:22-139.178.68.195:58104.service: Deactivated successfully. 
Jun 25 18:46:22.089443 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 18:46:22.089692 systemd-logind[2058]: Session 21 logged out. Waiting for processes to exit. Jun 25 18:46:22.095342 systemd-logind[2058]: Removed session 21. Jun 25 18:46:22.142295 kubelet[3343]: I0625 18:46:22.142223 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frqrf\" (UniqueName: \"kubernetes.io/projected/256690f1-cbcc-4b5c-8f02-eb9f5567e318-kube-api-access-frqrf\") pod \"calico-apiserver-77c76c464b-knfqm\" (UID: \"256690f1-cbcc-4b5c-8f02-eb9f5567e318\") " pod="calico-apiserver/calico-apiserver-77c76c464b-knfqm" Jun 25 18:46:22.142535 kubelet[3343]: I0625 18:46:22.142518 3343 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/256690f1-cbcc-4b5c-8f02-eb9f5567e318-calico-apiserver-certs\") pod \"calico-apiserver-77c76c464b-knfqm\" (UID: \"256690f1-cbcc-4b5c-8f02-eb9f5567e318\") " pod="calico-apiserver/calico-apiserver-77c76c464b-knfqm" Jun 25 18:46:22.290203 kubelet[3343]: E0625 18:46:22.290062 3343 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 18:46:22.358896 kubelet[3343]: E0625 18:46:22.358853 3343 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/256690f1-cbcc-4b5c-8f02-eb9f5567e318-calico-apiserver-certs podName:256690f1-cbcc-4b5c-8f02-eb9f5567e318 nodeName:}" failed. No retries permitted until 2024-06-25 18:46:22.825041635 +0000 UTC m=+96.812408575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/256690f1-cbcc-4b5c-8f02-eb9f5567e318-calico-apiserver-certs") pod "calico-apiserver-77c76c464b-knfqm" (UID: "256690f1-cbcc-4b5c-8f02-eb9f5567e318") : secret "calico-apiserver-certs" not found Jun 25 18:46:22.915442 containerd[2091]: time="2024-06-25T18:46:22.915396870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c76c464b-knfqm,Uid:256690f1-cbcc-4b5c-8f02-eb9f5567e318,Namespace:calico-apiserver,Attempt:0,}" Jun 25 18:46:23.208933 systemd-networkd[1657]: calica3d61b0b88: Link UP Jun 25 18:46:23.210532 systemd-networkd[1657]: calica3d61b0b88: Gained carrier Jun 25 18:46:23.219730 (udev-worker)[5914]: Network interface NamePolicy= disabled on kernel command line. 
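The calico-apiserver pod above is admitted before its "calico-apiserver-certs" Secret exists, so the Secret volume mount fails and kubelet schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms"); once the Secret shows up, the sandbox is created and calica3d61b0b88 gains carrier. A rough Go sketch of that wait-and-retry shape, not kubelet's actual code: the 500ms initial delay comes from the log, while the doubling factor and the cap are assumptions for illustration.

// Retry a mount with a doubling delay until the referenced Secret exists.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errSecretNotFound = errors.New(`secret "calico-apiserver-certs" not found`)

// getSecret pretends the Secret is created shortly after the pod is admitted.
func getSecret(createdAt time.Time) error {
	if time.Now().Before(createdAt) {
		return errSecretNotFound
	}
	return nil
}

func mountWithBackoff(createdAt time.Time) {
	delay := 500 * time.Millisecond // initial durationBeforeRetry seen in the log
	const maxDelay = 16 * time.Second // illustrative cap

	for {
		err := getSecret(createdAt)
		if err == nil {
			fmt.Println("secret found, volume mounted, sandbox can start")
			return
		}
		fmt.Printf("MountVolume.SetUp failed: %v; retrying in %v\n", err, delay)
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

func main() {
	// In the log the Secret appears within roughly a second of the first failure.
	mountWithBackoff(time.Now().Add(1200 * time.Millisecond))
}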
Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.079 [INFO][5896] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0 calico-apiserver-77c76c464b- calico-apiserver 256690f1-cbcc-4b5c-8f02-eb9f5567e318 1101 0 2024-06-25 18:46:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77c76c464b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-31-33 calico-apiserver-77c76c464b-knfqm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calica3d61b0b88 [] []}} ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.080 [INFO][5896] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.129 [INFO][5906] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" HandleID="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Workload="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.143 [INFO][5906] ipam_plugin.go 264: Auto assigning IP ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" HandleID="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Workload="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000267ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-31-33", "pod":"calico-apiserver-77c76c464b-knfqm", "timestamp":"2024-06-25 18:46:23.129048562 +0000 UTC"}, Hostname:"ip-172-31-31-33", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.143 [INFO][5906] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.143 [INFO][5906] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.143 [INFO][5906] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-31-33' Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.146 [INFO][5906] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.156 [INFO][5906] ipam.go 372: Looking up existing affinities for host host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.166 [INFO][5906] ipam.go 489: Trying affinity for 192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.169 [INFO][5906] ipam.go 155: Attempting to load block cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.171 [INFO][5906] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.128/26 host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.172 [INFO][5906] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.128/26 handle="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.173 [INFO][5906] ipam.go 1685: Creating new handle: k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3 Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.181 [INFO][5906] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.128/26 handle="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.193 [INFO][5906] ipam.go 1216: Successfully claimed IPs: [192.168.32.133/26] block=192.168.32.128/26 handle="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.193 [INFO][5906] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.133/26] handle="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" host="ip-172-31-31-33" Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.193 [INFO][5906] ipam_plugin.go 373: Released host-wide IPAM lock. 
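The IPAM sequence above claims 192.168.32.133 out of the node's affine block 192.168.32.128/26. As a quick arithmetic check (a standalone snippet, not Calico code), the /26 block spans 64 addresses, 192.168.32.128 through 192.168.32.191, and the claimed pod address falls inside it:

    import ipaddress

    # Worked check of the assignment in the log above.
    block = ipaddress.ip_network("192.168.32.128/26")
    pod_ip = ipaddress.ip_address("192.168.32.133")

    print(block.num_addresses)                              # 64
    print(pod_ip in block)                                  # True
    print(block.network_address, block.broadcast_address)   # 192.168.32.128 192.168.32.191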
Jun 25 18:46:23.249442 containerd[2091]: 2024-06-25 18:46:23.193 [INFO][5906] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.133/26] IPv6=[] ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" HandleID="k8s-pod-network.72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Workload="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.199 [INFO][5896] k8s.go 386: Populated endpoint ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0", GenerateName:"calico-apiserver-77c76c464b-", Namespace:"calico-apiserver", SelfLink:"", UID:"256690f1-cbcc-4b5c-8f02-eb9f5567e318", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 46, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c76c464b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"", Pod:"calico-apiserver-77c76c464b-knfqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica3d61b0b88", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.200 [INFO][5896] k8s.go 387: Calico CNI using IPs: [192.168.32.133/32] ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.200 [INFO][5896] dataplane_linux.go 68: Setting the host side veth name to calica3d61b0b88 ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.211 [INFO][5896] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.213 [INFO][5896] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0", GenerateName:"calico-apiserver-77c76c464b-", Namespace:"calico-apiserver", SelfLink:"", UID:"256690f1-cbcc-4b5c-8f02-eb9f5567e318", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 18, 46, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77c76c464b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-31-33", ContainerID:"72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3", Pod:"calico-apiserver-77c76c464b-knfqm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calica3d61b0b88", MAC:"a2:8e:d1:37:d3:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 18:46:23.251749 containerd[2091]: 2024-06-25 18:46:23.234 [INFO][5896] k8s.go 500: Wrote updated endpoint to datastore ContainerID="72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3" Namespace="calico-apiserver" Pod="calico-apiserver-77c76c464b-knfqm" WorkloadEndpoint="ip--172--31--31--33-k8s-calico--apiserver--77c76c464b--knfqm-eth0" Jun 25 18:46:23.345138 containerd[2091]: time="2024-06-25T18:46:23.345032096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 18:46:23.345138 containerd[2091]: time="2024-06-25T18:46:23.345110239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:46:23.345752 containerd[2091]: time="2024-06-25T18:46:23.345672514Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 18:46:23.345909 containerd[2091]: time="2024-06-25T18:46:23.345722385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 18:46:23.453146 containerd[2091]: time="2024-06-25T18:46:23.453098375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77c76c464b-knfqm,Uid:256690f1-cbcc-4b5c-8f02-eb9f5567e318,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3\"" Jun 25 18:46:23.455308 containerd[2091]: time="2024-06-25T18:46:23.455043868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 18:46:25.119499 systemd-networkd[1657]: calica3d61b0b88: Gained IPv6LL Jun 25 18:46:25.694457 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:46:25.694506 systemd-resolved[1971]: Flushed all caches. Jun 25 18:46:25.696449 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:46:26.145919 containerd[2091]: time="2024-06-25T18:46:26.141582201Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:46:26.145919 containerd[2091]: time="2024-06-25T18:46:26.144084118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jun 25 18:46:26.148855 containerd[2091]: time="2024-06-25T18:46:26.147914073Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:46:26.171835 containerd[2091]: time="2024-06-25T18:46:26.170498964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 18:46:26.173938 containerd[2091]: time="2024-06-25T18:46:26.173426399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 2.718114193s" Jun 25 18:46:26.173938 containerd[2091]: time="2024-06-25T18:46:26.173476033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jun 25 18:46:26.187505 containerd[2091]: time="2024-06-25T18:46:26.186972091Z" level=info msg="CreateContainer within sandbox \"72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 18:46:26.218392 containerd[2091]: time="2024-06-25T18:46:26.216288446Z" level=info msg="CreateContainer within sandbox \"72c10a2307824493d30da73356eeb6c2d689609dff78fb0fc5690d1bf70cb0b3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6c4a8837f916043fdf944050817b8968a54f8c28f07fc53107eb7a882af25b64\"" Jun 25 18:46:26.218392 containerd[2091]: time="2024-06-25T18:46:26.217300370Z" level=info msg="StartContainer for \"6c4a8837f916043fdf944050817b8968a54f8c28f07fc53107eb7a882af25b64\"" Jun 25 18:46:26.531880 containerd[2091]: time="2024-06-25T18:46:26.530216167Z" level=info msg="StartContainer for \"6c4a8837f916043fdf944050817b8968a54f8c28f07fc53107eb7a882af25b64\" returns successfully" Jun 25 18:46:27.119173 
systemd[1]: Started sshd@21-172.31.31.33:22-139.178.68.195:58120.service - OpenSSH per-connection server daemon (139.178.68.195:58120). Jun 25 18:46:27.258054 ntpd[2047]: Listen normally on 12 calica3d61b0b88 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:46:27.267046 ntpd[2047]: 25 Jun 18:46:27 ntpd[2047]: Listen normally on 12 calica3d61b0b88 [fe80::ecee:eeff:feee:eeee%11]:123 Jun 25 18:46:27.332426 kubelet[3343]: I0625 18:46:27.332332 3343 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77c76c464b-knfqm" podStartSLOduration=3.471851179 podCreationTimestamp="2024-06-25 18:46:21 +0000 UTC" firstStartedPulling="2024-06-25 18:46:23.454586283 +0000 UTC m=+97.441953210" lastFinishedPulling="2024-06-25 18:46:26.174623249 +0000 UTC m=+100.161990175" observedRunningTime="2024-06-25 18:46:27.188388472 +0000 UTC m=+101.175755411" watchObservedRunningTime="2024-06-25 18:46:27.191888144 +0000 UTC m=+101.179255082" Jun 25 18:46:27.412055 sshd[6020]: Accepted publickey for core from 139.178.68.195 port 58120 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:27.422358 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:27.441539 systemd-logind[2058]: New session 22 of user core. Jun 25 18:46:27.450555 systemd[1]: Started session-22.scope - Session 22 of User core. Jun 25 18:46:27.746860 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:46:27.743442 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:46:27.743451 systemd-resolved[1971]: Flushed all caches. Jun 25 18:46:28.235888 sshd[6020]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:28.266798 systemd[1]: sshd@21-172.31.31.33:22-139.178.68.195:58120.service: Deactivated successfully. Jun 25 18:46:28.279796 systemd-logind[2058]: Session 22 logged out. Waiting for processes to exit. Jun 25 18:46:28.280933 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 18:46:28.284786 systemd-logind[2058]: Removed session 22. Jun 25 18:46:29.791449 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:46:29.791797 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:46:29.791839 systemd-resolved[1971]: Flushed all caches. Jun 25 18:46:33.264753 systemd[1]: Started sshd@22-172.31.31.33:22-139.178.68.195:58322.service - OpenSSH per-connection server daemon (139.178.68.195:58322). Jun 25 18:46:33.461640 sshd[6043]: Accepted publickey for core from 139.178.68.195 port 58322 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:33.463823 sshd[6043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:33.472443 systemd-logind[2058]: New session 23 of user core. Jun 25 18:46:33.476702 systemd[1]: Started session-23.scope - Session 23 of User core. Jun 25 18:46:33.894035 sshd[6043]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:33.899940 systemd[1]: sshd@22-172.31.31.33:22-139.178.68.195:58322.service: Deactivated successfully. Jun 25 18:46:33.904903 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 18:46:33.905431 systemd-logind[2058]: Session 23 logged out. Waiting for processes to exit. Jun 25 18:46:33.914550 systemd-logind[2058]: Removed session 23. Jun 25 18:46:38.943361 systemd[1]: Started sshd@23-172.31.31.33:22-139.178.68.195:37832.service - OpenSSH per-connection server daemon (139.178.68.195:37832). 
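Once the calica3d61b0b88 interface gains carrier and IPv6LL, ntpd starts listening on fe80::ecee:eeff:feee:eeee%11. That address is the modified EUI-64 link-local form of the MAC ee:ee:ee:ee:ee:ee, which (stated here as an assumption, since the log only shows the container-side MAC) is the fixed MAC Calico conventionally gives the host side of its veth pairs. A short worked derivation:

    # Modified EUI-64: flip the universal/local bit of the first MAC octet,
    # insert ff:fe in the middle, and prefix with fe80::.
    def link_local_from_mac(mac: str) -> str:
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                       # flip the universal/local bit
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
        groups = [f"{eui64[i] << 8 | eui64[i + 1]:x}" for i in range(0, 8, 2)]
        return "fe80::" + ":".join(groups)

    print(link_local_from_mac("ee:ee:ee:ee:ee:ee"))   # fe80::ecee:eeff:feee:eeee

The computed address matches the one ntpd reports for the new interface.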
Jun 25 18:46:39.263576 sshd[6087]: Accepted publickey for core from 139.178.68.195 port 37832 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:39.271482 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:39.283038 systemd-logind[2058]: New session 24 of user core. Jun 25 18:46:39.288814 systemd[1]: Started session-24.scope - Session 24 of User core. Jun 25 18:46:39.713218 systemd-resolved[1971]: Under memory pressure, flushing caches. Jun 25 18:46:39.713665 systemd-journald[1565]: Under memory pressure, flushing caches. Jun 25 18:46:39.713253 systemd-resolved[1971]: Flushed all caches. Jun 25 18:46:39.739592 sshd[6087]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:39.746437 systemd-logind[2058]: Session 24 logged out. Waiting for processes to exit. Jun 25 18:46:39.747961 systemd[1]: sshd@23-172.31.31.33:22-139.178.68.195:37832.service: Deactivated successfully. Jun 25 18:46:39.763058 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 18:46:39.764101 systemd-logind[2058]: Removed session 24. Jun 25 18:46:44.773287 systemd[1]: Started sshd@24-172.31.31.33:22-139.178.68.195:37844.service - OpenSSH per-connection server daemon (139.178.68.195:37844). Jun 25 18:46:44.980393 sshd[6103]: Accepted publickey for core from 139.178.68.195 port 37844 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:44.981040 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:44.998633 systemd-logind[2058]: New session 25 of user core. Jun 25 18:46:45.004444 systemd[1]: Started session-25.scope - Session 25 of User core. Jun 25 18:46:45.495551 sshd[6103]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:45.500176 systemd[1]: sshd@24-172.31.31.33:22-139.178.68.195:37844.service: Deactivated successfully. Jun 25 18:46:45.507063 systemd-logind[2058]: Session 25 logged out. Waiting for processes to exit. Jun 25 18:46:45.507929 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 18:46:45.514228 systemd-logind[2058]: Removed session 25. Jun 25 18:46:50.549094 systemd[1]: Started sshd@25-172.31.31.33:22-139.178.68.195:41292.service - OpenSSH per-connection server daemon (139.178.68.195:41292). Jun 25 18:46:50.727401 sshd[6124]: Accepted publickey for core from 139.178.68.195 port 41292 ssh2: RSA SHA256:zWpntMacToOmwCaU62vdvg6t1el6aib1JfI6hz3EHOQ Jun 25 18:46:50.734625 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 18:46:50.741232 systemd-logind[2058]: New session 26 of user core. Jun 25 18:46:50.748643 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 18:46:51.078880 sshd[6124]: pam_unix(sshd:session): session closed for user core Jun 25 18:46:51.086786 systemd[1]: sshd@25-172.31.31.33:22-139.178.68.195:41292.service: Deactivated successfully. Jun 25 18:46:51.095226 systemd-logind[2058]: Session 26 logged out. Waiting for processes to exit. Jun 25 18:46:51.096615 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 18:46:51.106856 systemd-logind[2058]: Removed session 26. Jun 25 18:46:51.692448 systemd[1]: run-containerd-runc-k8s.io-5964ccbef60860d43b8a38e9231f0c9ce359f91b85a4f3935922e13b8ad65000-runc.790Gg9.mount: Deactivated successfully.
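For reference, the podStartSLOduration=3.471851179 reported by the kubelet pod_startup_latency_tracker entry earlier (18:46:27) can be roughly reconstructed from the timestamps it logs: time from pod creation until the pod was observed running, minus the time spent pulling the image. The snippet below uses those logged timestamps, trimmed to microsecond precision; it lands within a few milliseconds of the reported value, so kubelet's exact internal bookkeeping evidently differs only slightly from this simplified formula.

    from datetime import datetime

    # Approximate reconstruction of the logged podStartSLOduration.
    def ts(s):
        return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f")

    created    = datetime(2024, 6, 25, 18, 46, 21)          # podCreationTimestamp
    pull_start = ts("2024-06-25 18:46:23.454586283")        # firstStartedPulling
    pull_end   = ts("2024-06-25 18:46:26.174623249")        # lastFinishedPulling
    running    = ts("2024-06-25 18:46:27.188388472")        # observedRunningTime

    print((running - created) - (pull_end - pull_start))    # ~0:00:03.468351

The image-pull interval used here (about 2.72 s) also matches the containerd message that reports pulling ghcr.io/flatcar/calico/apiserver:v3.28.0 in 2.718114193s.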