Jul 2 00:23:20.087302 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:23:20.087343 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:23:20.087358 kernel: BIOS-provided physical RAM map: Jul 2 00:23:20.087369 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 00:23:20.087379 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 00:23:20.087389 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 00:23:20.087912 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jul 2 00:23:20.087928 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jul 2 00:23:20.087939 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jul 2 00:23:20.087949 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 00:23:20.087960 kernel: NX (Execute Disable) protection: active Jul 2 00:23:20.087971 kernel: APIC: Static calls initialized Jul 2 00:23:20.087981 kernel: SMBIOS 2.7 present. Jul 2 00:23:20.087992 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 2 00:23:20.088008 kernel: Hypervisor detected: KVM Jul 2 00:23:20.088021 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:23:20.088034 kernel: kvm-clock: using sched offset of 6846016898 cycles Jul 2 00:23:20.088047 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:23:20.088058 kernel: tsc: Detected 2499.998 MHz processor Jul 2 00:23:20.088071 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:23:20.088083 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:23:20.088099 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jul 2 00:23:20.088112 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 2 00:23:20.088123 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:23:20.088134 kernel: Using GB pages for direct mapping Jul 2 00:23:20.088146 kernel: ACPI: Early table checksum verification disabled Jul 2 00:23:20.088158 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jul 2 00:23:20.088170 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jul 2 00:23:20.088182 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 00:23:20.088194 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 2 00:23:20.088210 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jul 2 00:23:20.088222 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 00:23:20.088233 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 00:23:20.088245 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 2 00:23:20.088257 kernel: ACPI: SLIT 
0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 00:23:20.088269 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 2 00:23:20.088280 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 2 00:23:20.088292 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 00:23:20.088356 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jul 2 00:23:20.088373 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jul 2 00:23:20.088392 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jul 2 00:23:20.088406 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jul 2 00:23:20.088456 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jul 2 00:23:20.088471 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jul 2 00:23:20.088487 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jul 2 00:23:20.088534 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jul 2 00:23:20.088549 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jul 2 00:23:20.088562 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jul 2 00:23:20.088574 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 00:23:20.088619 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 00:23:20.088635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 2 00:23:20.088742 kernel: NUMA: Initialized distance table, cnt=1 Jul 2 00:23:20.088884 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jul 2 00:23:20.088905 kernel: Zone ranges: Jul 2 00:23:20.088918 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:23:20.088931 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jul 2 00:23:20.088943 kernel: Normal empty Jul 2 00:23:20.088956 kernel: Movable zone start for each node Jul 2 00:23:20.088967 kernel: Early memory node ranges Jul 2 00:23:20.088980 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 00:23:20.088992 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jul 2 00:23:20.089005 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jul 2 00:23:20.089020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:23:20.089033 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 00:23:20.089045 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jul 2 00:23:20.089057 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 00:23:20.089069 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:23:20.089082 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 2 00:23:20.089094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:23:20.089106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:23:20.089118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:23:20.089134 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:23:20.089147 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:23:20.089159 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 00:23:20.089172 kernel: TSC deadline timer available Jul 2 00:23:20.089184 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 00:23:20.089197 kernel: kvm-guest: APIC: eoi() replaced with 
kvm_guest_apic_eoi_write() Jul 2 00:23:20.089210 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jul 2 00:23:20.089223 kernel: Booting paravirtualized kernel on KVM Jul 2 00:23:20.089236 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:23:20.089252 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 00:23:20.089265 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jul 2 00:23:20.089278 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jul 2 00:23:20.089296 kernel: pcpu-alloc: [0] 0 1 Jul 2 00:23:20.089309 kernel: kvm-guest: PV spinlocks enabled Jul 2 00:23:20.090972 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 00:23:20.090995 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:23:20.091010 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:23:20.091028 kernel: random: crng init done Jul 2 00:23:20.091041 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:23:20.091056 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 00:23:20.091069 kernel: Fallback order for Node 0: 0 Jul 2 00:23:20.091084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jul 2 00:23:20.091244 kernel: Policy zone: DMA32 Jul 2 00:23:20.091357 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:23:20.091374 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved) Jul 2 00:23:20.091389 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:23:20.091409 kernel: Kernel/User page tables isolation: enabled Jul 2 00:23:20.091423 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:23:20.091437 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:23:20.091586 kernel: Dynamic Preempt: voluntary Jul 2 00:23:20.091603 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:23:20.091644 kernel: rcu: RCU event tracing is enabled. Jul 2 00:23:20.093222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:23:20.093240 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:23:20.093257 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:23:20.093273 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:23:20.093304 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:23:20.093320 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:23:20.093336 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 00:23:20.093355 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
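The command line echoed above carries the Flatcar-specific parameters (mount.usr, verity.usr, verity.usrhash, root=LABEL=ROOT) that the initrd later acts on. As a minimal sketch, not part of the boot log, this is roughly how such a line can be split into key/value options; the sample string is copied from the log and the parse_cmdline helper is hypothetical:

```python
# Hedged sketch: split a kernel command line (as logged above) into key/value
# options the way an initrd helper might, to pull out the Flatcar-specific
# verity and usr-mount parameters.
cmdline = (
    "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
    "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
    "rootflags=rw mount.usrflags=ro root=LABEL=ROOT "
    "verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b"
)

def parse_cmdline(line: str) -> dict:
    opts = {}
    for token in line.split():
        key, _, value = token.partition("=")
        opts[key] = value  # bare flags without '=' map to an empty string
    return opts

opts = parse_cmdline(cmdline)
print(opts["root"], opts["verity.usrhash"][:12])
# On a live system the same data is readable from /proc/cmdline:
#   opts = parse_cmdline(open("/proc/cmdline").read())
```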
Jul 2 00:23:20.093371 kernel: Console: colour VGA+ 80x25 Jul 2 00:23:20.093386 kernel: printk: console [ttyS0] enabled Jul 2 00:23:20.093402 kernel: ACPI: Core revision 20230628 Jul 2 00:23:20.093417 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 2 00:23:20.093432 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:23:20.093451 kernel: x2apic enabled Jul 2 00:23:20.093467 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:23:20.093497 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 2 00:23:20.093517 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jul 2 00:23:20.093533 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 00:23:20.093549 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 00:23:20.093566 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:23:20.093582 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:23:20.093598 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:23:20.093614 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:23:20.093629 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 2 00:23:20.093644 kernel: RETBleed: Vulnerable Jul 2 00:23:20.093749 kernel: Speculative Store Bypass: Vulnerable Jul 2 00:23:20.093766 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:23:20.093850 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:23:20.093868 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 00:23:20.093882 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:23:20.093896 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:23:20.093915 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:23:20.093930 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 2 00:23:20.093943 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 2 00:23:20.093958 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 00:23:20.093972 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 00:23:20.093986 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 00:23:20.094000 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 2 00:23:20.094014 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:23:20.094028 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 2 00:23:20.094043 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 2 00:23:20.094059 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 2 00:23:20.094079 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 2 00:23:20.094096 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 2 00:23:20.094112 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 2 00:23:20.094129 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
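The mitigation messages above (Spectre V2 retpolines, "RETBleed: Vulnerable", MDS and MMIO Stale Data warnings) are also summarized by the kernel at runtime. A small sketch, assuming a Linux kernel that exposes the standard /sys/devices/system/cpu/vulnerabilities interface:

```python
# Minimal sketch: list the kernel's per-vulnerability mitigation status,
# the runtime counterpart of the boot messages logged above.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    # Each file holds a one-line status such as "Mitigation: Retpolines" or "Vulnerable".
    print(f"{entry.name:>25}: {entry.read_text().strip()}")
```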
Jul 2 00:23:20.094145 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:23:20.094161 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:23:20.094178 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:23:20.094194 kernel: SELinux: Initializing. Jul 2 00:23:20.094211 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:23:20.094227 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:23:20.094244 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 00:23:20.094261 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:23:20.094282 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:23:20.094300 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:23:20.094317 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 00:23:20.094334 kernel: signal: max sigframe size: 3632 Jul 2 00:23:20.094352 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:23:20.094369 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:23:20.094386 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 00:23:20.094402 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:23:20.094419 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:23:20.094438 kernel: .... node #0, CPUs: #1 Jul 2 00:23:20.094454 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 00:23:20.094471 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 00:23:20.094487 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:23:20.094559 kernel: smpboot: Max logical packages: 1 Jul 2 00:23:20.094577 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jul 2 00:23:20.094592 kernel: devtmpfs: initialized Jul 2 00:23:20.094606 kernel: x86/mm: Memory block size: 128MB Jul 2 00:23:20.094626 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:23:20.094641 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:23:20.094672 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:23:20.094688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:23:20.094704 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:23:20.094720 kernel: audit: type=2000 audit(1719879799.446:1): state=initialized audit_enabled=0 res=1 Jul 2 00:23:20.094735 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:23:20.094750 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:23:20.094765 kernel: cpuidle: using governor menu Jul 2 00:23:20.094784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:23:20.094800 kernel: dca service started, version 1.12.1 Jul 2 00:23:20.094815 kernel: PCI: Using configuration type 1 for base access Jul 2 00:23:20.094831 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 00:23:20.094848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:23:20.094863 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:23:20.094879 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:23:20.094895 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:23:20.094911 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:23:20.094932 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:23:20.094949 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:23:20.094966 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:23:20.094983 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 00:23:20.094998 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:23:20.095012 kernel: ACPI: Interpreter enabled Jul 2 00:23:20.095026 kernel: ACPI: PM: (supports S0 S5) Jul 2 00:23:20.095040 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:23:20.095054 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:23:20.095071 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:23:20.095085 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 00:23:20.095100 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:23:20.095428 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:23:20.095595 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 00:23:20.098951 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 2 00:23:20.098990 kernel: acpiphp: Slot [3] registered Jul 2 00:23:20.099015 kernel: acpiphp: Slot [4] registered Jul 2 00:23:20.099032 kernel: acpiphp: Slot [5] registered Jul 2 00:23:20.099048 kernel: acpiphp: Slot [6] registered Jul 2 00:23:20.099065 kernel: acpiphp: Slot [7] registered Jul 2 00:23:20.099082 kernel: acpiphp: Slot [8] registered Jul 2 00:23:20.099098 kernel: acpiphp: Slot [9] registered Jul 2 00:23:20.099115 kernel: acpiphp: Slot [10] registered Jul 2 00:23:20.099131 kernel: acpiphp: Slot [11] registered Jul 2 00:23:20.099147 kernel: acpiphp: Slot [12] registered Jul 2 00:23:20.099164 kernel: acpiphp: Slot [13] registered Jul 2 00:23:20.099185 kernel: acpiphp: Slot [14] registered Jul 2 00:23:20.099202 kernel: acpiphp: Slot [15] registered Jul 2 00:23:20.099218 kernel: acpiphp: Slot [16] registered Jul 2 00:23:20.099234 kernel: acpiphp: Slot [17] registered Jul 2 00:23:20.099250 kernel: acpiphp: Slot [18] registered Jul 2 00:23:20.099266 kernel: acpiphp: Slot [19] registered Jul 2 00:23:20.099282 kernel: acpiphp: Slot [20] registered Jul 2 00:23:20.099299 kernel: acpiphp: Slot [21] registered Jul 2 00:23:20.099315 kernel: acpiphp: Slot [22] registered Jul 2 00:23:20.099335 kernel: acpiphp: Slot [23] registered Jul 2 00:23:20.099351 kernel: acpiphp: Slot [24] registered Jul 2 00:23:20.099367 kernel: acpiphp: Slot [25] registered Jul 2 00:23:20.099384 kernel: acpiphp: Slot [26] registered Jul 2 00:23:20.099400 kernel: acpiphp: Slot [27] registered Jul 2 00:23:20.099416 kernel: acpiphp: Slot [28] registered Jul 2 00:23:20.099432 kernel: acpiphp: Slot [29] registered Jul 2 00:23:20.099449 kernel: acpiphp: Slot [30] registered Jul 2 00:23:20.099466 kernel: acpiphp: Slot [31] registered Jul 2 00:23:20.099482 kernel: PCI host bridge to bus 0000:00 Jul 2 00:23:20.099677 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:23:20.100204 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:23:20.100343 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:23:20.100484 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 00:23:20.102776 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:23:20.103047 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:23:20.103241 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:23:20.103550 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jul 2 00:23:20.105810 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 00:23:20.105956 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 00:23:20.106088 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 2 00:23:20.106293 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 2 00:23:20.106431 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 2 00:23:20.106850 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 2 00:23:20.106987 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 2 00:23:20.107119 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 2 00:23:20.107257 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jul 2 00:23:20.107389 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jul 2 00:23:20.107593 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 00:23:20.110839 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:23:20.111151 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 00:23:20.111307 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jul 2 00:23:20.111466 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 00:23:20.111617 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jul 2 00:23:20.111639 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:23:20.113700 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:23:20.113721 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:23:20.113741 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:23:20.113756 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:23:20.113770 kernel: iommu: Default domain type: Translated Jul 2 00:23:20.113784 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:23:20.113798 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:23:20.113813 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:23:20.113828 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 00:23:20.113842 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jul 2 00:23:20.114010 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 2 00:23:20.114147 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 2 00:23:20.114280 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:23:20.114300 kernel: vgaarb: loaded Jul 2 00:23:20.114315 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 2 00:23:20.114331 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 2 00:23:20.114346 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:23:20.114361 kernel: VFS: Disk quotas dquot_6.6.0 Jul 
2 00:23:20.114377 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:23:20.114588 kernel: pnp: PnP ACPI init Jul 2 00:23:20.114604 kernel: pnp: PnP ACPI: found 5 devices Jul 2 00:23:20.114620 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:23:20.114635 kernel: NET: Registered PF_INET protocol family Jul 2 00:23:20.116678 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:23:20.116710 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 00:23:20.116728 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:23:20.116747 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:23:20.116765 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 00:23:20.116791 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 00:23:20.116810 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:23:20.116829 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:23:20.116844 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:23:20.116858 kernel: NET: Registered PF_XDP protocol family Jul 2 00:23:20.117108 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 00:23:20.117245 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:23:20.117373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:23:20.117496 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 00:23:20.117793 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 00:23:20.117820 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:23:20.117834 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 00:23:20.117849 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 2 00:23:20.117865 kernel: clocksource: Switched to clocksource tsc Jul 2 00:23:20.117881 kernel: Initialise system trusted keyrings Jul 2 00:23:20.117895 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 00:23:20.118007 kernel: Key type asymmetric registered Jul 2 00:23:20.118025 kernel: Asymmetric key parser 'x509' registered Jul 2 00:23:20.118039 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:23:20.118190 kernel: io scheduler mq-deadline registered Jul 2 00:23:20.118206 kernel: io scheduler kyber registered Jul 2 00:23:20.118220 kernel: io scheduler bfq registered Jul 2 00:23:20.118235 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:23:20.118248 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:23:20.118263 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:23:20.118276 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:23:20.118296 kernel: i8042: Warning: Keylock active Jul 2 00:23:20.118310 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:23:20.118323 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:23:20.118489 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 00:23:20.119992 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 00:23:20.120147 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T00:23:19 UTC (1719879799) Jul 2 00:23:20.120331 kernel: rtc_cmos 
00:00: alarms up to one day, 114 bytes nvram Jul 2 00:23:20.120356 kernel: intel_pstate: CPU model not supported Jul 2 00:23:20.120371 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:23:20.120385 kernel: Segment Routing with IPv6 Jul 2 00:23:20.120399 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:23:20.120413 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:23:20.120427 kernel: Key type dns_resolver registered Jul 2 00:23:20.120440 kernel: IPI shorthand broadcast: enabled Jul 2 00:23:20.120453 kernel: sched_clock: Marking stable (679003503, 238919018)->(1011371980, -93449459) Jul 2 00:23:20.120467 kernel: registered taskstats version 1 Jul 2 00:23:20.120481 kernel: Loading compiled-in X.509 certificates Jul 2 00:23:20.120498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:23:20.120512 kernel: Key type .fscrypt registered Jul 2 00:23:20.120525 kernel: Key type fscrypt-provisioning registered Jul 2 00:23:20.120539 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:23:20.120553 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:23:20.120567 kernel: ima: No architecture policies found Jul 2 00:23:20.120634 kernel: clk: Disabling unused clocks Jul 2 00:23:20.123705 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:23:20.123731 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:23:20.123749 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:23:20.123766 kernel: Run /init as init process Jul 2 00:23:20.123783 kernel: with arguments: Jul 2 00:23:20.123800 kernel: /init Jul 2 00:23:20.123816 kernel: with environment: Jul 2 00:23:20.123831 kernel: HOME=/ Jul 2 00:23:20.123847 kernel: TERM=linux Jul 2 00:23:20.123862 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:23:20.123883 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:23:20.123909 systemd[1]: Detected virtualization amazon. Jul 2 00:23:20.123948 systemd[1]: Detected architecture x86-64. Jul 2 00:23:20.123965 systemd[1]: Running in initrd. Jul 2 00:23:20.123982 systemd[1]: No hostname configured, using default hostname. Jul 2 00:23:20.124003 systemd[1]: Hostname set to . Jul 2 00:23:20.124021 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:23:20.124039 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:23:20.124056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:20.124074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:20.124093 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:23:20.124111 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:23:20.124130 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:23:20.124150 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 2 00:23:20.124171 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:23:20.124190 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:23:20.124207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:20.124225 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:20.124243 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:20.124262 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:23:20.124283 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:23:20.124300 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:20.124407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:23:20.124427 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:23:20.124445 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:23:20.124464 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:23:20.124481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:20.124499 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:20.124566 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:20.124678 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:20.124698 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 00:23:20.124716 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 00:23:20.124734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:23:20.124752 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:23:20.124769 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:23:20.124788 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:23:20.124811 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:23:20.124830 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:20.124887 systemd-journald[178]: Collecting audit messages is disabled. Jul 2 00:23:20.124932 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:23:20.124951 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:20.124969 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:23:20.124989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:23:20.125011 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:23:20.125034 systemd-journald[178]: Journal started Jul 2 00:23:20.125069 systemd-journald[178]: Runtime Journal (/run/log/journal/ec2dbc15341145b8900e358c5c1ce33e) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:23:20.103789 systemd-modules-load[179]: Inserted module 'overlay' Jul 2 00:23:20.135695 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:20.191683 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 2 00:23:20.195607 systemd-modules-load[179]: Inserted module 'br_netfilter' Jul 2 00:23:20.278987 kernel: Bridge firewalling registered Jul 2 00:23:20.283515 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:23:20.284030 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:20.285863 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:20.289047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:20.298907 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:20.322184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:23:20.328334 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:20.360949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:20.368881 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:20.369332 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:20.381024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:23:20.385518 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:20.404126 dracut-cmdline[210]: dracut-dracut-053 Jul 2 00:23:20.409570 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:23:20.459326 systemd-resolved[211]: Positive Trust Anchors: Jul 2 00:23:20.459348 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:20.459461 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:20.474803 systemd-resolved[211]: Defaulting to hostname 'linux'. Jul 2 00:23:20.478032 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:20.479433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:20.534687 kernel: SCSI subsystem initialized Jul 2 00:23:20.556069 kernel: Loading iSCSI transport class v2.0-870. Jul 2 00:23:20.573677 kernel: iscsi: registered transport (tcp) Jul 2 00:23:20.603768 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:23:20.603847 kernel: QLogic iSCSI HBA Driver Jul 2 00:23:20.672469 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:20.682935 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Jul 2 00:23:20.734813 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:23:20.734893 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:23:20.734915 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:23:20.793698 kernel: raid6: avx512x4 gen() 12842 MB/s Jul 2 00:23:20.810708 kernel: raid6: avx512x2 gen() 13794 MB/s Jul 2 00:23:20.827710 kernel: raid6: avx512x1 gen() 15805 MB/s Jul 2 00:23:20.844710 kernel: raid6: avx2x4 gen() 14937 MB/s Jul 2 00:23:20.861710 kernel: raid6: avx2x2 gen() 14195 MB/s Jul 2 00:23:20.878718 kernel: raid6: avx2x1 gen() 10164 MB/s Jul 2 00:23:20.878804 kernel: raid6: using algorithm avx512x1 gen() 15805 MB/s Jul 2 00:23:20.896750 kernel: raid6: .... xor() 11004 MB/s, rmw enabled Jul 2 00:23:20.896838 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:23:20.947675 kernel: xor: automatically using best checksumming function avx Jul 2 00:23:21.222707 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:23:21.244282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:21.251886 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:21.278626 systemd-udevd[394]: Using default interface naming scheme 'v255'. Jul 2 00:23:21.287201 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:21.297851 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:23:21.342768 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Jul 2 00:23:21.396280 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:23:21.404132 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:21.526789 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:21.537998 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:23:21.598682 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:23:21.600376 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:21.607796 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:21.610812 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:21.618872 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:23:21.642473 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:23:21.699000 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 00:23:21.715409 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 00:23:21.715628 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. Jul 2 00:23:21.715830 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:23:21.715853 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:01:2a:6b:f5:37 Jul 2 00:23:21.720800 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:23:21.735156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:21.735751 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:21.740560 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jul 2 00:23:21.744460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:21.744998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:21.747015 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:21.765002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:21.773079 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:23:21.773153 kernel: AES CTR mode by8 optimization enabled Jul 2 00:23:21.775684 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 00:23:21.775930 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 00:23:21.791690 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 00:23:21.797698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:23:21.797764 kernel: GPT:9289727 != 16777215 Jul 2 00:23:21.797786 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:23:21.797812 kernel: GPT:9289727 != 16777215 Jul 2 00:23:21.797831 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:23:21.797896 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:23:21.961377 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (449) Jul 2 00:23:21.962998 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:21.973853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:23:21.988680 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (454) Jul 2 00:23:22.008104 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 2 00:23:22.034755 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:22.102939 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 00:23:22.111982 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 00:23:22.121191 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 00:23:22.121361 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 00:23:22.133487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:23:22.147378 disk-uuid[630]: Primary Header is updated. Jul 2 00:23:22.147378 disk-uuid[630]: Secondary Entries is updated. Jul 2 00:23:22.147378 disk-uuid[630]: Secondary Header is updated. Jul 2 00:23:22.154696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:23:22.162689 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:23:22.173687 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:23:23.173785 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:23:23.174013 disk-uuid[631]: The operation has completed successfully. Jul 2 00:23:23.362424 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:23:23.362549 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:23:23.393866 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:23:23.401587 sh[974]: Success Jul 2 00:23:23.428785 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 00:23:23.557096 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
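The "GPT:9289727 != 16777215" warning above means the backup GPT header sits where the original disk image ended, while the EBS volume is larger. A small sketch of that arithmetic, assuming 512-byte logical sectors:

```python
# Hedged sketch of the numbers behind "GPT:9289727 != 16777215" in the log:
# the backup header is at the last LBA of the image the partition table was
# built for, while the attached volume has more sectors.
SECTOR = 512
alt_header_lba = 9_289_727   # where the primary header says the backup header is
last_lba = 16_777_215        # actual last addressable sector of the volume

def gib(lba: int) -> float:
    return (lba + 1) * SECTOR / 2**30

print(f"image size : {gib(alt_header_lba):.2f} GiB")
print(f"volume size: {gib(last_lba):.2f} GiB")
# Tools such as GNU Parted or sgdisk can relocate the backup header to the end
# of the disk, which is what the kernel message suggests.
```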
Jul 2 00:23:23.572762 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:23:23.577970 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:23:23.611006 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:23:23.611070 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:23.611090 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:23:23.611764 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:23:23.612749 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:23:23.727679 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 00:23:23.759271 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:23:23.763033 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:23:23.772894 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:23:23.784960 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:23:23.812373 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:23.812452 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:23.812477 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:23:23.818708 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:23:23.832628 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:23:23.834234 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:23.855276 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:23:23.864870 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:23:23.907444 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:23.913891 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:23.946587 systemd-networkd[1166]: lo: Link UP Jul 2 00:23:23.946600 systemd-networkd[1166]: lo: Gained carrier Jul 2 00:23:23.949803 systemd-networkd[1166]: Enumeration completed Jul 2 00:23:23.950085 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:23.951508 systemd[1]: Reached target network.target - Network. Jul 2 00:23:23.966160 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:23.966164 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:23:23.978381 systemd-networkd[1166]: eth0: Link UP Jul 2 00:23:23.978391 systemd-networkd[1166]: eth0: Gained carrier Jul 2 00:23:23.978438 systemd-networkd[1166]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
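verity-setup.service above assembles /dev/mapper/usr as a dm-verity target whose expected root hash came from verity.usrhash on the kernel command line, using the sha256-avx2 implementation reported earlier. As a rough conceptual sketch only, not the real dm-verity on-disk format (which adds a salt and a multi-level hash tree), this shows the underlying idea of block-wise sha256 verification:

```python
import hashlib

BLOCK = 4096  # dm-verity commonly uses 4 KiB data blocks

def block_hashes(image: bytes) -> list[bytes]:
    """Hash every block; dm-verity builds a tree over hashes like these,
    and the tree's root must match the hash passed on the command line."""
    return [hashlib.sha256(image[i:i + BLOCK]).digest()
            for i in range(0, len(image), BLOCK)]

# Toy usage: flipping a single byte changes that block's hash, which would
# propagate up the tree and no longer match the expected root hash.
image = bytes(BLOCK * 4)
tampered = bytearray(image)
tampered[5000] ^= 0xFF
print(block_hashes(image)[1].hex()[:16], block_hashes(bytes(tampered))[1].hex()[:16])
```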
Jul 2 00:23:23.995824 systemd-networkd[1166]: eth0: DHCPv4 address 172.31.19.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:23:24.363956 ignition[1115]: Ignition 2.18.0 Jul 2 00:23:24.363973 ignition[1115]: Stage: fetch-offline Jul 2 00:23:24.364322 ignition[1115]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:24.364342 ignition[1115]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:24.364677 ignition[1115]: Ignition finished successfully Jul 2 00:23:24.371676 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:23:24.382560 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 00:23:24.430758 ignition[1175]: Ignition 2.18.0 Jul 2 00:23:24.430788 ignition[1175]: Stage: fetch Jul 2 00:23:24.431395 ignition[1175]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:24.431467 ignition[1175]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:24.432079 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:24.443012 ignition[1175]: PUT result: OK Jul 2 00:23:24.445481 ignition[1175]: parsed url from cmdline: "" Jul 2 00:23:24.445579 ignition[1175]: no config URL provided Jul 2 00:23:24.445715 ignition[1175]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:23:24.445732 ignition[1175]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:23:24.445756 ignition[1175]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:24.448663 ignition[1175]: PUT result: OK Jul 2 00:23:24.449898 ignition[1175]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 00:23:24.454532 ignition[1175]: GET result: OK Jul 2 00:23:24.455307 ignition[1175]: parsing config with SHA512: 4488c44e2fa3d90c042d21749d572232e212d9a35b71324e416f9855f69a765c6ed463a2ffa7e8edcd1a4b8cde9837c59550b2d83eefee95d2f720ee0c2e418f Jul 2 00:23:24.460262 unknown[1175]: fetched base config from "system" Jul 2 00:23:24.460278 unknown[1175]: fetched base config from "system" Jul 2 00:23:24.460288 unknown[1175]: fetched user config from "aws" Jul 2 00:23:24.468292 ignition[1175]: fetch: fetch complete Jul 2 00:23:24.468301 ignition[1175]: fetch: fetch passed Jul 2 00:23:24.469713 ignition[1175]: Ignition finished successfully Jul 2 00:23:24.478123 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:23:24.487214 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 00:23:24.512963 ignition[1182]: Ignition 2.18.0 Jul 2 00:23:24.512978 ignition[1182]: Stage: kargs Jul 2 00:23:24.514038 ignition[1182]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:24.514054 ignition[1182]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:24.514176 ignition[1182]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:24.520284 ignition[1182]: PUT result: OK Jul 2 00:23:24.523937 ignition[1182]: kargs: kargs passed Jul 2 00:23:24.524442 ignition[1182]: Ignition finished successfully Jul 2 00:23:24.529295 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:23:24.539907 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
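The Ignition fetch stage above first PUTs to http://169.254.169.254/latest/api/token and then GETs http://169.254.169.254/2019-10-01/user-data with the returned token, i.e. the IMDSv2 flow. A minimal sketch of that sequence; the X-aws-ec2-metadata-token header names are the standard IMDSv2 ones and do not appear in the log itself:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def fetch_user_data() -> bytes:
    # Step 1: PUT for a session token, as in 'PUT .../latest/api/token: attempt #1'.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    )
    token = urllib.request.urlopen(req, timeout=5).read().decode()

    # Step 2: GET the user data with the token, as in 'GET .../2019-10-01/user-data'.
    req = urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req, timeout=5).read()

# Only reachable from inside an EC2 instance; elsewhere the link-local address times out.
```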
Jul 2 00:23:24.603127 ignition[1189]: Ignition 2.18.0 Jul 2 00:23:24.603144 ignition[1189]: Stage: disks Jul 2 00:23:24.604045 ignition[1189]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:24.604063 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:24.604196 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:24.608284 ignition[1189]: PUT result: OK Jul 2 00:23:24.614230 ignition[1189]: disks: disks passed Jul 2 00:23:24.614316 ignition[1189]: Ignition finished successfully Jul 2 00:23:24.616432 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:23:24.621930 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:24.623814 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:23:24.628345 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:24.630501 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:24.634501 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:24.651248 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:23:24.712966 systemd-fsck[1198]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:23:24.723869 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:23:24.732876 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:23:24.921668 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:23:24.922450 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:23:24.923257 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:24.942847 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:24.947066 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:23:24.949408 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:23:24.949482 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:23:24.949532 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:24.960681 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1217) Jul 2 00:23:24.963250 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:24.963311 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:24.963326 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:23:24.969223 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:23:24.973716 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:23:24.976878 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:23:24.981473 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:23:25.050900 systemd-networkd[1166]: eth0: Gained IPv6LL Jul 2 00:23:25.439424 initrd-setup-root[1241]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:23:25.471589 initrd-setup-root[1248]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:23:25.478521 initrd-setup-root[1255]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:23:25.485673 initrd-setup-root[1262]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:23:25.769510 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:25.776776 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:23:25.783030 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:23:25.794027 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:23:25.795109 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:25.836687 ignition[1335]: INFO : Ignition 2.18.0 Jul 2 00:23:25.836687 ignition[1335]: INFO : Stage: mount Jul 2 00:23:25.839971 ignition[1335]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:25.839971 ignition[1335]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:25.839971 ignition[1335]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:25.839971 ignition[1335]: INFO : PUT result: OK Jul 2 00:23:25.846877 ignition[1335]: INFO : mount: mount passed Jul 2 00:23:25.847716 ignition[1335]: INFO : Ignition finished successfully Jul 2 00:23:25.850208 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:23:25.850528 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:23:25.862796 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:23:25.929347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:23:25.961683 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1347) Jul 2 00:23:25.965933 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:23:25.966012 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:23:25.966047 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:23:25.976677 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:23:25.977143 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:23:26.004907 ignition[1364]: INFO : Ignition 2.18.0 Jul 2 00:23:26.004907 ignition[1364]: INFO : Stage: files Jul 2 00:23:26.006994 ignition[1364]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:26.006994 ignition[1364]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:26.009702 ignition[1364]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:26.011970 ignition[1364]: INFO : PUT result: OK Jul 2 00:23:26.015988 ignition[1364]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:23:26.026989 ignition[1364]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:23:26.026989 ignition[1364]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:23:26.056677 ignition[1364]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:23:26.058773 ignition[1364]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:23:26.060799 ignition[1364]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:23:26.060588 unknown[1364]: wrote ssh authorized keys file for user: core Jul 2 00:23:26.065565 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:23:26.068506 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 00:23:26.441663 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jul 2 00:23:27.033169 ignition[1364]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:23:27.038536 ignition[1364]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:27.038536 ignition[1364]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:23:27.038536 ignition[1364]: INFO : files: files passed Jul 2 00:23:27.038536 ignition[1364]: INFO : Ignition finished successfully Jul 2 00:23:27.043487 systemd[1]: Finished ignition-files.service - Ignition (files). 
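[Editor's note] The Ignition "files" stage above downloads a Kubernetes system-extension image and points /etc/extensions/kubernetes.raw at it. The following is a hedged, illustrative Python sketch of those two operations only; the URL and paths are taken from the log lines above, but running it against "/" rather than the initramfs "/sysroot" is my assumption, and Ignition itself remains the authoritative implementation.

    import os
    import urllib.request

    RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
               "latest/kubernetes-v1.28.7-x86-64.raw")
    RAW_PATH = "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw"
    LINK_PATH = "/etc/extensions/kubernetes.raw"

    # Download the sysext image (the GET the log reports as "attempt #1").
    os.makedirs(os.path.dirname(RAW_PATH), exist_ok=True)
    urllib.request.urlretrieve(RAW_URL, RAW_PATH)

    # Create the /etc/extensions link so systemd-sysext can merge the image later.
    os.makedirs(os.path.dirname(LINK_PATH), exist_ok=True)
    if not os.path.lexists(LINK_PATH):
        os.symlink(RAW_PATH, LINK_PATH)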
Jul 2 00:23:27.058551 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:23:27.063543 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:23:27.068213 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:23:27.069644 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:23:27.093280 initrd-setup-root-after-ignition[1393]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:27.093280 initrd-setup-root-after-ignition[1393]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:27.097336 initrd-setup-root-after-ignition[1397]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:23:27.102125 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:27.102564 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:23:27.110844 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:23:27.183558 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:23:27.184735 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:23:27.187419 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:23:27.190275 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:23:27.192379 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:23:27.205915 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:23:27.233095 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:27.240403 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:23:27.275120 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:27.278325 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:27.281012 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:23:27.282945 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:23:27.283134 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:23:27.292675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:23:27.294065 systemd[1]: Stopped target basic.target - Basic System. Jul 2 00:23:27.297662 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:23:27.299552 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:23:27.302022 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:23:27.304606 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:23:27.308097 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:23:27.310514 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:23:27.314765 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:23:27.316248 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:23:27.319704 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:23:27.321465 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jul 2 00:23:27.324729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:27.327390 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:27.336831 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:23:27.346203 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:27.348228 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:23:27.348413 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:23:27.354194 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:23:27.354602 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:23:27.357084 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:23:27.357267 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:23:27.368166 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:23:27.369547 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:23:27.369865 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:27.382503 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:23:27.387859 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:23:27.388156 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:27.389885 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:23:27.394766 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:23:27.406944 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:23:27.407070 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:23:27.422025 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:23:27.427585 ignition[1417]: INFO : Ignition 2.18.0 Jul 2 00:23:27.427585 ignition[1417]: INFO : Stage: umount Jul 2 00:23:27.429783 ignition[1417]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:23:27.429783 ignition[1417]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:23:27.429783 ignition[1417]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:23:27.429783 ignition[1417]: INFO : PUT result: OK Jul 2 00:23:27.436677 ignition[1417]: INFO : umount: umount passed Jul 2 00:23:27.437677 ignition[1417]: INFO : Ignition finished successfully Jul 2 00:23:27.440515 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:23:27.440624 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:23:27.442486 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:23:27.442548 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:23:27.444344 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:23:27.444409 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:23:27.447408 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:23:27.447475 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:23:27.449777 systemd[1]: Stopped target network.target - Network. Jul 2 00:23:27.451853 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:23:27.451913 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jul 2 00:23:27.453576 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:23:27.455371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:23:27.456417 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:27.459008 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:23:27.459937 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:23:27.469703 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:23:27.469771 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:23:27.473137 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:23:27.473204 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:23:27.478731 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:23:27.479863 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:23:27.483532 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:23:27.484415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:23:27.486776 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:23:27.488262 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:23:27.489714 systemd-networkd[1166]: eth0: DHCPv6 lease lost Jul 2 00:23:27.492819 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:23:27.492924 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:23:27.511740 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:23:27.511824 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:27.552161 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:23:27.558183 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:23:27.558290 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:23:27.564368 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:27.569519 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:23:27.569701 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:23:27.587363 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:23:27.587529 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:27.591872 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:23:27.591947 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:27.594634 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:23:27.594908 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:27.603161 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:23:27.603312 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:27.606181 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:23:27.606251 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:23:27.611169 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:23:27.611216 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 2 00:23:27.612846 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:23:27.612964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:23:27.616440 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:23:27.616726 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:23:27.618402 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:23:27.618457 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:23:27.633861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:23:27.636440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:23:27.636592 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:27.643534 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:23:27.643618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:27.648557 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:23:27.649818 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:23:27.653143 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:23:27.654845 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:23:27.869307 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:23:27.869447 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:23:27.873139 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:23:27.874465 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:23:27.874539 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:23:27.885858 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:23:27.904756 systemd[1]: Switching root. Jul 2 00:23:27.959155 systemd-journald[178]: Journal stopped Jul 2 00:23:30.454206 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jul 2 00:23:30.454292 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:23:30.454319 kernel: SELinux: policy capability open_perms=1 Jul 2 00:23:30.454343 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:23:30.454361 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:23:30.454378 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:23:30.454578 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:23:30.454602 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:23:30.454638 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:23:30.454690 kernel: audit: type=1403 audit(1719879808.717:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:23:30.454714 systemd[1]: Successfully loaded SELinux policy in 65.949ms. Jul 2 00:23:30.454740 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 35.260ms. Jul 2 00:23:30.454761 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:23:30.454779 systemd[1]: Detected virtualization amazon. 
Jul 2 00:23:30.454798 systemd[1]: Detected architecture x86-64. Jul 2 00:23:30.454821 systemd[1]: Detected first boot. Jul 2 00:23:30.454840 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:23:30.454858 zram_generator::config[1460]: No configuration found. Jul 2 00:23:30.454878 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:23:30.454896 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:23:30.454920 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 00:23:30.454939 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:23:30.454960 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:23:30.454979 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:23:30.454998 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:23:30.455016 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:23:30.455035 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:23:30.455054 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:23:30.455076 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:23:30.455095 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:23:30.455120 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:23:30.455139 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:23:30.455158 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:23:30.455177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:23:30.455196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:23:30.455215 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:23:30.455234 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 2 00:23:30.455256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:23:30.455274 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 00:23:30.455405 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 00:23:30.455431 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 00:23:30.455450 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:23:30.455470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:23:30.455496 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:23:30.455515 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:23:30.455536 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:23:30.455555 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:23:30.455621 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:23:30.455642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:23:30.456561 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 2 00:23:30.456588 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:23:30.456646 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:23:30.456813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:23:30.456837 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:23:30.456862 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:23:30.456881 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:30.456901 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:23:30.456923 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:23:30.456943 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:23:30.456966 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:23:30.456988 systemd[1]: Reached target machines.target - Containers. Jul 2 00:23:30.457007 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:23:30.457028 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:30.457054 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:23:30.457075 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:23:30.457099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:30.457122 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:30.457146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:30.457169 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:23:30.457478 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:30.457509 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:23:30.457535 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 00:23:30.457700 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 00:23:30.457723 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 00:23:30.457742 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 00:23:30.457762 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:23:30.457781 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:23:30.457800 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:23:30.457883 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:23:30.457907 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:23:30.457933 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 00:23:30.457952 systemd[1]: Stopped verity-setup.service. Jul 2 00:23:30.457971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 00:23:30.457990 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:23:30.458010 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:23:30.458029 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:23:30.458051 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:23:30.458070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:23:30.458089 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:23:30.458107 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:23:30.458127 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:23:30.458179 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:23:30.458199 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:30.458222 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:23:30.458241 kernel: fuse: init (API version 7.39) Jul 2 00:23:30.458261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:30.458279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:30.458298 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:23:30.458317 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:23:30.458335 kernel: loop: module loaded Jul 2 00:23:30.458460 systemd-journald[1531]: Collecting audit messages is disabled. Jul 2 00:23:30.458508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:23:30.458534 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:30.458555 systemd-journald[1531]: Journal started Jul 2 00:23:30.458593 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2dbc15341145b8900e358c5c1ce33e) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:23:29.923793 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:23:29.952866 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 00:23:29.953299 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 00:23:30.461440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:30.464843 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:23:30.469048 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:23:30.473765 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:23:30.498394 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:23:30.522556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:23:30.532745 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 00:23:30.538948 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:23:30.539001 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:23:30.543053 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:23:30.580567 kernel: ACPI: bus type drm_connector registered Jul 2 00:23:30.572808 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jul 2 00:23:30.589464 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:23:30.591027 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:30.595872 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:23:30.600450 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:23:30.601853 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:30.611907 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:23:30.613370 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:30.622881 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:23:30.626830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:23:30.634928 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:30.635136 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:30.636969 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:23:30.640764 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:23:30.642943 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:23:30.672074 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2dbc15341145b8900e358c5c1ce33e is 132.354ms for 940 entries. Jul 2 00:23:30.672074 systemd-journald[1531]: System Journal (/var/log/journal/ec2dbc15341145b8900e358c5c1ce33e) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:23:30.813854 systemd-journald[1531]: Received client request to flush runtime journal. Jul 2 00:23:30.814267 kernel: loop0: detected capacity change from 0 to 139904 Jul 2 00:23:30.814325 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:23:30.696852 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:23:30.698786 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:23:30.709118 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:23:30.759702 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:23:30.786824 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:23:30.800841 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:23:30.827725 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:23:30.860671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:23:30.881102 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:23:30.910600 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:23:30.911687 udevadm[1599]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 00:23:30.913898 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jul 2 00:23:30.968689 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:23:30.991777 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:23:30.997847 kernel: loop1: detected capacity change from 0 to 80568 Jul 2 00:23:30.998919 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:23:31.078926 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jul 2 00:23:31.080593 systemd-tmpfiles[1604]: ACLs are not supported, ignoring. Jul 2 00:23:31.093308 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:23:31.141672 kernel: loop2: detected capacity change from 0 to 60984 Jul 2 00:23:31.303682 kernel: loop3: detected capacity change from 0 to 209816 Jul 2 00:23:31.371198 kernel: loop4: detected capacity change from 0 to 139904 Jul 2 00:23:31.417683 kernel: loop5: detected capacity change from 0 to 80568 Jul 2 00:23:31.441680 kernel: loop6: detected capacity change from 0 to 60984 Jul 2 00:23:31.471686 kernel: loop7: detected capacity change from 0 to 209816 Jul 2 00:23:31.516058 (sd-merge)[1610]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 00:23:31.516910 (sd-merge)[1610]: Merged extensions into '/usr'. Jul 2 00:23:31.523910 systemd[1]: Reloading requested from client PID 1576 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:23:31.523941 systemd[1]: Reloading... Jul 2 00:23:31.647681 zram_generator::config[1632]: No configuration found. Jul 2 00:23:31.986607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:32.089191 systemd[1]: Reloading finished in 564 ms. Jul 2 00:23:32.125390 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:23:32.145249 systemd[1]: Starting ensure-sysext.service... Jul 2 00:23:32.167663 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:23:32.220515 systemd[1]: Reloading requested from client PID 1682 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:23:32.222435 systemd[1]: Reloading... Jul 2 00:23:32.288449 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:23:32.288937 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:23:32.290645 systemd-tmpfiles[1683]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:23:32.292229 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jul 2 00:23:32.292459 systemd-tmpfiles[1683]: ACLs are not supported, ignoring. Jul 2 00:23:32.305987 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:23:32.306804 systemd-tmpfiles[1683]: Skipping /boot Jul 2 00:23:32.344764 zram_generator::config[1706]: No configuration found. Jul 2 00:23:32.348113 systemd-tmpfiles[1683]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:23:32.348844 systemd-tmpfiles[1683]: Skipping /boot Jul 2 00:23:32.579161 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:23:32.624681 ldconfig[1564]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:23:32.659315 systemd[1]: Reloading finished in 436 ms. Jul 2 00:23:32.677245 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:23:32.678844 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:23:32.686269 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:23:32.704881 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:32.716257 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:23:32.728037 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:23:32.743091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:23:32.747862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:23:32.754180 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:23:32.787164 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:32.787845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:32.801015 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:23:32.808599 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:23:32.837267 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:23:32.842205 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:32.857785 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:23:32.858999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:32.880350 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:32.882734 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:23:32.900773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:23:32.903403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:23:32.904029 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:23:32.911421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:23:32.916027 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:23:32.928393 systemd[1]: Finished ensure-sysext.service. Jul 2 00:23:32.933749 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:23:32.934095 systemd-udevd[1767]: Using default interface naming scheme 'v255'. Jul 2 00:23:32.943113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:23:32.943344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Jul 2 00:23:32.962958 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:23:32.964759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:23:32.964986 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:23:32.966948 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:23:32.986445 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:23:32.986869 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:23:33.003751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:23:33.004088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:23:33.009509 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:23:33.031532 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:23:33.035209 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:23:33.037782 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:23:33.038531 augenrules[1795]: No rules Jul 2 00:23:33.043143 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:33.047880 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:23:33.071681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:23:33.082884 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:23:33.266219 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1809) Jul 2 00:23:33.268502 (udev-worker)[1817]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:23:33.287182 systemd-resolved[1766]: Positive Trust Anchors: Jul 2 00:23:33.287208 systemd-resolved[1766]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:23:33.287258 systemd-resolved[1766]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:23:33.304122 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 00:23:33.305294 systemd-networkd[1812]: lo: Link UP Jul 2 00:23:33.305309 systemd-networkd[1812]: lo: Gained carrier Jul 2 00:23:33.307017 systemd-resolved[1766]: Defaulting to hostname 'linux'. Jul 2 00:23:33.309307 systemd-networkd[1812]: Enumeration completed Jul 2 00:23:33.309427 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:23:33.313183 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 00:23:33.313188 systemd-networkd[1812]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:23:33.318359 systemd-networkd[1812]: eth0: Link UP Jul 2 00:23:33.318619 systemd-networkd[1812]: eth0: Gained carrier Jul 2 00:23:33.318688 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:33.322082 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:23:33.323416 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:23:33.325688 systemd[1]: Reached target network.target - Network. Jul 2 00:23:33.327771 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:23:33.329748 systemd-networkd[1812]: eth0: DHCPv4 address 172.31.19.189/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:23:33.346989 systemd-networkd[1812]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:23:33.441418 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jul 2 00:23:33.451226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 2 00:23:33.454390 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:23:33.454482 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 2 00:23:33.456697 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 00:23:33.481685 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1815) Jul 2 00:23:33.519588 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5 Jul 2 00:23:33.569043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:23:33.612674 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:23:33.741773 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 00:23:33.752891 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:23:33.873687 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:23:33.875682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:23:33.885032 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 00:23:33.900588 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:23:33.919793 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:33.957722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:23:33.959362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:23:33.960632 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:23:33.962523 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:23:33.964329 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:23:33.969765 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:23:33.971945 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
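[Editor's note] systemd-networkd reports a DHCPv4 lease of 172.31.19.189/20 with gateway 172.31.16.1. A small illustrative check with the Python standard library, using only the values from that log line, confirms the gateway is on-link in the same /20:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.19.189/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(gateway in iface.network)     # True: gateway is on-link
    print(iface.network.num_addresses)  # 4096 addresses in the /20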
Jul 2 00:23:33.976351 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:23:33.978705 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:23:33.978753 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:23:33.980188 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:23:33.983527 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:23:33.988803 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:23:34.003498 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:23:34.007236 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:23:34.009918 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:23:34.012736 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:23:34.013780 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:23:34.014936 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:34.014966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:23:34.017856 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 00:23:34.022900 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:23:34.026827 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:23:34.032797 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:23:34.037794 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:23:34.041762 lvm[1933]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:23:34.039880 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:23:34.047945 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:23:34.058039 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 00:23:34.065015 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 00:23:34.079781 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:23:34.105317 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:23:34.129734 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:23:34.144157 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:23:34.146585 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:23:34.203682 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:23:34.215083 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:23:34.222397 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:23:34.234098 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:23:34.234568 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 00:23:34.255148 extend-filesystems[1938]: Found loop4 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found loop5 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found loop6 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found loop7 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found nvme0n1 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found nvme0n1p1 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found nvme0n1p2 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found nvme0n1p3 Jul 2 00:23:34.255148 extend-filesystems[1938]: Found usr Jul 2 00:23:34.255148 extend-filesystems[1938]: Found nvme0n1p4 Jul 2 00:23:34.291835 extend-filesystems[1938]: Found nvme0n1p6 Jul 2 00:23:34.291835 extend-filesystems[1938]: Found nvme0n1p7 Jul 2 00:23:34.291835 extend-filesystems[1938]: Found nvme0n1p9 Jul 2 00:23:34.291835 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9 Jul 2 00:23:34.274515 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:23:34.309203 jq[1937]: false Jul 2 00:23:34.275705 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:23:34.314625 jq[1950]: true Jul 2 00:23:34.359506 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9 Jul 2 00:23:34.360946 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:23:34.361202 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:23:34.375906 extend-filesystems[1980]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:23:34.380891 update_engine[1946]: I0702 00:23:34.380568 1946 main.cc:92] Flatcar Update Engine starting Jul 2 00:23:34.382573 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:23:34.387732 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: ---------------------------------------------------- Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: corporation. Support and training for ntp-4 are Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: available at https://www.nwtime.org/support Jul 2 00:23:34.391222 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: ---------------------------------------------------- Jul 2 00:23:34.389131 ntpd[1940]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:23:34.389145 ntpd[1940]: ---------------------------------------------------- Jul 2 00:23:34.389207 ntpd[1940]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:23:34.389220 ntpd[1940]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:23:34.389230 ntpd[1940]: corporation. 
Support and training for ntp-4 are Jul 2 00:23:34.389240 ntpd[1940]: available at https://www.nwtime.org/support Jul 2 00:23:34.389250 ntpd[1940]: ---------------------------------------------------- Jul 2 00:23:34.407058 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: proto: precision = 0.067 usec (-24) Jul 2 00:23:34.405994 ntpd[1940]: proto: precision = 0.067 usec (-24) Jul 2 00:23:34.412082 ntpd[1940]: basedate set to 2024-06-19 Jul 2 00:23:34.413111 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: basedate set to 2024-06-19 Jul 2 00:23:34.413111 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: gps base set to 2024-06-23 (week 2320) Jul 2 00:23:34.412769 ntpd[1940]: gps base set to 2024-06-23 (week 2320) Jul 2 00:23:34.426349 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 00:23:34.426627 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:23:34.426627 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:23:34.426627 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:23:34.426627 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listen normally on 3 eth0 172.31.19.189:123 Jul 2 00:23:34.426627 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jul 2 00:23:34.425535 ntpd[1940]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:23:34.425587 ntpd[1940]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:23:34.425863 ntpd[1940]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:23:34.425867 dbus-daemon[1936]: [system] SELinux support is enabled Jul 2 00:23:34.426078 ntpd[1940]: Listen normally on 3 eth0 172.31.19.189:123 Jul 2 00:23:34.426140 ntpd[1940]: Listen normally on 4 lo [::1]:123 Jul 2 00:23:34.432564 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:23:34.438569 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: bind(21) AF_INET6 fe80::401:2aff:fe6b:f537%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 00:23:34.438569 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: unable to create socket on eth0 (5) for fe80::401:2aff:fe6b:f537%2#123 Jul 2 00:23:34.438569 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: failed to init interface for address fe80::401:2aff:fe6b:f537%2 Jul 2 00:23:34.438569 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jul 2 00:23:34.432675 ntpd[1940]: bind(21) AF_INET6 fe80::401:2aff:fe6b:f537%2#123 flags 0x11 failed: Cannot assign requested address Jul 2 00:23:34.433738 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:23:34.432719 ntpd[1940]: unable to create socket on eth0 (5) for fe80::401:2aff:fe6b:f537%2#123 Jul 2 00:23:34.441993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:23:34.432737 ntpd[1940]: failed to init interface for address fe80::401:2aff:fe6b:f537%2 Jul 2 00:23:34.442025 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jul 2 00:23:34.432790 ntpd[1940]: Listening on routing socket on fd #21 for interface updates Jul 2 00:23:34.455512 jq[1968]: true Jul 2 00:23:34.458532 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:23:34.476112 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1822) Jul 2 00:23:34.461999 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:23:34.476292 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:23:34.476292 ntpd[1940]: 2 Jul 00:23:34 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:23:34.462035 ntpd[1940]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:23:34.499199 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1812 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:23:34.500229 update_engine[1946]: I0702 00:23:34.500013 1946 update_check_scheduler.cc:74] Next update check in 5m52s Jul 2 00:23:34.502335 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:23:34.514962 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 00:23:34.523741 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 00:23:34.556745 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 00:23:34.570680 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:23:34.646456 systemd-logind[1945]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:23:34.646498 systemd-logind[1945]: Watching system buttons on /dev/input/event2 (Sleep Button) Jul 2 00:23:34.646524 systemd-logind[1945]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:23:34.649859 systemd-logind[1945]: New seat seat0. Jul 2 00:23:34.654310 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:23:34.655858 extend-filesystems[1980]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:23:34.655858 extend-filesystems[1980]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:23:34.655858 extend-filesystems[1980]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:23:34.679107 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:23:34.663913 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:23:34.665193 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
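[Editor's note] The resize2fs/extend-filesystems lines above grow the root filesystem on nvme0n1p9 from 553472 to 1489915 blocks of 4 KiB. A two-line computation, using only those numbers, puts that at roughly 2.1 GiB before and 5.7 GiB after the resize:

    BLOCK = 4096
    old_bytes = 553472 * BLOCK       # ~2.11 GiB
    new_bytes = 1489915 * BLOCK      # ~5.68 GiB
    print(old_bytes / 2**30, new_bytes / 2**30)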
Jul 2 00:23:34.774265 coreos-metadata[1935]: Jul 02 00:23:34.774 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:23:34.775987 coreos-metadata[1935]: Jul 02 00:23:34.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 00:23:34.784993 coreos-metadata[1935]: Jul 02 00:23:34.784 INFO Fetch successful Jul 2 00:23:34.784993 coreos-metadata[1935]: Jul 02 00:23:34.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 00:23:34.789609 coreos-metadata[1935]: Jul 02 00:23:34.789 INFO Fetch successful Jul 2 00:23:34.789609 coreos-metadata[1935]: Jul 02 00:23:34.789 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 00:23:34.790965 coreos-metadata[1935]: Jul 02 00:23:34.790 INFO Fetch successful Jul 2 00:23:34.794033 bash[2023]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:23:34.796906 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:23:34.797699 coreos-metadata[1935]: Jul 02 00:23:34.790 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 00:23:34.809464 coreos-metadata[1935]: Jul 02 00:23:34.808 INFO Fetch successful Jul 2 00:23:34.809464 coreos-metadata[1935]: Jul 02 00:23:34.809 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 00:23:34.811822 coreos-metadata[1935]: Jul 02 00:23:34.811 INFO Fetch failed with 404: resource not found Jul 2 00:23:34.813715 systemd[1]: Starting sshkeys.service... Jul 2 00:23:34.815863 coreos-metadata[1935]: Jul 02 00:23:34.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 00:23:34.826880 coreos-metadata[1935]: Jul 02 00:23:34.825 INFO Fetch successful Jul 2 00:23:34.826880 coreos-metadata[1935]: Jul 02 00:23:34.825 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 00:23:34.827692 coreos-metadata[1935]: Jul 02 00:23:34.827 INFO Fetch successful Jul 2 00:23:34.827692 coreos-metadata[1935]: Jul 02 00:23:34.827 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 00:23:34.846984 coreos-metadata[1935]: Jul 02 00:23:34.846 INFO Fetch successful Jul 2 00:23:34.846984 coreos-metadata[1935]: Jul 02 00:23:34.846 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 00:23:34.847949 coreos-metadata[1935]: Jul 02 00:23:34.847 INFO Fetch successful Jul 2 00:23:34.848094 coreos-metadata[1935]: Jul 02 00:23:34.848 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 00:23:34.850138 coreos-metadata[1935]: Jul 02 00:23:34.848 INFO Fetch successful Jul 2 00:23:34.908734 systemd-networkd[1812]: eth0: Gained IPv6LL Jul 2 00:23:34.921938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:23:34.925355 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:23:34.965271 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 00:23:34.989946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:35.010340 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:23:35.010572 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:23:35.013308 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Jul 2 00:23:35.011293 dbus-daemon[1936]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1993 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:23:35.015186 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 00:23:35.048873 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:23:35.063493 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 00:23:35.079018 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:23:35.095407 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 00:23:35.196117 polkitd[2082]: Started polkitd version 121 Jul 2 00:23:35.226615 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 00:23:35.250338 amazon-ssm-agent[2056]: Initializing new seelog logger Jul 2 00:23:35.250729 amazon-ssm-agent[2056]: New Seelog Logger Creation Complete Jul 2 00:23:35.250729 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.250729 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.255270 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 processing appconfig overrides Jul 2 00:23:35.260051 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.260051 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.260189 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 processing appconfig overrides Jul 2 00:23:35.261230 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.261230 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.263700 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 processing appconfig overrides Jul 2 00:23:35.263700 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO Proxy environment variables: Jul 2 00:23:35.304776 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.304776 amazon-ssm-agent[2056]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:23:35.304945 amazon-ssm-agent[2056]: 2024/07/02 00:23:35 processing appconfig overrides Jul 2 00:23:35.325888 polkitd[2082]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:23:35.325978 polkitd[2082]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:23:35.335742 polkitd[2082]: Finished loading, compiling and executing 2 rules Jul 2 00:23:35.336606 dbus-daemon[1936]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:23:35.336833 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 00:23:35.338850 polkitd[2082]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:23:35.375388 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO https_proxy: Jul 2 00:23:35.439070 systemd-resolved[1766]: System hostname changed to 'ip-172-31-19-189'. 
Jul 2 00:23:35.439179 systemd-hostnamed[1993]: Hostname set to (transient) Jul 2 00:23:35.470927 coreos-metadata[2087]: Jul 02 00:23:35.466 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:23:35.472935 coreos-metadata[2087]: Jul 02 00:23:35.471 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 00:23:35.474302 coreos-metadata[2087]: Jul 02 00:23:35.474 INFO Fetch successful Jul 2 00:23:35.474302 coreos-metadata[2087]: Jul 02 00:23:35.474 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:23:35.476460 coreos-metadata[2087]: Jul 02 00:23:35.475 INFO Fetch successful Jul 2 00:23:35.499239 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO http_proxy: Jul 2 00:23:35.497367 unknown[2087]: wrote ssh authorized keys file for user: core Jul 2 00:23:35.531468 locksmithd[1995]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:23:35.598397 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO no_proxy: Jul 2 00:23:35.616408 update-ssh-keys[2138]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:23:35.619735 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 00:23:35.632945 systemd[1]: Finished sshkeys.service. Jul 2 00:23:35.662327 containerd[1982]: time="2024-07-02T00:23:35.662202106Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:23:35.670772 sshd_keygen[1983]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:23:35.695737 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO Checking if agent identity type OnPrem can be assumed Jul 2 00:23:35.792110 containerd[1982]: time="2024-07-02T00:23:35.791992523Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:23:35.792110 containerd[1982]: time="2024-07-02T00:23:35.792066043Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.800274 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO Checking if agent identity type EC2 can be assumed Jul 2 00:23:35.801587 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.803829068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.803881584Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804230471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804259890Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804369367Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804429323Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804444823Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.807823 containerd[1982]: time="2024-07-02T00:23:35.804522274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.809922 containerd[1982]: time="2024-07-02T00:23:35.809881649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.810061 containerd[1982]: time="2024-07-02T00:23:35.810038158Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:23:35.810134 containerd[1982]: time="2024-07-02T00:23:35.810118568Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:23:35.810534 containerd[1982]: time="2024-07-02T00:23:35.810502194Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:23:35.811971 containerd[1982]: time="2024-07-02T00:23:35.811700400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:23:35.811971 containerd[1982]: time="2024-07-02T00:23:35.811819854Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:23:35.811971 containerd[1982]: time="2024-07-02T00:23:35.811842077Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:23:35.814993 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823333927Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823486119Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823509738Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823661714Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823698554Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823716764Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.823750576Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824029497Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824057769Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824080469Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824102491Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824126395Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824151624Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.824714 containerd[1982]: time="2024-07-02T00:23:35.824170653Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824186973Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824206057Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824223815Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824241665Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824254447Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824367539Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824590978Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824610879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824623210Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824647196Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824751043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824768950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824784573Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:35.825317 containerd[1982]: time="2024-07-02T00:23:35.824799774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.824817402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.824834804Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.824854669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.824873289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.824893387Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825051492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825074805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825095872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825114958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825136108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825170101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825191945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:23:35.825823 containerd[1982]: time="2024-07-02T00:23:35.825220330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:23:35.830855 containerd[1982]: time="2024-07-02T00:23:35.825611419Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:23:35.831138 containerd[1982]: time="2024-07-02T00:23:35.830941354Z" level=info msg="Connect containerd service" Jul 2 00:23:35.831138 containerd[1982]: time="2024-07-02T00:23:35.831025213Z" level=info msg="using legacy CRI server" Jul 2 00:23:35.831138 containerd[1982]: time="2024-07-02T00:23:35.831038722Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:23:35.831349 containerd[1982]: time="2024-07-02T00:23:35.831278034Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.835916411Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:23:35.837017 
containerd[1982]: time="2024-07-02T00:23:35.836015513Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836045267Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836090091Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836110328Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836172842Z" level=info msg="Start subscribing containerd event" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836416128Z" level=info msg="Start recovering state" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836869269Z" level=info msg="Start event monitor" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836894798Z" level=info msg="Start snapshots syncer" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836908359Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:23:35.837017 containerd[1982]: time="2024-07-02T00:23:35.836955785Z" level=info msg="Start streaming server" Jul 2 00:23:35.837565 containerd[1982]: time="2024-07-02T00:23:35.837127743Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:23:35.837565 containerd[1982]: time="2024-07-02T00:23:35.837222351Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:23:35.837565 containerd[1982]: time="2024-07-02T00:23:35.837367678Z" level=info msg="containerd successfully booted in 0.179424s" Jul 2 00:23:35.839798 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:23:35.863964 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:23:35.864330 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:23:35.873396 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:23:35.897115 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO Agent will take identity from EC2 Jul 2 00:23:35.915708 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:23:35.926148 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:23:35.939238 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:23:35.941469 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 2 00:23:35.996553 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:23:36.096929 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jul 2 00:23:36.133077 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [Registrar] Starting registrar module Jul 2 00:23:36.133330 amazon-ssm-agent[2056]: 2024-07-02 00:23:35 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 00:23:36.133330 amazon-ssm-agent[2056]: 2024-07-02 00:23:36 INFO [EC2Identity] EC2 registration was successful. Jul 2 00:23:36.133330 amazon-ssm-agent[2056]: 2024-07-02 00:23:36 INFO [CredentialRefresher] credentialRefresher has started Jul 2 00:23:36.133330 amazon-ssm-agent[2056]: 2024-07-02 00:23:36 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 00:23:36.133330 amazon-ssm-agent[2056]: 2024-07-02 00:23:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 00:23:36.196740 amazon-ssm-agent[2056]: 2024-07-02 00:23:36 INFO [CredentialRefresher] Next credential rotation will be in 30.908326575333334 minutes Jul 2 00:23:36.833208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:36.835561 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:23:36.837566 systemd[1]: Startup finished in 891ms (kernel) + 8.915s (initrd) + 8.183s (userspace) = 17.990s. 
Jul 2 00:23:36.982067 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:23:37.165917 amazon-ssm-agent[2056]: 2024-07-02 00:23:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 00:23:37.266363 amazon-ssm-agent[2056]: 2024-07-02 00:23:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2183) started Jul 2 00:23:37.367831 amazon-ssm-agent[2056]: 2024-07-02 00:23:37 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 00:23:37.389795 ntpd[1940]: Listen normally on 6 eth0 [fe80::401:2aff:fe6b:f537%2]:123 Jul 2 00:23:37.390334 ntpd[1940]: 2 Jul 00:23:37 ntpd[1940]: Listen normally on 6 eth0 [fe80::401:2aff:fe6b:f537%2]:123 Jul 2 00:23:38.043239 kubelet[2177]: E0702 00:23:38.043130 2177 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:23:38.046017 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:23:38.046155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:23:38.046607 systemd[1]: kubelet.service: Consumed 1.075s CPU time. Jul 2 00:23:42.276282 systemd-resolved[1766]: Clock change detected. Flushing caches. Jul 2 00:23:44.503052 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:23:44.510881 systemd[1]: Started sshd@0-172.31.19.189:22-147.75.109.163:57304.service - OpenSSH per-connection server daemon (147.75.109.163:57304). Jul 2 00:23:44.763521 sshd[2201]: Accepted publickey for core from 147.75.109.163 port 57304 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:44.768577 sshd[2201]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:44.797124 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:23:44.806316 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:23:44.810792 systemd-logind[1945]: New session 1 of user core. Jul 2 00:23:44.834082 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:23:44.842232 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:23:44.851360 (systemd)[2205]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:45.010107 systemd[2205]: Queued start job for default target default.target. Jul 2 00:23:45.016732 systemd[2205]: Created slice app.slice - User Application Slice. Jul 2 00:23:45.016778 systemd[2205]: Reached target paths.target - Paths. Jul 2 00:23:45.016798 systemd[2205]: Reached target timers.target - Timers. Jul 2 00:23:45.021854 systemd[2205]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:23:45.049749 systemd[2205]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:23:45.049912 systemd[2205]: Reached target sockets.target - Sockets. Jul 2 00:23:45.049934 systemd[2205]: Reached target basic.target - Basic System. Jul 2 00:23:45.049989 systemd[2205]: Reached target default.target - Main User Target. 
Jul 2 00:23:45.050030 systemd[2205]: Startup finished in 190ms. Jul 2 00:23:45.050241 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:23:45.056681 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:23:45.206873 systemd[1]: Started sshd@1-172.31.19.189:22-147.75.109.163:57310.service - OpenSSH per-connection server daemon (147.75.109.163:57310). Jul 2 00:23:45.371306 sshd[2216]: Accepted publickey for core from 147.75.109.163 port 57310 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:45.373115 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:45.381329 systemd-logind[1945]: New session 2 of user core. Jul 2 00:23:45.392815 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:23:45.516171 sshd[2216]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:45.520593 systemd[1]: sshd@1-172.31.19.189:22-147.75.109.163:57310.service: Deactivated successfully. Jul 2 00:23:45.522653 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:23:45.524433 systemd-logind[1945]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:23:45.525720 systemd-logind[1945]: Removed session 2. Jul 2 00:23:45.555901 systemd[1]: Started sshd@2-172.31.19.189:22-147.75.109.163:57324.service - OpenSSH per-connection server daemon (147.75.109.163:57324). Jul 2 00:23:45.710486 sshd[2223]: Accepted publickey for core from 147.75.109.163 port 57324 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:45.712147 sshd[2223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:45.717736 systemd-logind[1945]: New session 3 of user core. Jul 2 00:23:45.725692 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:23:45.846509 sshd[2223]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:45.851496 systemd[1]: sshd@2-172.31.19.189:22-147.75.109.163:57324.service: Deactivated successfully. Jul 2 00:23:45.855524 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:23:45.861185 systemd-logind[1945]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:23:45.867556 systemd-logind[1945]: Removed session 3. Jul 2 00:23:45.890259 systemd[1]: Started sshd@3-172.31.19.189:22-147.75.109.163:57326.service - OpenSSH per-connection server daemon (147.75.109.163:57326). Jul 2 00:23:46.069634 sshd[2230]: Accepted publickey for core from 147.75.109.163 port 57326 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:46.071243 sshd[2230]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:46.077841 systemd-logind[1945]: New session 4 of user core. Jul 2 00:23:46.086726 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:23:46.206605 sshd[2230]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:46.211514 systemd[1]: sshd@3-172.31.19.189:22-147.75.109.163:57326.service: Deactivated successfully. Jul 2 00:23:46.214737 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:23:46.216079 systemd-logind[1945]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:23:46.217266 systemd-logind[1945]: Removed session 4. Jul 2 00:23:46.254813 systemd[1]: Started sshd@4-172.31.19.189:22-147.75.109.163:57342.service - OpenSSH per-connection server daemon (147.75.109.163:57342). 
Jul 2 00:23:46.412965 sshd[2237]: Accepted publickey for core from 147.75.109.163 port 57342 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:46.416036 sshd[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:46.423346 systemd-logind[1945]: New session 5 of user core. Jul 2 00:23:46.429697 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:23:46.579017 sudo[2240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:23:46.579748 sudo[2240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:46.593492 sudo[2240]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:46.616444 sshd[2237]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:46.620477 systemd[1]: sshd@4-172.31.19.189:22-147.75.109.163:57342.service: Deactivated successfully. Jul 2 00:23:46.625191 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:23:46.627534 systemd-logind[1945]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:23:46.629461 systemd-logind[1945]: Removed session 5. Jul 2 00:23:46.652206 systemd[1]: Started sshd@5-172.31.19.189:22-147.75.109.163:57344.service - OpenSSH per-connection server daemon (147.75.109.163:57344). Jul 2 00:23:46.813829 sshd[2245]: Accepted publickey for core from 147.75.109.163 port 57344 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:46.815685 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:46.821201 systemd-logind[1945]: New session 6 of user core. Jul 2 00:23:46.828716 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:23:46.933922 sudo[2249]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:23:46.934642 sudo[2249]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:46.945849 sudo[2249]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:46.954859 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:23:46.955224 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:46.972035 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:46.985182 auditctl[2252]: No rules Jul 2 00:23:46.985719 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:23:46.986032 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:46.995566 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:23:47.029661 augenrules[2270]: No rules Jul 2 00:23:47.031837 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:23:47.034507 sudo[2248]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:47.059901 sshd[2245]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:47.068700 systemd[1]: sshd@5-172.31.19.189:22-147.75.109.163:57344.service: Deactivated successfully. Jul 2 00:23:47.071601 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:23:47.073154 systemd-logind[1945]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:23:47.076303 systemd-logind[1945]: Removed session 6. Jul 2 00:23:47.091321 systemd[1]: Started sshd@6-172.31.19.189:22-147.75.109.163:57358.service - OpenSSH per-connection server daemon (147.75.109.163:57358). 
Jul 2 00:23:47.275220 sshd[2278]: Accepted publickey for core from 147.75.109.163 port 57358 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:23:47.276805 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:23:47.285952 systemd-logind[1945]: New session 7 of user core. Jul 2 00:23:47.292747 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:23:47.391774 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:23:47.392154 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:23:48.937727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:23:48.943856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:48.948246 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:23:48.948359 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:23:48.948854 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:48.964237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:48.996770 systemd[1]: Reloading requested from client PID 2320 ('systemctl') (unit session-7.scope)... Jul 2 00:23:48.996794 systemd[1]: Reloading... Jul 2 00:23:49.146567 zram_generator::config[2358]: No configuration found. Jul 2 00:23:49.352499 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:23:49.498955 systemd[1]: Reloading finished in 501 ms. Jul 2 00:23:49.561555 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:23:49.561689 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:23:49.562164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:49.571532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:23:50.210600 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:23:50.237530 (kubelet)[2415]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:23:50.359479 kubelet[2415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:23:50.359479 kubelet[2415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:23:50.359479 kubelet[2415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:23:50.359479 kubelet[2415]: I0702 00:23:50.357932 2415 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:23:51.263351 kubelet[2415]: I0702 00:23:51.263311 2415 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:23:51.263351 kubelet[2415]: I0702 00:23:51.263347 2415 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:23:51.263646 kubelet[2415]: I0702 00:23:51.263624 2415 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:23:51.291975 kubelet[2415]: I0702 00:23:51.291027 2415 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:23:51.320427 kubelet[2415]: I0702 00:23:51.320387 2415 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 00:23:51.321054 kubelet[2415]: I0702 00:23:51.321030 2415 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:23:51.321260 kubelet[2415]: I0702 00:23:51.321238 2415 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:23:51.321399 kubelet[2415]: I0702 00:23:51.321269 2415 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:23:51.321399 kubelet[2415]: I0702 00:23:51.321283 2415 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:23:51.323612 kubelet[2415]: I0702 00:23:51.323578 2415 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:23:51.325659 kubelet[2415]: I0702 00:23:51.325631 2415 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:23:51.325659 kubelet[2415]: I0702 00:23:51.325667 2415 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:23:51.325799 kubelet[2415]: I0702 00:23:51.325703 2415 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:23:51.325799 kubelet[2415]: I0702 00:23:51.325719 2415 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:23:51.328438 kubelet[2415]: E0702 00:23:51.327533 
2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:51.328438 kubelet[2415]: E0702 00:23:51.327568 2415 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:51.328605 kubelet[2415]: I0702 00:23:51.328582 2415 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:23:51.333397 kubelet[2415]: W0702 00:23:51.333348 2415 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:23:51.334847 kubelet[2415]: I0702 00:23:51.334487 2415 server.go:1232] "Started kubelet" Jul 2 00:23:51.339407 kubelet[2415]: I0702 00:23:51.339372 2415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:23:51.340541 kubelet[2415]: E0702 00:23:51.340175 2415 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:23:51.340541 kubelet[2415]: E0702 00:23:51.340208 2415 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:23:51.347336 kubelet[2415]: E0702 00:23:51.347188 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d98230c2bf1", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 334431729, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 334431729, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.347650 kubelet[2415]: W0702 00:23:51.347613 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:23:51.347650 kubelet[2415]: E0702 00:23:51.347641 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:23:51.347762 kubelet[2415]: W0702 00:23:51.347699 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.19.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:23:51.347762 kubelet[2415]: E0702 00:23:51.347714 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.19.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:23:51.349257 kubelet[2415]: E0702 00:23:51.349163 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982364165d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 340193373, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 340193373, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.349437 kubelet[2415]: I0702 00:23:51.349298 2415 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:23:51.350191 kubelet[2415]: I0702 00:23:51.350158 2415 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:23:51.351996 kubelet[2415]: I0702 00:23:51.351976 2415 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:23:51.352384 kubelet[2415]: I0702 00:23:51.352112 2415 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:23:51.352384 kubelet[2415]: I0702 00:23:51.352349 2415 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:23:51.354228 kubelet[2415]: I0702 00:23:51.354202 2415 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:23:51.354317 kubelet[2415]: I0702 00:23:51.354292 2415 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:23:51.356309 kubelet[2415]: E0702 00:23:51.355158 2415 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 2 00:23:51.356309 kubelet[2415]: W0702 00:23:51.355308 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 00:23:51.356309 kubelet[2415]: E0702 00:23:51.355333 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 00:23:51.440299 kubelet[2415]: E0702 00:23:51.439476 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982934066c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437706860, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437706860, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.442093 kubelet[2415]: I0702 00:23:51.442063 2415 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:23:51.442521 kubelet[2415]: I0702 00:23:51.442209 2415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:23:51.442521 kubelet[2415]: I0702 00:23:51.442236 2415 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:23:51.448566 kubelet[2415]: I0702 00:23:51.447765 2415 policy_none.go:49] "None policy: Start" Jul 2 00:23:51.448566 kubelet[2415]: E0702 00:23:51.448104 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d9829342c9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437716636, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437716636, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.450021 kubelet[2415]: I0702 00:23:51.449705 2415 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:23:51.450021 kubelet[2415]: I0702 00:23:51.449730 2415 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:23:51.456562 kubelet[2415]: E0702 00:23:51.455933 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982935a585", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437813125, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437813125, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.457478 kubelet[2415]: I0702 00:23:51.457225 2415 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.189" Jul 2 00:23:51.459093 kubelet[2415]: E0702 00:23:51.459062 2415 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.189" Jul 2 00:23:51.459780 kubelet[2415]: E0702 00:23:51.459710 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982934066c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437706860, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 457142172, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982934066c" is forbidden: User 
"system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.461939 kubelet[2415]: E0702 00:23:51.461482 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d9829342c9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437716636, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 457149143, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d9829342c9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.464259 kubelet[2415]: E0702 00:23:51.464170 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982935a585", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437813125, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 457154871, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982935a585" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.470054 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 00:23:51.485121 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 00:23:51.496899 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 00:23:51.509631 kubelet[2415]: I0702 00:23:51.507046 2415 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:23:51.509631 kubelet[2415]: I0702 00:23:51.507374 2415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:23:51.523524 kubelet[2415]: E0702 00:23:51.520359 2415 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.19.189\" not found" Jul 2 00:23:51.525302 kubelet[2415]: E0702 00:23:51.525108 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982dee6501", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 517029633, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 517029633, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.533443 kubelet[2415]: I0702 00:23:51.533406 2415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:23:51.536347 kubelet[2415]: I0702 00:23:51.536149 2415 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:23:51.536347 kubelet[2415]: I0702 00:23:51.536183 2415 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:23:51.536347 kubelet[2415]: I0702 00:23:51.536212 2415 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:23:51.536347 kubelet[2415]: E0702 00:23:51.536271 2415 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 00:23:51.539356 kubelet[2415]: W0702 00:23:51.539254 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:23:51.539356 kubelet[2415]: E0702 00:23:51.539297 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:23:51.558809 kubelet[2415]: E0702 00:23:51.558774 2415 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 2 00:23:51.661642 kubelet[2415]: I0702 00:23:51.661590 2415 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.189" Jul 2 00:23:51.663213 kubelet[2415]: E0702 00:23:51.663119 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982934066c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437706860, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 661531409, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982934066c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.663513 kubelet[2415]: E0702 00:23:51.663443 2415 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.189" Jul 2 00:23:51.664629 kubelet[2415]: E0702 00:23:51.664538 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d9829342c9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437716636, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 661547019, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d9829342c9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:51.667249 kubelet[2415]: E0702 00:23:51.666804 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982935a585", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437813125, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 661551707, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982935a585" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:51.960877 kubelet[2415]: E0702 00:23:51.960723 2415 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.19.189\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 2 00:23:52.065160 kubelet[2415]: I0702 00:23:52.065125 2415 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.189" Jul 2 00:23:52.066315 kubelet[2415]: E0702 00:23:52.066284 2415 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.19.189" Jul 2 00:23:52.066423 kubelet[2415]: E0702 00:23:52.066278 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982934066c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.19.189 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437706860, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 65061642, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982934066c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:52.067425 kubelet[2415]: E0702 00:23:52.067351 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d9829342c9c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.19.189 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437716636, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 65076154, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d9829342c9c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:23:52.068475 kubelet[2415]: E0702 00:23:52.068388 2415 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189.17de3d982935a585", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.19.189", UID:"172.31.19.189", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.19.189 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.19.189"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 23, 51, 437813125, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 65084565, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.19.189"}': 'events "172.31.19.189.17de3d982935a585" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:23:52.153101 kubelet[2415]: W0702 00:23:52.153062 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:23:52.153101 kubelet[2415]: E0702 00:23:52.153104 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:23:52.219634 kubelet[2415]: W0702 00:23:52.219135 2415 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.19.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:23:52.219634 kubelet[2415]: E0702 00:23:52.219200 2415 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.19.189" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:23:52.266671 kubelet[2415]: I0702 00:23:52.266618 2415 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 00:23:52.328300 kubelet[2415]: E0702 00:23:52.328253 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:52.738618 kubelet[2415]: E0702 00:23:52.738579 2415 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.19.189" not found Jul 2 00:23:52.776860 kubelet[2415]: E0702 00:23:52.776818 2415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.19.189\" not found" node="172.31.19.189" Jul 2 00:23:52.868598 kubelet[2415]: I0702 00:23:52.868567 2415 kubelet_node_status.go:70] "Attempting to register node" node="172.31.19.189" Jul 2 00:23:52.885079 kubelet[2415]: I0702 00:23:52.885039 2415 kubelet_node_status.go:73] "Successfully registered node" node="172.31.19.189" Jul 2 00:23:52.908221 kubelet[2415]: E0702 00:23:52.908067 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.008902 kubelet[2415]: E0702 00:23:53.008731 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.109276 kubelet[2415]: E0702 00:23:53.109241 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.209599 kubelet[2415]: E0702 00:23:53.209551 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.250874 sudo[2281]: pam_unix(sudo:session): session closed for user root Jul 2 00:23:53.274088 sshd[2278]: pam_unix(sshd:session): session closed for user core Jul 2 00:23:53.280489 systemd-logind[1945]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:23:53.281958 systemd[1]: sshd@6-172.31.19.189:22-147.75.109.163:57358.service: Deactivated successfully. Jul 2 00:23:53.286361 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:23:53.287654 systemd-logind[1945]: Removed session 7. 
Jul 2 00:23:53.310612 kubelet[2415]: E0702 00:23:53.310570 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.329014 kubelet[2415]: E0702 00:23:53.328971 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:53.411116 kubelet[2415]: E0702 00:23:53.411076 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.511822 kubelet[2415]: E0702 00:23:53.511779 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.612511 kubelet[2415]: E0702 00:23:53.612288 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.713144 kubelet[2415]: E0702 00:23:53.713084 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.813234 kubelet[2415]: E0702 00:23:53.813181 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:53.914022 kubelet[2415]: E0702 00:23:53.913894 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.014569 kubelet[2415]: E0702 00:23:54.014525 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.115383 kubelet[2415]: E0702 00:23:54.115337 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.216091 kubelet[2415]: E0702 00:23:54.215969 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.317114 kubelet[2415]: E0702 00:23:54.316964 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.329296 kubelet[2415]: E0702 00:23:54.329224 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:54.417395 kubelet[2415]: E0702 00:23:54.417256 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.518478 kubelet[2415]: E0702 00:23:54.518331 2415 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.19.189\" not found" Jul 2 00:23:54.620087 kubelet[2415]: I0702 00:23:54.620048 2415 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 00:23:54.620850 containerd[1982]: time="2024-07-02T00:23:54.620801356Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:23:54.622385 kubelet[2415]: I0702 00:23:54.622358 2415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 00:23:55.329352 kubelet[2415]: I0702 00:23:55.329276 2415 apiserver.go:52] "Watching apiserver" Jul 2 00:23:55.329352 kubelet[2415]: E0702 00:23:55.329317 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:55.347283 kubelet[2415]: I0702 00:23:55.347223 2415 topology_manager.go:215] "Topology Admit Handler" podUID="bd297f40-e66d-4485-9da5-7fff69335c13" podNamespace="calico-system" podName="calico-node-p8n78" Jul 2 00:23:55.347698 kubelet[2415]: I0702 00:23:55.347676 2415 topology_manager.go:215] "Topology Admit Handler" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" podNamespace="calico-system" podName="csi-node-driver-cfsz4" Jul 2 00:23:55.351078 kubelet[2415]: I0702 00:23:55.347760 2415 topology_manager.go:215] "Topology Admit Handler" podUID="93fc7db6-e5e9-47da-a777-e36905cf6057" podNamespace="kube-system" podName="kube-proxy-vf5r5" Jul 2 00:23:55.351078 kubelet[2415]: E0702 00:23:55.349641 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:23:55.355341 kubelet[2415]: I0702 00:23:55.355309 2415 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:23:55.371291 systemd[1]: Created slice kubepods-besteffort-pod93fc7db6_e5e9_47da_a777_e36905cf6057.slice - libcontainer container kubepods-besteffort-pod93fc7db6_e5e9_47da_a777_e36905cf6057.slice. 
Jul 2 00:23:55.383512 kubelet[2415]: I0702 00:23:55.382153 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4ft\" (UniqueName: \"kubernetes.io/projected/93fc7db6-e5e9-47da-a777-e36905cf6057-kube-api-access-2z4ft\") pod \"kube-proxy-vf5r5\" (UID: \"93fc7db6-e5e9-47da-a777-e36905cf6057\") " pod="kube-system/kube-proxy-vf5r5" Jul 2 00:23:55.383512 kubelet[2415]: I0702 00:23:55.382252 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-xtables-lock\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.383512 kubelet[2415]: I0702 00:23:55.382291 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-policysync\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.383512 kubelet[2415]: I0702 00:23:55.382399 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-cni-bin-dir\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.383512 kubelet[2415]: I0702 00:23:55.382437 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-cni-log-dir\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384395 kubelet[2415]: I0702 00:23:55.382491 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj6q4\" (UniqueName: \"kubernetes.io/projected/21a555aa-f137-4779-be5f-76c30fd4078f-kube-api-access-dj6q4\") pod \"csi-node-driver-cfsz4\" (UID: \"21a555aa-f137-4779-be5f-76c30fd4078f\") " pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:23:55.384395 kubelet[2415]: I0702 00:23:55.382527 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9hpv\" (UniqueName: \"kubernetes.io/projected/bd297f40-e66d-4485-9da5-7fff69335c13-kube-api-access-t9hpv\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384395 kubelet[2415]: I0702 00:23:55.382563 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/21a555aa-f137-4779-be5f-76c30fd4078f-socket-dir\") pod \"csi-node-driver-cfsz4\" (UID: \"21a555aa-f137-4779-be5f-76c30fd4078f\") " pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:23:55.384395 kubelet[2415]: I0702 00:23:55.382605 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/21a555aa-f137-4779-be5f-76c30fd4078f-registration-dir\") pod \"csi-node-driver-cfsz4\" (UID: \"21a555aa-f137-4779-be5f-76c30fd4078f\") " pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:23:55.384395 kubelet[2415]: I0702 
00:23:55.382637 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-lib-modules\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384836 kubelet[2415]: I0702 00:23:55.382669 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd297f40-e66d-4485-9da5-7fff69335c13-tigera-ca-bundle\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384836 kubelet[2415]: I0702 00:23:55.382861 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-var-run-calico\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384836 kubelet[2415]: I0702 00:23:55.382899 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-cni-net-dir\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384836 kubelet[2415]: I0702 00:23:55.382935 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-flexvol-driver-host\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.384836 kubelet[2415]: I0702 00:23:55.382967 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/93fc7db6-e5e9-47da-a777-e36905cf6057-kube-proxy\") pod \"kube-proxy-vf5r5\" (UID: \"93fc7db6-e5e9-47da-a777-e36905cf6057\") " pod="kube-system/kube-proxy-vf5r5" Jul 2 00:23:55.385059 kubelet[2415]: I0702 00:23:55.383008 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/93fc7db6-e5e9-47da-a777-e36905cf6057-lib-modules\") pod \"kube-proxy-vf5r5\" (UID: \"93fc7db6-e5e9-47da-a777-e36905cf6057\") " pod="kube-system/kube-proxy-vf5r5" Jul 2 00:23:55.385059 kubelet[2415]: I0702 00:23:55.383046 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bd297f40-e66d-4485-9da5-7fff69335c13-node-certs\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.385059 kubelet[2415]: I0702 00:23:55.383100 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd297f40-e66d-4485-9da5-7fff69335c13-var-lib-calico\") pod \"calico-node-p8n78\" (UID: \"bd297f40-e66d-4485-9da5-7fff69335c13\") " pod="calico-system/calico-node-p8n78" Jul 2 00:23:55.385059 kubelet[2415]: I0702 00:23:55.383140 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/21a555aa-f137-4779-be5f-76c30fd4078f-varrun\") pod \"csi-node-driver-cfsz4\" (UID: \"21a555aa-f137-4779-be5f-76c30fd4078f\") " pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:23:55.385059 kubelet[2415]: I0702 00:23:55.383179 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/21a555aa-f137-4779-be5f-76c30fd4078f-kubelet-dir\") pod \"csi-node-driver-cfsz4\" (UID: \"21a555aa-f137-4779-be5f-76c30fd4078f\") " pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:23:55.385373 kubelet[2415]: I0702 00:23:55.383213 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/93fc7db6-e5e9-47da-a777-e36905cf6057-xtables-lock\") pod \"kube-proxy-vf5r5\" (UID: \"93fc7db6-e5e9-47da-a777-e36905cf6057\") " pod="kube-system/kube-proxy-vf5r5" Jul 2 00:23:55.402844 systemd[1]: Created slice kubepods-besteffort-podbd297f40_e66d_4485_9da5_7fff69335c13.slice - libcontainer container kubepods-besteffort-podbd297f40_e66d_4485_9da5_7fff69335c13.slice. Jul 2 00:23:55.489381 kubelet[2415]: E0702 00:23:55.487178 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.489381 kubelet[2415]: W0702 00:23:55.487582 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.489381 kubelet[2415]: E0702 00:23:55.487621 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.489381 kubelet[2415]: E0702 00:23:55.489322 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.489381 kubelet[2415]: W0702 00:23:55.489340 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.490382 kubelet[2415]: E0702 00:23:55.489893 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.490972 kubelet[2415]: E0702 00:23:55.490843 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.490972 kubelet[2415]: W0702 00:23:55.490860 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.490972 kubelet[2415]: E0702 00:23:55.490934 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:55.492545 kubelet[2415]: E0702 00:23:55.491522 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.492545 kubelet[2415]: W0702 00:23:55.491536 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.492982 kubelet[2415]: E0702 00:23:55.492754 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.493245 kubelet[2415]: E0702 00:23:55.493094 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.493245 kubelet[2415]: W0702 00:23:55.493109 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.493563 kubelet[2415]: E0702 00:23:55.493550 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.497661 kubelet[2415]: W0702 00:23:55.495916 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.498820 kubelet[2415]: E0702 00:23:55.498392 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.498820 kubelet[2415]: W0702 00:23:55.498414 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.498820 kubelet[2415]: E0702 00:23:55.498705 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.499447 kubelet[2415]: E0702 00:23:55.499249 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.499447 kubelet[2415]: W0702 00:23:55.499263 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.499447 kubelet[2415]: E0702 00:23:55.499284 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:55.500234 kubelet[2415]: E0702 00:23:55.500078 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.500234 kubelet[2415]: W0702 00:23:55.500092 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.500234 kubelet[2415]: E0702 00:23:55.500111 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.500234 kubelet[2415]: E0702 00:23:55.500147 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.500749 kubelet[2415]: E0702 00:23:55.500580 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.500749 kubelet[2415]: W0702 00:23:55.500593 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.500749 kubelet[2415]: E0702 00:23:55.500612 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.501191 kubelet[2415]: E0702 00:23:55.501045 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.501191 kubelet[2415]: W0702 00:23:55.501058 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.501191 kubelet[2415]: E0702 00:23:55.501077 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.501881 kubelet[2415]: E0702 00:23:55.501797 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.501881 kubelet[2415]: W0702 00:23:55.501810 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.501881 kubelet[2415]: E0702 00:23:55.501828 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.501881 kubelet[2415]: E0702 00:23:55.501861 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:23:55.502898 kubelet[2415]: E0702 00:23:55.502885 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.503058 kubelet[2415]: W0702 00:23:55.502983 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.503058 kubelet[2415]: E0702 00:23:55.503006 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.562709 kubelet[2415]: E0702 00:23:55.562666 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.564526 kubelet[2415]: W0702 00:23:55.563774 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.564526 kubelet[2415]: E0702 00:23:55.563815 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.570545 kubelet[2415]: E0702 00:23:55.570518 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.570709 kubelet[2415]: W0702 00:23:55.570691 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.570846 kubelet[2415]: E0702 00:23:55.570831 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.573407 kubelet[2415]: E0702 00:23:55.573385 2415 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:23:55.573723 kubelet[2415]: W0702 00:23:55.573599 2415 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:23:55.574392 kubelet[2415]: E0702 00:23:55.574374 2415 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:23:55.699190 containerd[1982]: time="2024-07-02T00:23:55.699070808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf5r5,Uid:93fc7db6-e5e9-47da-a777-e36905cf6057,Namespace:kube-system,Attempt:0,}" Jul 2 00:23:55.714593 containerd[1982]: time="2024-07-02T00:23:55.714030264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p8n78,Uid:bd297f40-e66d-4485-9da5-7fff69335c13,Namespace:calico-system,Attempt:0,}" Jul 2 00:23:56.330114 kubelet[2415]: E0702 00:23:56.330062 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:56.410631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount148524681.mount: Deactivated successfully. 
Jul 2 00:23:56.430909 containerd[1982]: time="2024-07-02T00:23:56.430847122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:56.436488 containerd[1982]: time="2024-07-02T00:23:56.435325115Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:56.436627 containerd[1982]: time="2024-07-02T00:23:56.436518351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:23:56.438105 containerd[1982]: time="2024-07-02T00:23:56.438050363Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:23:56.440193 containerd[1982]: time="2024-07-02T00:23:56.440152848Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:56.448893 containerd[1982]: time="2024-07-02T00:23:56.448849312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:23:56.450184 containerd[1982]: time="2024-07-02T00:23:56.450145866Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 748.271596ms" Jul 2 00:23:56.454106 containerd[1982]: time="2024-07-02T00:23:56.454061748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 739.904351ms" Jul 2 00:23:56.537239 kubelet[2415]: E0702 00:23:56.536865 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:23:56.774252 containerd[1982]: time="2024-07-02T00:23:56.773828572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:56.774252 containerd[1982]: time="2024-07-02T00:23:56.773891054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:56.774252 containerd[1982]: time="2024-07-02T00:23:56.773919213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:56.774252 containerd[1982]: time="2024-07-02T00:23:56.774014297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:56.795703 containerd[1982]: time="2024-07-02T00:23:56.793918377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:56.795965 containerd[1982]: time="2024-07-02T00:23:56.795658810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:56.800596 containerd[1982]: time="2024-07-02T00:23:56.796141579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:56.800596 containerd[1982]: time="2024-07-02T00:23:56.796524087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:57.074683 systemd[1]: Started cri-containerd-e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c.scope - libcontainer container e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c. Jul 2 00:23:57.083464 systemd[1]: Started cri-containerd-ef297b2abe819a2233ba8bc0882f54f0715d07ba12806d99d33eccc2576c22fc.scope - libcontainer container ef297b2abe819a2233ba8bc0882f54f0715d07ba12806d99d33eccc2576c22fc. Jul 2 00:23:57.132811 containerd[1982]: time="2024-07-02T00:23:57.132611660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p8n78,Uid:bd297f40-e66d-4485-9da5-7fff69335c13,Namespace:calico-system,Attempt:0,} returns sandbox id \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\"" Jul 2 00:23:57.137203 containerd[1982]: time="2024-07-02T00:23:57.137145802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:23:57.140266 containerd[1982]: time="2024-07-02T00:23:57.140229065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vf5r5,Uid:93fc7db6-e5e9-47da-a777-e36905cf6057,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef297b2abe819a2233ba8bc0882f54f0715d07ba12806d99d33eccc2576c22fc\"" Jul 2 00:23:57.330698 kubelet[2415]: E0702 00:23:57.330574 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:58.331568 kubelet[2415]: E0702 00:23:58.331521 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:58.515042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742390753.mount: Deactivated successfully. 
Jul 2 00:23:58.537468 kubelet[2415]: E0702 00:23:58.537409 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:23:58.714298 containerd[1982]: time="2024-07-02T00:23:58.714163624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:58.724724 containerd[1982]: time="2024-07-02T00:23:58.722647985Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:58.724724 containerd[1982]: time="2024-07-02T00:23:58.722745351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=6588466" Jul 2 00:23:58.727241 containerd[1982]: time="2024-07-02T00:23:58.727178863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:58.728184 containerd[1982]: time="2024-07-02T00:23:58.727982535Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.590793845s" Jul 2 00:23:58.728184 containerd[1982]: time="2024-07-02T00:23:58.728021540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:23:58.729370 containerd[1982]: time="2024-07-02T00:23:58.729325419Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:23:58.730429 containerd[1982]: time="2024-07-02T00:23:58.730398222Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:23:58.758375 containerd[1982]: time="2024-07-02T00:23:58.758321384Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e\"" Jul 2 00:23:58.760626 containerd[1982]: time="2024-07-02T00:23:58.759132078Z" level=info msg="StartContainer for \"0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e\"" Jul 2 00:23:58.815979 systemd[1]: Started cri-containerd-0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e.scope - libcontainer container 0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e. 
Jul 2 00:23:58.873615 containerd[1982]: time="2024-07-02T00:23:58.873560742Z" level=info msg="StartContainer for \"0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e\" returns successfully" Jul 2 00:23:58.886274 systemd[1]: cri-containerd-0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e.scope: Deactivated successfully. Jul 2 00:23:59.037858 containerd[1982]: time="2024-07-02T00:23:59.037199904Z" level=info msg="shim disconnected" id=0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e namespace=k8s.io Jul 2 00:23:59.037858 containerd[1982]: time="2024-07-02T00:23:59.037263484Z" level=warning msg="cleaning up after shim disconnected" id=0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e namespace=k8s.io Jul 2 00:23:59.037858 containerd[1982]: time="2024-07-02T00:23:59.037275384Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:23:59.070751 containerd[1982]: time="2024-07-02T00:23:59.070685712Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:23:59Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:23:59.338429 kubelet[2415]: E0702 00:23:59.337666 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:59.517979 systemd[1]: run-containerd-runc-k8s.io-0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e-runc.4piAuV.mount: Deactivated successfully. Jul 2 00:23:59.518105 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b875eb032fe3057a6acb7554c99e42fb2207d205022b3892c99451bd78f777e-rootfs.mount: Deactivated successfully. Jul 2 00:24:00.109137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379124474.mount: Deactivated successfully. 
Jul 2 00:24:00.342577 kubelet[2415]: E0702 00:24:00.341077 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:00.537157 kubelet[2415]: E0702 00:24:00.536969 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:00.762075 containerd[1982]: time="2024-07-02T00:24:00.762016967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:00.764299 containerd[1982]: time="2024-07-02T00:24:00.764079443Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 00:24:00.767245 containerd[1982]: time="2024-07-02T00:24:00.766136262Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:00.770334 containerd[1982]: time="2024-07-02T00:24:00.770291998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:00.771203 containerd[1982]: time="2024-07-02T00:24:00.771162359Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.041646803s" Jul 2 00:24:00.771309 containerd[1982]: time="2024-07-02T00:24:00.771209731Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 00:24:00.773104 containerd[1982]: time="2024-07-02T00:24:00.773078568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:24:00.773808 containerd[1982]: time="2024-07-02T00:24:00.773775205Z" level=info msg="CreateContainer within sandbox \"ef297b2abe819a2233ba8bc0882f54f0715d07ba12806d99d33eccc2576c22fc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:24:00.809492 containerd[1982]: time="2024-07-02T00:24:00.809344508Z" level=info msg="CreateContainer within sandbox \"ef297b2abe819a2233ba8bc0882f54f0715d07ba12806d99d33eccc2576c22fc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb\"" Jul 2 00:24:00.810047 containerd[1982]: time="2024-07-02T00:24:00.809945762Z" level=info msg="StartContainer for \"f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb\"" Jul 2 00:24:00.854300 systemd[1]: run-containerd-runc-k8s.io-f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb-runc.tm2yr2.mount: Deactivated successfully. Jul 2 00:24:00.863687 systemd[1]: Started cri-containerd-f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb.scope - libcontainer container f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb. 
Jul 2 00:24:00.904986 containerd[1982]: time="2024-07-02T00:24:00.904938475Z" level=info msg="StartContainer for \"f0303c624c090e0647e33822aef079df7cba4efc19549d0a8ce4b8d81335edeb\" returns successfully" Jul 2 00:24:01.343063 kubelet[2415]: E0702 00:24:01.342987 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:01.838380 kubelet[2415]: I0702 00:24:01.826102 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vf5r5" podStartSLOduration=6.187766348 podCreationTimestamp="2024-07-02 00:23:52 +0000 UTC" firstStartedPulling="2024-07-02 00:23:57.141352305 +0000 UTC m=+6.880439048" lastFinishedPulling="2024-07-02 00:24:00.771990818 +0000 UTC m=+10.511077575" observedRunningTime="2024-07-02 00:24:01.818045265 +0000 UTC m=+11.557132029" watchObservedRunningTime="2024-07-02 00:24:01.818404875 +0000 UTC m=+11.557491639" Jul 2 00:24:02.343838 kubelet[2415]: E0702 00:24:02.343774 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:02.549974 kubelet[2415]: E0702 00:24:02.537923 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:03.345069 kubelet[2415]: E0702 00:24:03.345004 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:04.346036 kubelet[2415]: E0702 00:24:04.345962 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:04.537088 kubelet[2415]: E0702 00:24:04.537048 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:05.346913 kubelet[2415]: E0702 00:24:05.346855 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:06.196303 containerd[1982]: time="2024-07-02T00:24:06.196240665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:06.198296 containerd[1982]: time="2024-07-02T00:24:06.198065329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:24:06.201065 containerd[1982]: time="2024-07-02T00:24:06.200590396Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:06.204288 containerd[1982]: time="2024-07-02T00:24:06.204240165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:06.205268 containerd[1982]: time="2024-07-02T00:24:06.205224528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id 
\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.431925358s" Jul 2 00:24:06.205373 containerd[1982]: time="2024-07-02T00:24:06.205273351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:24:06.207618 containerd[1982]: time="2024-07-02T00:24:06.207576656Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:24:06.240319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321672397.mount: Deactivated successfully. Jul 2 00:24:06.255115 containerd[1982]: time="2024-07-02T00:24:06.255065425Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0\"" Jul 2 00:24:06.255878 containerd[1982]: time="2024-07-02T00:24:06.255843188Z" level=info msg="StartContainer for \"f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0\"" Jul 2 00:24:06.305690 systemd[1]: Started cri-containerd-f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0.scope - libcontainer container f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0. Jul 2 00:24:06.347642 kubelet[2415]: E0702 00:24:06.347594 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:06.348072 containerd[1982]: time="2024-07-02T00:24:06.347915655Z" level=info msg="StartContainer for \"f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0\" returns successfully" Jul 2 00:24:06.359117 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 00:24:06.537843 kubelet[2415]: E0702 00:24:06.537276 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:07.062036 systemd[1]: cri-containerd-f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0.scope: Deactivated successfully. Jul 2 00:24:07.111756 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0-rootfs.mount: Deactivated successfully. 
Jul 2 00:24:07.162710 kubelet[2415]: I0702 00:24:07.161996 2415 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:24:07.348878 kubelet[2415]: E0702 00:24:07.348634 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:07.803355 containerd[1982]: time="2024-07-02T00:24:07.802836757Z" level=info msg="shim disconnected" id=f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0 namespace=k8s.io Jul 2 00:24:07.803355 containerd[1982]: time="2024-07-02T00:24:07.803078542Z" level=warning msg="cleaning up after shim disconnected" id=f4cde8ac7438e8a4d6c640bbced60a1ed7ebea835feb70b7c8b8b3bbc72fc6e0 namespace=k8s.io Jul 2 00:24:07.803355 containerd[1982]: time="2024-07-02T00:24:07.803097462Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:24:08.349839 kubelet[2415]: E0702 00:24:08.349785 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:08.544254 systemd[1]: Created slice kubepods-besteffort-pod21a555aa_f137_4779_be5f_76c30fd4078f.slice - libcontainer container kubepods-besteffort-pod21a555aa_f137_4779_be5f_76c30fd4078f.slice. Jul 2 00:24:08.550086 containerd[1982]: time="2024-07-02T00:24:08.550039991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfsz4,Uid:21a555aa-f137-4779-be5f-76c30fd4078f,Namespace:calico-system,Attempt:0,}" Jul 2 00:24:08.643143 containerd[1982]: time="2024-07-02T00:24:08.643017204Z" level=error msg="Failed to destroy network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:08.645383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8-shm.mount: Deactivated successfully. 
Jul 2 00:24:08.645831 containerd[1982]: time="2024-07-02T00:24:08.645776002Z" level=error msg="encountered an error cleaning up failed sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:08.645913 containerd[1982]: time="2024-07-02T00:24:08.645859314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfsz4,Uid:21a555aa-f137-4779-be5f-76c30fd4078f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:08.646653 kubelet[2415]: E0702 00:24:08.646191 2415 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:08.646653 kubelet[2415]: E0702 00:24:08.646261 2415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:24:08.646653 kubelet[2415]: E0702 00:24:08.646287 2415 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cfsz4" Jul 2 00:24:08.646838 kubelet[2415]: E0702 00:24:08.646347 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cfsz4_calico-system(21a555aa-f137-4779-be5f-76c30fd4078f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cfsz4_calico-system(21a555aa-f137-4779-be5f-76c30fd4078f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:08.700922 containerd[1982]: time="2024-07-02T00:24:08.700876596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:24:08.701606 kubelet[2415]: I0702 00:24:08.701576 2415 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:08.702245 containerd[1982]: time="2024-07-02T00:24:08.702212924Z" level=info msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" Jul 2 00:24:08.702589 containerd[1982]: time="2024-07-02T00:24:08.702555884Z" level=info msg="Ensure that sandbox 62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8 in task-service has been cleanup successfully" Jul 2 00:24:08.736881 containerd[1982]: time="2024-07-02T00:24:08.736757860Z" level=error msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" failed" error="failed to destroy network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:08.738098 kubelet[2415]: E0702 00:24:08.737903 2415 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:08.738098 kubelet[2415]: E0702 00:24:08.737982 2415 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8"} Jul 2 00:24:08.738098 kubelet[2415]: E0702 00:24:08.738030 2415 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"21a555aa-f137-4779-be5f-76c30fd4078f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:08.738098 kubelet[2415]: E0702 00:24:08.738070 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"21a555aa-f137-4779-be5f-76c30fd4078f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cfsz4" podUID="21a555aa-f137-4779-be5f-76c30fd4078f" Jul 2 00:24:09.351018 kubelet[2415]: E0702 00:24:09.350962 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:10.351850 kubelet[2415]: E0702 00:24:10.351785 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:11.326748 kubelet[2415]: E0702 00:24:11.326694 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:11.352935 kubelet[2415]: E0702 00:24:11.352886 2415 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:12.354169 kubelet[2415]: E0702 00:24:12.354052 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:13.218061 kubelet[2415]: I0702 00:24:13.217902 2415 topology_manager.go:215] "Topology Admit Handler" podUID="026c6050-7b15-41b3-a391-397e0a5ded8a" podNamespace="default" podName="nginx-deployment-6d5f899847-275pf" Jul 2 00:24:13.233294 systemd[1]: Created slice kubepods-besteffort-pod026c6050_7b15_41b3_a391_397e0a5ded8a.slice - libcontainer container kubepods-besteffort-pod026c6050_7b15_41b3_a391_397e0a5ded8a.slice. Jul 2 00:24:13.296282 kubelet[2415]: I0702 00:24:13.296137 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56qzp\" (UniqueName: \"kubernetes.io/projected/026c6050-7b15-41b3-a391-397e0a5ded8a-kube-api-access-56qzp\") pod \"nginx-deployment-6d5f899847-275pf\" (UID: \"026c6050-7b15-41b3-a391-397e0a5ded8a\") " pod="default/nginx-deployment-6d5f899847-275pf" Jul 2 00:24:13.355705 kubelet[2415]: E0702 00:24:13.355670 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:13.541914 containerd[1982]: time="2024-07-02T00:24:13.541604640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-275pf,Uid:026c6050-7b15-41b3-a391-397e0a5ded8a,Namespace:default,Attempt:0,}" Jul 2 00:24:13.732160 containerd[1982]: time="2024-07-02T00:24:13.731917559Z" level=error msg="Failed to destroy network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:13.737936 containerd[1982]: time="2024-07-02T00:24:13.737778008Z" level=error msg="encountered an error cleaning up failed sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:13.737936 containerd[1982]: time="2024-07-02T00:24:13.737878081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-275pf,Uid:026c6050-7b15-41b3-a391-397e0a5ded8a,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:13.740265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50-shm.mount: Deactivated successfully. 
Jul 2 00:24:13.743048 kubelet[2415]: E0702 00:24:13.741231 2415 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:24:13.743048 kubelet[2415]: E0702 00:24:13.741297 2415 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-275pf" Jul 2 00:24:13.743048 kubelet[2415]: E0702 00:24:13.741326 2415 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-275pf" Jul 2 00:24:13.743270 kubelet[2415]: E0702 00:24:13.741397 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-275pf_default(026c6050-7b15-41b3-a391-397e0a5ded8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-275pf_default(026c6050-7b15-41b3-a391-397e0a5ded8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-275pf" podUID="026c6050-7b15-41b3-a391-397e0a5ded8a" Jul 2 00:24:14.356247 kubelet[2415]: E0702 00:24:14.356091 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:14.721933 kubelet[2415]: I0702 00:24:14.721893 2415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:14.722990 containerd[1982]: time="2024-07-02T00:24:14.722874188Z" level=info msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" Jul 2 00:24:14.725343 containerd[1982]: time="2024-07-02T00:24:14.724841652Z" level=info msg="Ensure that sandbox 75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50 in task-service has been cleanup successfully" Jul 2 00:24:14.805505 containerd[1982]: time="2024-07-02T00:24:14.805437795Z" level=error msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" failed" error="failed to destroy network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 
00:24:14.806400 kubelet[2415]: E0702 00:24:14.806284 2415 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:14.806535 kubelet[2415]: E0702 00:24:14.806413 2415 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50"} Jul 2 00:24:14.806535 kubelet[2415]: E0702 00:24:14.806487 2415 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"026c6050-7b15-41b3-a391-397e0a5ded8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:24:14.806535 kubelet[2415]: E0702 00:24:14.806528 2415 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"026c6050-7b15-41b3-a391-397e0a5ded8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-275pf" podUID="026c6050-7b15-41b3-a391-397e0a5ded8a" Jul 2 00:24:15.357295 kubelet[2415]: E0702 00:24:15.357258 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:16.175876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999707028.mount: Deactivated successfully. 
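Both sandbox failures above (csi-node-driver-cfsz4 and nginx-deployment-6d5f899847-275pf) fail for the reason the error message spells out: the Calico CNI plugin reads the node name from /var/lib/calico/nodename, and that file only appears once the calico/node container is running, which does not happen until 00:24:16 below. A minimal sketch of the check the plugin is effectively making (illustrative Python, not Calico's actual Go code):

```python
import os

NODENAME_FILE = "/var/lib/calico/nodename"

def detect_node_name() -> str:
    # Until calico/node starts and writes this file, every RunPodSandbox
    # attempt fails with the error seen above; the kubelet then retries the
    # sandbox later (Attempt:0 -> Attempt:1).
    if not os.path.exists(NODENAME_FILE):
        raise RuntimeError(
            f"stat {NODENAME_FILE}: no such file or directory: check that the "
            "calico/node container is running and has mounted /var/lib/calico/"
        )
    with open(NODENAME_FILE) as f:
        return f.read().strip()
```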
Jul 2 00:24:16.337038 containerd[1982]: time="2024-07-02T00:24:16.336985074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.347538 containerd[1982]: time="2024-07-02T00:24:16.347468585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:24:16.354386 containerd[1982]: time="2024-07-02T00:24:16.354116819Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.358005 containerd[1982]: time="2024-07-02T00:24:16.357878611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:16.358292 kubelet[2415]: E0702 00:24:16.358211 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:16.359917 containerd[1982]: time="2024-07-02T00:24:16.358742562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 7.657811465s" Jul 2 00:24:16.359917 containerd[1982]: time="2024-07-02T00:24:16.358790505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:24:16.378680 containerd[1982]: time="2024-07-02T00:24:16.378637803Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:24:16.444433 containerd[1982]: time="2024-07-02T00:24:16.444317236Z" level=info msg="CreateContainer within sandbox \"e54bc5a304bd9e3cd618ba7afefd7a2c5a7ed1307107448a69138d9378fde23c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9\"" Jul 2 00:24:16.446567 containerd[1982]: time="2024-07-02T00:24:16.445405564Z" level=info msg="StartContainer for \"3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9\"" Jul 2 00:24:16.446741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365843474.mount: Deactivated successfully. Jul 2 00:24:16.628706 systemd[1]: Started cri-containerd-3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9.scope - libcontainer container 3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9. Jul 2 00:24:16.701493 containerd[1982]: time="2024-07-02T00:24:16.701272864Z" level=info msg="StartContainer for \"3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9\" returns successfully" Jul 2 00:24:16.902996 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:24:16.903165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
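For scale, the calico/node pull logged above moved 115238750 bytes in 7.657811465s, about 15 MB/s of compressed layer data; a back-of-the-envelope figure that ignores unpack time:

```python
# Effective throughput of the ghcr.io/flatcar/calico/node:v3.28.0 pull above.
bytes_read = 115_238_750
seconds = 7.657811465
print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~15.0 MB/s
```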
Jul 2 00:24:17.359181 kubelet[2415]: E0702 00:24:17.359147 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:17.759153 systemd[1]: run-containerd-runc-k8s.io-3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9-runc.3zAiqy.mount: Deactivated successfully. Jul 2 00:24:18.359599 kubelet[2415]: E0702 00:24:18.359550 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:18.900492 kernel: Initializing XFRM netlink socket Jul 2 00:24:19.091913 (udev-worker)[3037]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:19.092435 systemd-networkd[1812]: vxlan.calico: Link UP Jul 2 00:24:19.092440 systemd-networkd[1812]: vxlan.calico: Gained carrier Jul 2 00:24:19.138771 (udev-worker)[3235]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:19.360079 kubelet[2415]: E0702 00:24:19.359871 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:20.360746 kubelet[2415]: E0702 00:24:20.360689 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:20.401563 systemd-networkd[1812]: vxlan.calico: Gained IPv6LL Jul 2 00:24:20.734231 update_engine[1946]: I0702 00:24:20.734174 1946 update_attempter.cc:509] Updating boot flags... Jul 2 00:24:20.832486 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3241) Jul 2 00:24:21.088560 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3242) Jul 2 00:24:21.361234 kubelet[2415]: E0702 00:24:21.361186 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:22.362498 kubelet[2415]: E0702 00:24:22.362056 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:22.537917 containerd[1982]: time="2024-07-02T00:24:22.537422771Z" level=info msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" Jul 2 00:24:22.739105 kubelet[2415]: I0702 00:24:22.738576 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-p8n78" podStartSLOduration=11.515266576 podCreationTimestamp="2024-07-02 00:23:52 +0000 UTC" firstStartedPulling="2024-07-02 00:23:57.135921335 +0000 UTC m=+6.875008084" lastFinishedPulling="2024-07-02 00:24:16.359145444 +0000 UTC m=+26.098232197" observedRunningTime="2024-07-02 00:24:16.757636999 +0000 UTC m=+26.496723764" watchObservedRunningTime="2024-07-02 00:24:22.738490689 +0000 UTC m=+32.477577456" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.736 [INFO][3468] k8s.go 608: Cleaning up netns ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.738 [INFO][3468] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" iface="eth0" netns="/var/run/netns/cni-f8e4f972-f3f1-2d29-774d-48653d00676d" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.738 [INFO][3468] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" iface="eth0" netns="/var/run/netns/cni-f8e4f972-f3f1-2d29-774d-48653d00676d" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.741 [INFO][3468] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" iface="eth0" netns="/var/run/netns/cni-f8e4f972-f3f1-2d29-774d-48653d00676d" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.741 [INFO][3468] k8s.go 615: Releasing IP address(es) ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.741 [INFO][3468] utils.go 188: Calico CNI releasing IP address ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.810 [INFO][3474] ipam_plugin.go 411: Releasing address using handleID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.810 [INFO][3474] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.810 [INFO][3474] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.825 [WARNING][3474] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.825 [INFO][3474] ipam_plugin.go 439: Releasing address using workloadID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.827 [INFO][3474] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:22.830572 containerd[1982]: 2024-07-02 00:24:22.828 [INFO][3468] k8s.go 621: Teardown processing complete. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:22.833197 containerd[1982]: time="2024-07-02T00:24:22.832587050Z" level=info msg="TearDown network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" successfully" Jul 2 00:24:22.833197 containerd[1982]: time="2024-07-02T00:24:22.832627724Z" level=info msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" returns successfully" Jul 2 00:24:22.833578 systemd[1]: run-netns-cni\x2df8e4f972\x2df3f1\x2d2d29\x2d774d\x2d48653d00676d.mount: Deactivated successfully. 
Jul 2 00:24:22.837609 containerd[1982]: time="2024-07-02T00:24:22.837566802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfsz4,Uid:21a555aa-f137-4779-be5f-76c30fd4078f,Namespace:calico-system,Attempt:1,}" Jul 2 00:24:23.073580 systemd-networkd[1812]: caliae7826e0a50: Link UP Jul 2 00:24:23.077333 systemd-networkd[1812]: caliae7826e0a50: Gained carrier Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:22.928 [INFO][3480] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.189-k8s-csi--node--driver--cfsz4-eth0 csi-node-driver- calico-system 21a555aa-f137-4779-be5f-76c30fd4078f 928 0 2024-07-02 00:23:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.19.189 csi-node-driver-cfsz4 eth0 default [] [] [kns.calico-system ksa.calico-system.default] caliae7826e0a50 [] []}} ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:22.928 [INFO][3480] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:22.974 [INFO][3491] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" HandleID="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.001 [INFO][3491] ipam_plugin.go 264: Auto assigning IP ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" HandleID="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050700), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.19.189", "pod":"csi-node-driver-cfsz4", "timestamp":"2024-07-02 00:24:22.974036813 +0000 UTC"}, Hostname:"172.31.19.189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.001 [INFO][3491] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.001 [INFO][3491] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.001 [INFO][3491] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.189' Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.009 [INFO][3491] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.017 [INFO][3491] ipam.go 372: Looking up existing affinities for host host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.028 [INFO][3491] ipam.go 489: Trying affinity for 192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.035 [INFO][3491] ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.040 [INFO][3491] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.040 [INFO][3491] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.043 [INFO][3491] ipam.go 1685: Creating new handle: k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.048 [INFO][3491] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.060 [INFO][3491] ipam.go 1216: Successfully claimed IPs: [192.168.32.193/26] block=192.168.32.192/26 handle="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.060 [INFO][3491] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.193/26] handle="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" host="172.31.19.189" Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.060 [INFO][3491] ipam_plugin.go 373: Released host-wide IPAM lock. 
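The IPAM trace above claims 192.168.32.193 out of the affinity block 192.168.32.192/26 for csi-node-driver-cfsz4; the nginx pod further down gets 192.168.32.194 from the same block. The block arithmetic is easy to sanity-check:

```python
import ipaddress

block = ipaddress.ip_network("192.168.32.192/26")
print(block.num_addresses)                               # 64 addresses per affinity block
print(block[0], block[1], block[2])                      # 192.168.32.192 192.168.32.193 192.168.32.194
print(ipaddress.ip_address("192.168.32.193") in block)   # True
```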
Jul 2 00:24:23.108246 containerd[1982]: 2024-07-02 00:24:23.060 [INFO][3491] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.193/26] IPv6=[] ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" HandleID="k8s-pod-network.c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.065 [INFO][3480] k8s.go 386: Populated endpoint ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-csi--node--driver--cfsz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21a555aa-f137-4779-be5f-76c30fd4078f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"", Pod:"csi-node-driver-cfsz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliae7826e0a50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.066 [INFO][3480] k8s.go 387: Calico CNI using IPs: [192.168.32.193/32] ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.066 [INFO][3480] dataplane_linux.go 68: Setting the host side veth name to caliae7826e0a50 ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.071 [INFO][3480] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.073 [INFO][3480] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-csi--node--driver--cfsz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21a555aa-f137-4779-be5f-76c30fd4078f", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff", Pod:"csi-node-driver-cfsz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliae7826e0a50", MAC:"36:ca:e6:0b:12:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:23.110694 containerd[1982]: 2024-07-02 00:24:23.106 [INFO][3480] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff" Namespace="calico-system" Pod="csi-node-driver-cfsz4" WorkloadEndpoint="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:23.176924 containerd[1982]: time="2024-07-02T00:24:23.176428544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:23.176924 containerd[1982]: time="2024-07-02T00:24:23.176576743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:23.176924 containerd[1982]: time="2024-07-02T00:24:23.176609689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:23.176924 containerd[1982]: time="2024-07-02T00:24:23.176626512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:23.222703 systemd[1]: Started cri-containerd-c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff.scope - libcontainer container c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff. 
Jul 2 00:24:23.256316 containerd[1982]: time="2024-07-02T00:24:23.256169567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cfsz4,Uid:21a555aa-f137-4779-be5f-76c30fd4078f,Namespace:calico-system,Attempt:1,} returns sandbox id \"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff\"" Jul 2 00:24:23.262360 containerd[1982]: time="2024-07-02T00:24:23.262247529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:24:23.363542 kubelet[2415]: E0702 00:24:23.363378 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:23.833012 systemd[1]: run-containerd-runc-k8s.io-c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff-runc.n2AABE.mount: Deactivated successfully. Jul 2 00:24:24.364437 kubelet[2415]: E0702 00:24:24.364300 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:24.646170 containerd[1982]: time="2024-07-02T00:24:24.646058884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.648188 containerd[1982]: time="2024-07-02T00:24:24.648124701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:24:24.653011 containerd[1982]: time="2024-07-02T00:24:24.651972460Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.659793 containerd[1982]: time="2024-07-02T00:24:24.659704498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:24.661778 containerd[1982]: time="2024-07-02T00:24:24.660795509Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.39849685s" Jul 2 00:24:24.661778 containerd[1982]: time="2024-07-02T00:24:24.660843110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:24:24.663266 containerd[1982]: time="2024-07-02T00:24:24.663230809Z" level=info msg="CreateContainer within sandbox \"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:24:24.750766 systemd-networkd[1812]: caliae7826e0a50: Gained IPv6LL Jul 2 00:24:24.769205 containerd[1982]: time="2024-07-02T00:24:24.765999898Z" level=info msg="CreateContainer within sandbox \"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256\"" Jul 2 00:24:24.770752 containerd[1982]: time="2024-07-02T00:24:24.769512451Z" level=info msg="StartContainer for \"3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256\"" Jul 2 00:24:24.852825 systemd[1]: 
run-containerd-runc-k8s.io-3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256-runc.dzSs24.mount: Deactivated successfully. Jul 2 00:24:24.861680 systemd[1]: Started cri-containerd-3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256.scope - libcontainer container 3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256. Jul 2 00:24:24.912631 containerd[1982]: time="2024-07-02T00:24:24.912308525Z" level=info msg="StartContainer for \"3ac43e7c4429acb595a224fa1684d5587c49a17153e04681aa930d610242b256\" returns successfully" Jul 2 00:24:24.916196 containerd[1982]: time="2024-07-02T00:24:24.915755097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:24:25.365617 kubelet[2415]: E0702 00:24:25.365357 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:26.361241 containerd[1982]: time="2024-07-02T00:24:26.361188491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:26.363067 containerd[1982]: time="2024-07-02T00:24:26.362903866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:24:26.365549 containerd[1982]: time="2024-07-02T00:24:26.365175370Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:26.366292 kubelet[2415]: E0702 00:24:26.366261 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:26.368715 containerd[1982]: time="2024-07-02T00:24:26.368647413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:26.369536 containerd[1982]: time="2024-07-02T00:24:26.369339574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.453534562s" Jul 2 00:24:26.369536 containerd[1982]: time="2024-07-02T00:24:26.369384632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:24:26.372396 containerd[1982]: time="2024-07-02T00:24:26.372354513Z" level=info msg="CreateContainer within sandbox \"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:24:26.405294 containerd[1982]: time="2024-07-02T00:24:26.405240311Z" level=info msg="CreateContainer within sandbox \"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"55af0d56d5366420fe58b992bae2a5a9f98dcae70418c93b42d85edb5cce2ef1\"" Jul 2 00:24:26.406234 containerd[1982]: time="2024-07-02T00:24:26.405933113Z" level=info 
msg="StartContainer for \"55af0d56d5366420fe58b992bae2a5a9f98dcae70418c93b42d85edb5cce2ef1\"" Jul 2 00:24:26.450684 systemd[1]: Started cri-containerd-55af0d56d5366420fe58b992bae2a5a9f98dcae70418c93b42d85edb5cce2ef1.scope - libcontainer container 55af0d56d5366420fe58b992bae2a5a9f98dcae70418c93b42d85edb5cce2ef1. Jul 2 00:24:26.491256 containerd[1982]: time="2024-07-02T00:24:26.490957110Z" level=info msg="StartContainer for \"55af0d56d5366420fe58b992bae2a5a9f98dcae70418c93b42d85edb5cce2ef1\" returns successfully" Jul 2 00:24:26.543473 kubelet[2415]: I0702 00:24:26.543426 2415 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:24:26.543473 kubelet[2415]: I0702 00:24:26.543480 2415 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:24:26.842242 kubelet[2415]: I0702 00:24:26.842191 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-cfsz4" podStartSLOduration=31.734255814 podCreationTimestamp="2024-07-02 00:23:52 +0000 UTC" firstStartedPulling="2024-07-02 00:24:23.26173404 +0000 UTC m=+33.000820784" lastFinishedPulling="2024-07-02 00:24:26.369634306 +0000 UTC m=+36.108721065" observedRunningTime="2024-07-02 00:24:26.841880481 +0000 UTC m=+36.580967267" watchObservedRunningTime="2024-07-02 00:24:26.842156095 +0000 UTC m=+36.581242859" Jul 2 00:24:27.273604 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.32.192:123 Jul 2 00:24:27.273698 ntpd[1940]: Listen normally on 8 vxlan.calico [fe80::644d:72ff:feb9:621d%3]:123 Jul 2 00:24:27.274110 ntpd[1940]: 2 Jul 00:24:27 ntpd[1940]: Listen normally on 7 vxlan.calico 192.168.32.192:123 Jul 2 00:24:27.274110 ntpd[1940]: 2 Jul 00:24:27 ntpd[1940]: Listen normally on 8 vxlan.calico [fe80::644d:72ff:feb9:621d%3]:123 Jul 2 00:24:27.274110 ntpd[1940]: 2 Jul 00:24:27 ntpd[1940]: Listen normally on 9 caliae7826e0a50 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:24:27.273772 ntpd[1940]: Listen normally on 9 caliae7826e0a50 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:24:27.367790 kubelet[2415]: E0702 00:24:27.367743 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:28.369007 kubelet[2415]: E0702 00:24:28.368764 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:28.537638 containerd[1982]: time="2024-07-02T00:24:28.537543200Z" level=info msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.629 [INFO][3649] k8s.go 608: Cleaning up netns ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.629 [INFO][3649] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" iface="eth0" netns="/var/run/netns/cni-f67cbba4-0d52-5a10-4aef-11fb7b76059a" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.630 [INFO][3649] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" iface="eth0" netns="/var/run/netns/cni-f67cbba4-0d52-5a10-4aef-11fb7b76059a" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.631 [INFO][3649] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" iface="eth0" netns="/var/run/netns/cni-f67cbba4-0d52-5a10-4aef-11fb7b76059a" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.631 [INFO][3649] k8s.go 615: Releasing IP address(es) ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.632 [INFO][3649] utils.go 188: Calico CNI releasing IP address ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.676 [INFO][3655] ipam_plugin.go 411: Releasing address using handleID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.681 [INFO][3655] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.681 [INFO][3655] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.689 [WARNING][3655] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.689 [INFO][3655] ipam_plugin.go 439: Releasing address using workloadID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.692 [INFO][3655] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:28.695361 containerd[1982]: 2024-07-02 00:24:28.693 [INFO][3649] k8s.go 621: Teardown processing complete. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:28.704555 containerd[1982]: time="2024-07-02T00:24:28.702668892Z" level=info msg="TearDown network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" successfully" Jul 2 00:24:28.704555 containerd[1982]: time="2024-07-02T00:24:28.702835279Z" level=info msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" returns successfully" Jul 2 00:24:28.704555 containerd[1982]: time="2024-07-02T00:24:28.703858090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-275pf,Uid:026c6050-7b15-41b3-a391-397e0a5ded8a,Namespace:default,Attempt:1,}" Jul 2 00:24:28.703639 systemd[1]: run-netns-cni\x2df67cbba4\x2d0d52\x2d5a10\x2d4aef\x2d11fb7b76059a.mount: Deactivated successfully. 
Jul 2 00:24:28.952412 systemd-networkd[1812]: cali5d79c5c474c: Link UP Jul 2 00:24:28.952775 systemd-networkd[1812]: cali5d79c5c474c: Gained carrier Jul 2 00:24:28.955716 (udev-worker)[3681]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.800 [INFO][3663] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0 nginx-deployment-6d5f899847- default 026c6050-7b15-41b3-a391-397e0a5ded8a 959 0 2024-07-02 00:24:13 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.19.189 nginx-deployment-6d5f899847-275pf eth0 default [] [] [kns.default ksa.default.default] cali5d79c5c474c [] []}} ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.800 [INFO][3663] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.840 [INFO][3673] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" HandleID="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.867 [INFO][3673] ipam_plugin.go 264: Auto assigning IP ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" HandleID="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265ed0), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.189", "pod":"nginx-deployment-6d5f899847-275pf", "timestamp":"2024-07-02 00:24:28.84077497 +0000 UTC"}, Hostname:"172.31.19.189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.867 [INFO][3673] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.867 [INFO][3673] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.867 [INFO][3673] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.189' Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.879 [INFO][3673] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.902 [INFO][3673] ipam.go 372: Looking up existing affinities for host host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.919 [INFO][3673] ipam.go 489: Trying affinity for 192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.925 [INFO][3673] ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.930 [INFO][3673] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.930 [INFO][3673] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.933 [INFO][3673] ipam.go 1685: Creating new handle: k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364 Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.938 [INFO][3673] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.945 [INFO][3673] ipam.go 1216: Successfully claimed IPs: [192.168.32.194/26] block=192.168.32.192/26 handle="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.945 [INFO][3673] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.194/26] handle="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" host="172.31.19.189" Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.946 [INFO][3673] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:24:28.972291 containerd[1982]: 2024-07-02 00:24:28.946 [INFO][3673] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.194/26] IPv6=[] ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" HandleID="k8s-pod-network.facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.948 [INFO][3663] k8s.go 386: Populated endpoint ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"026c6050-7b15-41b3-a391-397e0a5ded8a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"", Pod:"nginx-deployment-6d5f899847-275pf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5d79c5c474c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.948 [INFO][3663] k8s.go 387: Calico CNI using IPs: [192.168.32.194/32] ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.948 [INFO][3663] dataplane_linux.go 68: Setting the host side veth name to cali5d79c5c474c ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.951 [INFO][3663] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.953 [INFO][3663] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"026c6050-7b15-41b3-a391-397e0a5ded8a", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364", Pod:"nginx-deployment-6d5f899847-275pf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5d79c5c474c", MAC:"ba:a9:79:95:a4:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:28.975297 containerd[1982]: 2024-07-02 00:24:28.962 [INFO][3663] k8s.go 500: Wrote updated endpoint to datastore ContainerID="facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364" Namespace="default" Pod="nginx-deployment-6d5f899847-275pf" WorkloadEndpoint="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:29.005446 containerd[1982]: time="2024-07-02T00:24:29.005291482Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:29.005446 containerd[1982]: time="2024-07-02T00:24:29.005364971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.005446 containerd[1982]: time="2024-07-02T00:24:29.005392955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:29.005446 containerd[1982]: time="2024-07-02T00:24:29.005415262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:29.042636 systemd[1]: Started cri-containerd-facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364.scope - libcontainer container facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364. 
Jul 2 00:24:29.096857 containerd[1982]: time="2024-07-02T00:24:29.096814010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-275pf,Uid:026c6050-7b15-41b3-a391-397e0a5ded8a,Namespace:default,Attempt:1,} returns sandbox id \"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364\"" Jul 2 00:24:29.099895 containerd[1982]: time="2024-07-02T00:24:29.099860038Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:24:29.369589 kubelet[2415]: E0702 00:24:29.369435 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:30.370279 kubelet[2415]: E0702 00:24:30.370214 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:30.638999 systemd-networkd[1812]: cali5d79c5c474c: Gained IPv6LL Jul 2 00:24:31.326541 kubelet[2415]: E0702 00:24:31.326431 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:31.370890 kubelet[2415]: E0702 00:24:31.370811 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:32.371737 kubelet[2415]: E0702 00:24:32.371477 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:32.510966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3070480348.mount: Deactivated successfully. Jul 2 00:24:33.273638 ntpd[1940]: Listen normally on 10 cali5d79c5c474c [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:24:33.274756 ntpd[1940]: 2 Jul 00:24:33 ntpd[1940]: Listen normally on 10 cali5d79c5c474c [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:24:33.372522 kubelet[2415]: E0702 00:24:33.372486 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:34.244254 containerd[1982]: time="2024-07-02T00:24:34.244192899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:34.246319 containerd[1982]: time="2024-07-02T00:24:34.246233659Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71000000" Jul 2 00:24:34.248643 containerd[1982]: time="2024-07-02T00:24:34.248575927Z" level=info msg="ImageCreate event name:\"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:34.253376 containerd[1982]: time="2024-07-02T00:24:34.253329069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:34.254541 containerd[1982]: time="2024-07-02T00:24:34.254497935Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"70999878\" in 5.154363516s" Jul 2 00:24:34.254541 containerd[1982]: time="2024-07-02T00:24:34.254539006Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 00:24:34.257594 containerd[1982]: time="2024-07-02T00:24:34.257556830Z" level=info msg="CreateContainer within sandbox \"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 00:24:34.293231 containerd[1982]: time="2024-07-02T00:24:34.293178280Z" level=info msg="CreateContainer within sandbox \"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb\"" Jul 2 00:24:34.294131 containerd[1982]: time="2024-07-02T00:24:34.294069595Z" level=info msg="StartContainer for \"2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb\"" Jul 2 00:24:34.358509 systemd[1]: run-containerd-runc-k8s.io-2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb-runc.42h8YX.mount: Deactivated successfully. Jul 2 00:24:34.373567 kubelet[2415]: E0702 00:24:34.373526 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:34.373767 systemd[1]: Started cri-containerd-2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb.scope - libcontainer container 2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb. Jul 2 00:24:34.418588 containerd[1982]: time="2024-07-02T00:24:34.418536622Z" level=info msg="StartContainer for \"2f0660fc4132ab1feeaffecec5abefffa58011bf43cf6c4a8c0a23967ea87bdb\" returns successfully" Jul 2 00:24:34.856147 kubelet[2415]: I0702 00:24:34.856105 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-275pf" podStartSLOduration=16.700663275 podCreationTimestamp="2024-07-02 00:24:13 +0000 UTC" firstStartedPulling="2024-07-02 00:24:29.09961076 +0000 UTC m=+38.838697519" lastFinishedPulling="2024-07-02 00:24:34.254941283 +0000 UTC m=+43.994028038" observedRunningTime="2024-07-02 00:24:34.855991459 +0000 UTC m=+44.595078223" watchObservedRunningTime="2024-07-02 00:24:34.855993794 +0000 UTC m=+44.595080557" Jul 2 00:24:35.373724 kubelet[2415]: E0702 00:24:35.373664 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:36.374254 kubelet[2415]: E0702 00:24:36.373963 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:37.374191 kubelet[2415]: E0702 00:24:37.374139 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:38.375067 kubelet[2415]: E0702 00:24:38.375014 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:39.222053 kubelet[2415]: I0702 00:24:39.221369 2415 topology_manager.go:215] "Topology Admit Handler" podUID="e0ebe74b-0dc0-4a54-be1c-95cb383fee1d" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 00:24:39.234109 systemd[1]: Created slice kubepods-besteffort-pode0ebe74b_0dc0_4a54_be1c_95cb383fee1d.slice - libcontainer container kubepods-besteffort-pode0ebe74b_0dc0_4a54_be1c_95cb383fee1d.slice. 
Jul 2 00:24:39.307645 kubelet[2415]: I0702 00:24:39.307585 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh68n\" (UniqueName: \"kubernetes.io/projected/e0ebe74b-0dc0-4a54-be1c-95cb383fee1d-kube-api-access-zh68n\") pod \"nfs-server-provisioner-0\" (UID: \"e0ebe74b-0dc0-4a54-be1c-95cb383fee1d\") " pod="default/nfs-server-provisioner-0" Jul 2 00:24:39.307645 kubelet[2415]: I0702 00:24:39.307644 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/e0ebe74b-0dc0-4a54-be1c-95cb383fee1d-data\") pod \"nfs-server-provisioner-0\" (UID: \"e0ebe74b-0dc0-4a54-be1c-95cb383fee1d\") " pod="default/nfs-server-provisioner-0" Jul 2 00:24:39.375825 kubelet[2415]: E0702 00:24:39.375787 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:39.542094 containerd[1982]: time="2024-07-02T00:24:39.541965991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0ebe74b-0dc0-4a54-be1c-95cb383fee1d,Namespace:default,Attempt:0,}" Jul 2 00:24:39.972943 (udev-worker)[3839]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:39.974655 systemd-networkd[1812]: cali60e51b789ff: Link UP Jul 2 00:24:39.976885 systemd-networkd[1812]: cali60e51b789ff: Gained carrier Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.832 [INFO][3856] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.189-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default e0ebe74b-0dc0-4a54-be1c-95cb383fee1d 1009 0 2024-07-02 00:24:39 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.19.189 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.832 [INFO][3856] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.898 [INFO][3866] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" HandleID="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Workload="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 
00:24:39.914 [INFO][3866] ipam_plugin.go 264: Auto assigning IP ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" HandleID="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Workload="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002912d0), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.189", "pod":"nfs-server-provisioner-0", "timestamp":"2024-07-02 00:24:39.898969949 +0000 UTC"}, Hostname:"172.31.19.189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.914 [INFO][3866] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.915 [INFO][3866] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.915 [INFO][3866] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.189' Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.917 [INFO][3866] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.929 [INFO][3866] ipam.go 372: Looking up existing affinities for host host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.939 [INFO][3866] ipam.go 489: Trying affinity for 192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.942 [INFO][3866] ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.946 [INFO][3866] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.946 [INFO][3866] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.948 [INFO][3866] ipam.go 1685: Creating new handle: k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.954 [INFO][3866] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.962 [INFO][3866] ipam.go 1216: Successfully claimed IPs: [192.168.32.195/26] block=192.168.32.192/26 handle="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.962 [INFO][3866] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.195/26] handle="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" host="172.31.19.189" Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.962 [INFO][3866] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:24:40.029920 containerd[1982]: 2024-07-02 00:24:39.964 [INFO][3866] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.195/26] IPv6=[] ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" HandleID="k8s-pod-network.be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Workload="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.033189 containerd[1982]: 2024-07-02 00:24:39.967 [INFO][3856] k8s.go 386: Populated endpoint ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0ebe74b-0dc0-4a54-be1c-95cb383fee1d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.033189 containerd[1982]: 2024-07-02 00:24:39.967 [INFO][3856] k8s.go 387: Calico CNI using IPs: [192.168.32.195/32] ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.033189 containerd[1982]: 2024-07-02 00:24:39.967 [INFO][3856] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.033189 containerd[1982]: 2024-07-02 00:24:39.978 [INFO][3856] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.033699 containerd[1982]: 2024-07-02 00:24:39.979 [INFO][3856] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"e0ebe74b-0dc0-4a54-be1c-95cb383fee1d", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.32.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"86:2c:9f:93:66:f3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:40.033699 containerd[1982]: 2024-07-02 00:24:39.999 [INFO][3856] k8s.go 500: Wrote updated endpoint to datastore ContainerID="be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.19.189-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:24:40.090800 containerd[1982]: time="2024-07-02T00:24:40.090678615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:40.090800 containerd[1982]: time="2024-07-02T00:24:40.090748516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.091233 containerd[1982]: time="2024-07-02T00:24:40.090782104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:40.091233 containerd[1982]: time="2024-07-02T00:24:40.090809055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:40.130770 systemd[1]: Started cri-containerd-be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde.scope - libcontainer container be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde. 
Jul 2 00:24:40.183049 containerd[1982]: time="2024-07-02T00:24:40.183012335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:e0ebe74b-0dc0-4a54-be1c-95cb383fee1d,Namespace:default,Attempt:0,} returns sandbox id \"be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde\"" Jul 2 00:24:40.185347 containerd[1982]: time="2024-07-02T00:24:40.185274991Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 00:24:40.376339 kubelet[2415]: E0702 00:24:40.376298 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:40.443214 systemd[1]: run-containerd-runc-k8s.io-be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde-runc.vw3ya3.mount: Deactivated successfully. Jul 2 00:24:41.199750 systemd-networkd[1812]: cali60e51b789ff: Gained IPv6LL Jul 2 00:24:41.382490 kubelet[2415]: E0702 00:24:41.378744 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:42.380000 kubelet[2415]: E0702 00:24:42.379943 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:42.892940 kubelet[2415]: I0702 00:24:42.892895 2415 topology_manager.go:215] "Topology Admit Handler" podUID="9fce0968-6c95-40bd-b16e-7546a0065af1" podNamespace="calico-apiserver" podName="calico-apiserver-67c75b55c8-zb8t5" Jul 2 00:24:42.903737 systemd[1]: Created slice kubepods-besteffort-pod9fce0968_6c95_40bd_b16e_7546a0065af1.slice - libcontainer container kubepods-besteffort-pod9fce0968_6c95_40bd_b16e_7546a0065af1.slice. Jul 2 00:24:43.060111 kubelet[2415]: I0702 00:24:43.059500 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9fce0968-6c95-40bd-b16e-7546a0065af1-calico-apiserver-certs\") pod \"calico-apiserver-67c75b55c8-zb8t5\" (UID: \"9fce0968-6c95-40bd-b16e-7546a0065af1\") " pod="calico-apiserver/calico-apiserver-67c75b55c8-zb8t5" Jul 2 00:24:43.060111 kubelet[2415]: I0702 00:24:43.059647 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gjj\" (UniqueName: \"kubernetes.io/projected/9fce0968-6c95-40bd-b16e-7546a0065af1-kube-api-access-r8gjj\") pod \"calico-apiserver-67c75b55c8-zb8t5\" (UID: \"9fce0968-6c95-40bd-b16e-7546a0065af1\") " pod="calico-apiserver/calico-apiserver-67c75b55c8-zb8t5" Jul 2 00:24:43.160813 kubelet[2415]: E0702 00:24:43.160696 2415 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:24:43.163497 kubelet[2415]: E0702 00:24:43.163250 2415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9fce0968-6c95-40bd-b16e-7546a0065af1-calico-apiserver-certs podName:9fce0968-6c95-40bd-b16e-7546a0065af1 nodeName:}" failed. No retries permitted until 2024-07-02 00:24:43.660758217 +0000 UTC m=+53.399844978 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/9fce0968-6c95-40bd-b16e-7546a0065af1-calico-apiserver-certs") pod "calico-apiserver-67c75b55c8-zb8t5" (UID: "9fce0968-6c95-40bd-b16e-7546a0065af1") : secret "calico-apiserver-certs" not found Jul 2 00:24:43.273740 ntpd[1940]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:24:43.274764 ntpd[1940]: 2 Jul 00:24:43 ntpd[1940]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:24:43.382480 kubelet[2415]: E0702 00:24:43.380601 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:43.721213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2799442509.mount: Deactivated successfully. Jul 2 00:24:43.858167 containerd[1982]: time="2024-07-02T00:24:43.858114240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c75b55c8-zb8t5,Uid:9fce0968-6c95-40bd-b16e-7546a0065af1,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:24:44.176381 systemd-networkd[1812]: cali678a325b7e5: Link UP Jul 2 00:24:44.179285 systemd-networkd[1812]: cali678a325b7e5: Gained carrier Jul 2 00:24:44.179431 (udev-worker)[3983]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:43.995 [INFO][3964] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0 calico-apiserver-67c75b55c8- calico-apiserver 9fce0968-6c95-40bd-b16e-7546a0065af1 1070 0 2024-07-02 00:24:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c75b55c8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.31.19.189 calico-apiserver-67c75b55c8-zb8t5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali678a325b7e5 [] []}} ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:43.995 [INFO][3964] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.063 [INFO][3975] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" HandleID="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Workload="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.115 [INFO][3975] ipam_plugin.go 264: Auto assigning IP ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" HandleID="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Workload="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035e710), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.31.19.189", "pod":"calico-apiserver-67c75b55c8-zb8t5", "timestamp":"2024-07-02 00:24:44.06373369 +0000 UTC"}, Hostname:"172.31.19.189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.115 [INFO][3975] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.115 [INFO][3975] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.115 [INFO][3975] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.189' Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.118 [INFO][3975] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.128 [INFO][3975] ipam.go 372: Looking up existing affinities for host host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.138 [INFO][3975] ipam.go 489: Trying affinity for 192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.142 [INFO][3975] ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.149 [INFO][3975] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.150 [INFO][3975] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.153 [INFO][3975] ipam.go 1685: Creating new handle: k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020 Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.160 [INFO][3975] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.167 [INFO][3975] ipam.go 1216: Successfully claimed IPs: [192.168.32.196/26] block=192.168.32.192/26 handle="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.167 [INFO][3975] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.196/26] handle="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" host="172.31.19.189" Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.167 [INFO][3975] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:24:44.203798 containerd[1982]: 2024-07-02 00:24:44.167 [INFO][3975] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.196/26] IPv6=[] ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" HandleID="k8s-pod-network.8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Workload="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.170 [INFO][3964] k8s.go 386: Populated endpoint ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0", GenerateName:"calico-apiserver-67c75b55c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fce0968-6c95-40bd-b16e-7546a0065af1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c75b55c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"", Pod:"calico-apiserver-67c75b55c8-zb8t5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali678a325b7e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.170 [INFO][3964] k8s.go 387: Calico CNI using IPs: [192.168.32.196/32] ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.170 [INFO][3964] dataplane_linux.go 68: Setting the host side veth name to cali678a325b7e5 ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.181 [INFO][3964] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.183 [INFO][3964] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" 
Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0", GenerateName:"calico-apiserver-67c75b55c8-", Namespace:"calico-apiserver", SelfLink:"", UID:"9fce0968-6c95-40bd-b16e-7546a0065af1", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c75b55c8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020", Pod:"calico-apiserver-67c75b55c8-zb8t5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali678a325b7e5", MAC:"5e:b2:70:3d:8b:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:44.204846 containerd[1982]: 2024-07-02 00:24:44.199 [INFO][3964] k8s.go 500: Wrote updated endpoint to datastore ContainerID="8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020" Namespace="calico-apiserver" Pod="calico-apiserver-67c75b55c8-zb8t5" WorkloadEndpoint="172.31.19.189-k8s-calico--apiserver--67c75b55c8--zb8t5-eth0" Jul 2 00:24:44.263432 containerd[1982]: time="2024-07-02T00:24:44.262817452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:24:44.263432 containerd[1982]: time="2024-07-02T00:24:44.262936186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:44.263432 containerd[1982]: time="2024-07-02T00:24:44.262971696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:24:44.263432 containerd[1982]: time="2024-07-02T00:24:44.262989516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:24:44.301833 systemd[1]: run-containerd-runc-k8s.io-8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020-runc.RZoWxR.mount: Deactivated successfully. Jul 2 00:24:44.314683 systemd[1]: Started cri-containerd-8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020.scope - libcontainer container 8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020. 
Jul 2 00:24:44.380924 kubelet[2415]: E0702 00:24:44.380891 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:44.428380 containerd[1982]: time="2024-07-02T00:24:44.428194682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c75b55c8-zb8t5,Uid:9fce0968-6c95-40bd-b16e-7546a0065af1,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020\"" Jul 2 00:24:45.381841 kubelet[2415]: E0702 00:24:45.381769 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:45.871015 systemd-networkd[1812]: cali678a325b7e5: Gained IPv6LL Jul 2 00:24:46.382893 kubelet[2415]: E0702 00:24:46.382485 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:46.729041 containerd[1982]: time="2024-07-02T00:24:46.727868323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:46.730800 containerd[1982]: time="2024-07-02T00:24:46.729986872Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jul 2 00:24:46.747012 containerd[1982]: time="2024-07-02T00:24:46.746536206Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:46.750574 containerd[1982]: time="2024-07-02T00:24:46.750521735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:46.751726 containerd[1982]: time="2024-07-02T00:24:46.751682807Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.566357148s" Jul 2 00:24:46.751874 containerd[1982]: time="2024-07-02T00:24:46.751851902Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 00:24:46.753608 containerd[1982]: time="2024-07-02T00:24:46.753577233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:24:46.754692 containerd[1982]: time="2024-07-02T00:24:46.754655648Z" level=info msg="CreateContainer within sandbox \"be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 00:24:46.796748 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3904978853.mount: Deactivated successfully. 
Jul 2 00:24:46.801161 containerd[1982]: time="2024-07-02T00:24:46.801108382Z" level=info msg="CreateContainer within sandbox \"be913cdaed232462c611e2e8d5a525c5281e42e3568b948c0536c3b235651dde\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"efe23a745f43e0e52fa8c2bce1a18f4c496408fb6aa9e1696b88a970cc7496d3\"" Jul 2 00:24:46.802750 containerd[1982]: time="2024-07-02T00:24:46.802233860Z" level=info msg="StartContainer for \"efe23a745f43e0e52fa8c2bce1a18f4c496408fb6aa9e1696b88a970cc7496d3\"" Jul 2 00:24:46.847152 systemd[1]: Started cri-containerd-efe23a745f43e0e52fa8c2bce1a18f4c496408fb6aa9e1696b88a970cc7496d3.scope - libcontainer container efe23a745f43e0e52fa8c2bce1a18f4c496408fb6aa9e1696b88a970cc7496d3. Jul 2 00:24:46.938722 containerd[1982]: time="2024-07-02T00:24:46.938584996Z" level=info msg="StartContainer for \"efe23a745f43e0e52fa8c2bce1a18f4c496408fb6aa9e1696b88a970cc7496d3\" returns successfully" Jul 2 00:24:47.383612 kubelet[2415]: E0702 00:24:47.383561 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:47.926284 kubelet[2415]: I0702 00:24:47.926242 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.358270901 podCreationTimestamp="2024-07-02 00:24:39 +0000 UTC" firstStartedPulling="2024-07-02 00:24:40.184722744 +0000 UTC m=+49.923809487" lastFinishedPulling="2024-07-02 00:24:46.752602188 +0000 UTC m=+56.491688930" observedRunningTime="2024-07-02 00:24:47.925055874 +0000 UTC m=+57.664142638" watchObservedRunningTime="2024-07-02 00:24:47.926150344 +0000 UTC m=+57.665237108" Jul 2 00:24:48.273889 ntpd[1940]: Listen normally on 12 cali678a325b7e5 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:24:48.278152 ntpd[1940]: 2 Jul 00:24:48 ntpd[1940]: Listen normally on 12 cali678a325b7e5 [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:24:48.384749 kubelet[2415]: E0702 00:24:48.384625 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:49.385791 kubelet[2415]: E0702 00:24:49.385738 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:50.386474 kubelet[2415]: E0702 00:24:50.386119 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:51.327730 kubelet[2415]: E0702 00:24:51.327688 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:51.369911 containerd[1982]: time="2024-07-02T00:24:51.369684201Z" level=info msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" Jul 2 00:24:51.394751 kubelet[2415]: E0702 00:24:51.394012 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.479 [WARNING][4143] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"026c6050-7b15-41b3-a391-397e0a5ded8a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364", Pod:"nginx-deployment-6d5f899847-275pf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5d79c5c474c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.479 [INFO][4143] k8s.go 608: Cleaning up netns ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.479 [INFO][4143] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" iface="eth0" netns="" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.479 [INFO][4143] k8s.go 615: Releasing IP address(es) ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.480 [INFO][4143] utils.go 188: Calico CNI releasing IP address ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.530 [INFO][4149] ipam_plugin.go 411: Releasing address using handleID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.530 [INFO][4149] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.531 [INFO][4149] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.543 [WARNING][4149] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.543 [INFO][4149] ipam_plugin.go 439: Releasing address using workloadID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.548 [INFO][4149] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:51.557535 containerd[1982]: 2024-07-02 00:24:51.552 [INFO][4143] k8s.go 621: Teardown processing complete. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.557535 containerd[1982]: time="2024-07-02T00:24:51.557425880Z" level=info msg="TearDown network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" successfully" Jul 2 00:24:51.557535 containerd[1982]: time="2024-07-02T00:24:51.557510842Z" level=info msg="StopPodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" returns successfully" Jul 2 00:24:51.674731 containerd[1982]: time="2024-07-02T00:24:51.674462874Z" level=info msg="RemovePodSandbox for \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" Jul 2 00:24:51.674731 containerd[1982]: time="2024-07-02T00:24:51.674518584Z" level=info msg="Forcibly stopping sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\"" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.794 [WARNING][4170] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"026c6050-7b15-41b3-a391-397e0a5ded8a", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"facf1a6a65841afcb6c3993369d12fde388e7b2c0780258d840fbfe5d2246364", Pod:"nginx-deployment-6d5f899847-275pf", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5d79c5c474c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.794 [INFO][4170] k8s.go 608: Cleaning up netns ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.794 [INFO][4170] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" iface="eth0" netns="" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.794 [INFO][4170] k8s.go 615: Releasing IP address(es) ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.794 [INFO][4170] utils.go 188: Calico CNI releasing IP address ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.872 [INFO][4177] ipam_plugin.go 411: Releasing address using handleID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.873 [INFO][4177] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.873 [INFO][4177] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.882 [WARNING][4177] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.883 [INFO][4177] ipam_plugin.go 439: Releasing address using workloadID ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" HandleID="k8s-pod-network.75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Workload="172.31.19.189-k8s-nginx--deployment--6d5f899847--275pf-eth0" Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.886 [INFO][4177] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:51.895826 containerd[1982]: 2024-07-02 00:24:51.889 [INFO][4170] k8s.go 621: Teardown processing complete. ContainerID="75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50" Jul 2 00:24:51.897093 containerd[1982]: time="2024-07-02T00:24:51.896016017Z" level=info msg="TearDown network for sandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" successfully" Jul 2 00:24:51.944779 containerd[1982]: time="2024-07-02T00:24:51.944636893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:24:51.944779 containerd[1982]: time="2024-07-02T00:24:51.944731279Z" level=info msg="RemovePodSandbox \"75c410e51a418cd32ae823906a98fafb24133541db3cab440f7cc680ed760b50\" returns successfully" Jul 2 00:24:51.955400 containerd[1982]: time="2024-07-02T00:24:51.945558925Z" level=info msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.058 [WARNING][4197] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-csi--node--driver--cfsz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21a555aa-f137-4779-be5f-76c30fd4078f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff", Pod:"csi-node-driver-cfsz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliae7826e0a50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.058 [INFO][4197] k8s.go 608: Cleaning up netns ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.058 [INFO][4197] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" iface="eth0" netns="" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.058 [INFO][4197] k8s.go 615: Releasing IP address(es) ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.058 [INFO][4197] utils.go 188: Calico CNI releasing IP address ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.140 [INFO][4204] ipam_plugin.go 411: Releasing address using handleID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.141 [INFO][4204] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.141 [INFO][4204] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.157 [WARNING][4204] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.157 [INFO][4204] ipam_plugin.go 439: Releasing address using workloadID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.159 [INFO][4204] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.163556 containerd[1982]: 2024-07-02 00:24:52.161 [INFO][4197] k8s.go 621: Teardown processing complete. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.164408 containerd[1982]: time="2024-07-02T00:24:52.164353490Z" level=info msg="TearDown network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" successfully" Jul 2 00:24:52.164408 containerd[1982]: time="2024-07-02T00:24:52.164392478Z" level=info msg="StopPodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" returns successfully" Jul 2 00:24:52.172436 containerd[1982]: time="2024-07-02T00:24:52.171741614Z" level=info msg="RemovePodSandbox for \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" Jul 2 00:24:52.172436 containerd[1982]: time="2024-07-02T00:24:52.171785582Z" level=info msg="Forcibly stopping sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\"" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.307 [WARNING][4223] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-csi--node--driver--cfsz4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"21a555aa-f137-4779-be5f-76c30fd4078f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"c8143065e5a6df23ecc4d6581f5d13b4d2e1767fd9b69eb9971a6ef36eb7fcff", Pod:"csi-node-driver-cfsz4", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"caliae7826e0a50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.307 [INFO][4223] k8s.go 608: Cleaning up netns ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.307 [INFO][4223] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" iface="eth0" netns="" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.307 [INFO][4223] k8s.go 615: Releasing IP address(es) ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.307 [INFO][4223] utils.go 188: Calico CNI releasing IP address ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.368 [INFO][4230] ipam_plugin.go 411: Releasing address using handleID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.369 [INFO][4230] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.369 [INFO][4230] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.383 [WARNING][4230] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.383 [INFO][4230] ipam_plugin.go 439: Releasing address using workloadID ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" HandleID="k8s-pod-network.62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Workload="172.31.19.189-k8s-csi--node--driver--cfsz4-eth0" Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.387 [INFO][4230] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:24:52.390895 containerd[1982]: 2024-07-02 00:24:52.389 [INFO][4223] k8s.go 621: Teardown processing complete. ContainerID="62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8" Jul 2 00:24:52.395409 containerd[1982]: time="2024-07-02T00:24:52.391199673Z" level=info msg="TearDown network for sandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" successfully" Jul 2 00:24:52.395498 kubelet[2415]: E0702 00:24:52.394724 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:52.401316 containerd[1982]: time="2024-07-02T00:24:52.400674441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:24:52.401316 containerd[1982]: time="2024-07-02T00:24:52.400746497Z" level=info msg="RemovePodSandbox \"62fe51690f18e38d89de3be6c57b04a67e4a7cf2a01359e46631129cd04b9cc8\" returns successfully" Jul 2 00:24:52.444962 containerd[1982]: time="2024-07-02T00:24:52.444906702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:52.446919 containerd[1982]: time="2024-07-02T00:24:52.446646658Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 00:24:52.449689 containerd[1982]: time="2024-07-02T00:24:52.449345068Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:52.453941 containerd[1982]: time="2024-07-02T00:24:52.453868365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:24:52.454904 containerd[1982]: time="2024-07-02T00:24:52.454861054Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 5.701243722s" Jul 2 00:24:52.455018 containerd[1982]: time="2024-07-02T00:24:52.454906362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:24:52.457133 containerd[1982]: 
time="2024-07-02T00:24:52.457090774Z" level=info msg="CreateContainer within sandbox \"8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:24:52.478165 containerd[1982]: time="2024-07-02T00:24:52.478123292Z" level=info msg="CreateContainer within sandbox \"8d627671d51c3644bad6177f7b3a9435254af67420f651cc39e82e9b2b651020\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"02dece94aa5f6aaf10ae0b161961b63a09d48ffa902762ca83e83351d32539f3\"" Jul 2 00:24:52.479147 containerd[1982]: time="2024-07-02T00:24:52.479114457Z" level=info msg="StartContainer for \"02dece94aa5f6aaf10ae0b161961b63a09d48ffa902762ca83e83351d32539f3\"" Jul 2 00:24:52.533040 systemd[1]: Started cri-containerd-02dece94aa5f6aaf10ae0b161961b63a09d48ffa902762ca83e83351d32539f3.scope - libcontainer container 02dece94aa5f6aaf10ae0b161961b63a09d48ffa902762ca83e83351d32539f3. Jul 2 00:24:52.605504 containerd[1982]: time="2024-07-02T00:24:52.605428272Z" level=info msg="StartContainer for \"02dece94aa5f6aaf10ae0b161961b63a09d48ffa902762ca83e83351d32539f3\" returns successfully" Jul 2 00:24:52.938547 kubelet[2415]: I0702 00:24:52.938355 2415 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c75b55c8-zb8t5" podStartSLOduration=2.918639381 podCreationTimestamp="2024-07-02 00:24:42 +0000 UTC" firstStartedPulling="2024-07-02 00:24:44.435979528 +0000 UTC m=+54.175066282" lastFinishedPulling="2024-07-02 00:24:52.455428181 +0000 UTC m=+62.194514936" observedRunningTime="2024-07-02 00:24:52.936531468 +0000 UTC m=+62.675618232" watchObservedRunningTime="2024-07-02 00:24:52.938088035 +0000 UTC m=+62.677174800" Jul 2 00:24:53.394943 kubelet[2415]: E0702 00:24:53.394898 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:53.930424 kubelet[2415]: I0702 00:24:53.930369 2415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:24:54.395360 kubelet[2415]: E0702 00:24:54.395325 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:55.396417 kubelet[2415]: E0702 00:24:55.396364 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:56.397019 kubelet[2415]: E0702 00:24:56.396961 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:57.397396 kubelet[2415]: E0702 00:24:57.397336 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:58.398300 kubelet[2415]: E0702 00:24:58.398241 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:59.399398 kubelet[2415]: E0702 00:24:59.399345 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:00.399891 kubelet[2415]: E0702 00:25:00.399839 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:01.400928 kubelet[2415]: E0702 00:25:01.400810 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:02.401309 
kubelet[2415]: E0702 00:25:02.401255 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:03.211646 kubelet[2415]: I0702 00:25:03.211497 2415 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:25:03.402289 kubelet[2415]: E0702 00:25:03.402094 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:04.402663 kubelet[2415]: E0702 00:25:04.402620 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:05.403868 kubelet[2415]: E0702 00:25:05.403820 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:06.404787 kubelet[2415]: E0702 00:25:06.404687 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:07.406023 kubelet[2415]: E0702 00:25:07.405904 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:08.406863 kubelet[2415]: E0702 00:25:08.406807 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:09.407643 kubelet[2415]: E0702 00:25:09.407590 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:10.408889 kubelet[2415]: E0702 00:25:10.408222 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:11.186069 systemd[1]: run-containerd-runc-k8s.io-3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9-runc.VDrGi4.mount: Deactivated successfully. Jul 2 00:25:11.326021 kubelet[2415]: E0702 00:25:11.325968 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:11.335755 kubelet[2415]: I0702 00:25:11.335718 2415 topology_manager.go:215] "Topology Admit Handler" podUID="5b7776f8-0b1a-4296-a36a-5f6e03ed21e2" podNamespace="default" podName="test-pod-1" Jul 2 00:25:11.346092 systemd[1]: Created slice kubepods-besteffort-pod5b7776f8_0b1a_4296_a36a_5f6e03ed21e2.slice - libcontainer container kubepods-besteffort-pod5b7776f8_0b1a_4296_a36a_5f6e03ed21e2.slice. 
Jul 2 00:25:11.410027 kubelet[2415]: E0702 00:25:11.409978 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:11.466373 kubelet[2415]: I0702 00:25:11.466234 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hnhg\" (UniqueName: \"kubernetes.io/projected/5b7776f8-0b1a-4296-a36a-5f6e03ed21e2-kube-api-access-5hnhg\") pod \"test-pod-1\" (UID: \"5b7776f8-0b1a-4296-a36a-5f6e03ed21e2\") " pod="default/test-pod-1" Jul 2 00:25:11.466373 kubelet[2415]: I0702 00:25:11.466302 2415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-69cd940b-58ef-4612-8c66-b53df7a58ef0\" (UniqueName: \"kubernetes.io/nfs/5b7776f8-0b1a-4296-a36a-5f6e03ed21e2-pvc-69cd940b-58ef-4612-8c66-b53df7a58ef0\") pod \"test-pod-1\" (UID: \"5b7776f8-0b1a-4296-a36a-5f6e03ed21e2\") " pod="default/test-pod-1" Jul 2 00:25:11.664571 kernel: FS-Cache: Loaded Jul 2 00:25:11.820909 kernel: RPC: Registered named UNIX socket transport module. Jul 2 00:25:11.821051 kernel: RPC: Registered udp transport module. Jul 2 00:25:11.821081 kernel: RPC: Registered tcp transport module. Jul 2 00:25:11.821109 kernel: RPC: Registered tcp-with-tls transport module. Jul 2 00:25:11.821136 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 00:25:12.344690 kernel: NFS: Registering the id_resolver key type Jul 2 00:25:12.345025 kernel: Key type id_resolver registered Jul 2 00:25:12.345085 kernel: Key type id_legacy registered Jul 2 00:25:12.396233 nfsidmap[4359]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:25:12.408866 nfsidmap[4360]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:25:12.410978 kubelet[2415]: E0702 00:25:12.410822 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:12.560411 containerd[1982]: time="2024-07-02T00:25:12.560366966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5b7776f8-0b1a-4296-a36a-5f6e03ed21e2,Namespace:default,Attempt:0,}" Jul 2 00:25:12.724924 (udev-worker)[4356]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:25:12.726617 systemd-networkd[1812]: cali5ec59c6bf6e: Link UP Jul 2 00:25:12.726904 systemd-networkd[1812]: cali5ec59c6bf6e: Gained carrier Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.632 [INFO][4361] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.19.189-k8s-test--pod--1-eth0 default 5b7776f8-0b1a-4296-a36a-5f6e03ed21e2 1203 0 2024-07-02 00:24:40 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.19.189 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.632 [INFO][4361] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.672 [INFO][4372] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" HandleID="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Workload="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.683 [INFO][4372] ipam_plugin.go 264: Auto assigning IP ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" HandleID="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Workload="172.31.19.189-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001ff840), Attrs:map[string]string{"namespace":"default", "node":"172.31.19.189", "pod":"test-pod-1", "timestamp":"2024-07-02 00:25:12.672005676 +0000 UTC"}, Hostname:"172.31.19.189", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.684 [INFO][4372] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.684 [INFO][4372] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.684 [INFO][4372] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.19.189' Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.686 [INFO][4372] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.692 [INFO][4372] ipam.go 372: Looking up existing affinities for host host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.698 [INFO][4372] ipam.go 489: Trying affinity for 192.168.32.192/26 host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.701 [INFO][4372] ipam.go 155: Attempting to load block cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.705 [INFO][4372] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.192/26 host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.705 [INFO][4372] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.192/26 handle="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.707 [INFO][4372] ipam.go 1685: Creating new handle: k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476 Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.712 [INFO][4372] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.192/26 handle="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.720 [INFO][4372] ipam.go 1216: Successfully claimed IPs: [192.168.32.197/26] block=192.168.32.192/26 handle="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.720 [INFO][4372] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.197/26] handle="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" host="172.31.19.189" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.720 [INFO][4372] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.720 [INFO][4372] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.32.197/26] IPv6=[] ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" HandleID="k8s-pod-network.b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Workload="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.722 [INFO][4361] k8s.go 386: Populated endpoint ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5b7776f8-0b1a-4296-a36a-5f6e03ed21e2", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:12.746489 containerd[1982]: 2024-07-02 00:25:12.722 [INFO][4361] k8s.go 387: Calico CNI using IPs: [192.168.32.197/32] ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.747714 containerd[1982]: 2024-07-02 00:25:12.722 [INFO][4361] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.747714 containerd[1982]: 2024-07-02 00:25:12.726 [INFO][4361] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.747714 containerd[1982]: 2024-07-02 00:25:12.727 [INFO][4361] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.19.189-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"5b7776f8-0b1a-4296-a36a-5f6e03ed21e2", ResourceVersion:"1203", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.19.189", ContainerID:"b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4a:bd:26:36:e9:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:25:12.747714 containerd[1982]: 2024-07-02 00:25:12.740 [INFO][4361] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.19.189-k8s-test--pod--1-eth0" Jul 2 00:25:12.782501 containerd[1982]: time="2024-07-02T00:25:12.780761298Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:25:12.782501 containerd[1982]: time="2024-07-02T00:25:12.780908150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:12.782501 containerd[1982]: time="2024-07-02T00:25:12.780951330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:25:12.782501 containerd[1982]: time="2024-07-02T00:25:12.780970794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:25:12.819760 systemd[1]: Started cri-containerd-b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476.scope - libcontainer container b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476. 
Jul 2 00:25:12.896849 containerd[1982]: time="2024-07-02T00:25:12.896803998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:5b7776f8-0b1a-4296-a36a-5f6e03ed21e2,Namespace:default,Attempt:0,} returns sandbox id \"b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476\"" Jul 2 00:25:12.919010 containerd[1982]: time="2024-07-02T00:25:12.918967326Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:25:13.252887 containerd[1982]: time="2024-07-02T00:25:13.252829405Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:25:13.256265 containerd[1982]: time="2024-07-02T00:25:13.255366266Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 2 00:25:13.263385 containerd[1982]: time="2024-07-02T00:25:13.263330127Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"70999878\" in 344.302538ms" Jul 2 00:25:13.263385 containerd[1982]: time="2024-07-02T00:25:13.263384395Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 00:25:13.268842 containerd[1982]: time="2024-07-02T00:25:13.268726333Z" level=info msg="CreateContainer within sandbox \"b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 00:25:13.320268 containerd[1982]: time="2024-07-02T00:25:13.320209631Z" level=info msg="CreateContainer within sandbox \"b95761cdd1abf26a3d25244064c398a02ee21f887a1a6693adf8c4da89b8c476\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"84d0454f9c39c4aeb9e71d55ba7e83ab6c9c9cc06c99a102f5d65bdadee0deca\"" Jul 2 00:25:13.321111 containerd[1982]: time="2024-07-02T00:25:13.320941628Z" level=info msg="StartContainer for \"84d0454f9c39c4aeb9e71d55ba7e83ab6c9c9cc06c99a102f5d65bdadee0deca\"" Jul 2 00:25:13.357122 systemd[1]: Started cri-containerd-84d0454f9c39c4aeb9e71d55ba7e83ab6c9c9cc06c99a102f5d65bdadee0deca.scope - libcontainer container 84d0454f9c39c4aeb9e71d55ba7e83ab6c9c9cc06c99a102f5d65bdadee0deca. 
Jul 2 00:25:13.411130 kubelet[2415]: E0702 00:25:13.411079 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:13.414838 containerd[1982]: time="2024-07-02T00:25:13.414789295Z" level=info msg="StartContainer for \"84d0454f9c39c4aeb9e71d55ba7e83ab6c9c9cc06c99a102f5d65bdadee0deca\" returns successfully" Jul 2 00:25:14.412119 kubelet[2415]: E0702 00:25:14.412065 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:14.670761 systemd-networkd[1812]: cali5ec59c6bf6e: Gained IPv6LL Jul 2 00:25:15.412359 kubelet[2415]: E0702 00:25:15.412307 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:16.413314 kubelet[2415]: E0702 00:25:16.413252 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:17.273734 ntpd[1940]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:25:17.274165 ntpd[1940]: 2 Jul 00:25:17 ntpd[1940]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:25:17.414512 kubelet[2415]: E0702 00:25:17.414445 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:18.415349 kubelet[2415]: E0702 00:25:18.415293 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:19.416468 kubelet[2415]: E0702 00:25:19.416406 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:20.417264 kubelet[2415]: E0702 00:25:20.417220 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:21.417802 kubelet[2415]: E0702 00:25:21.417740 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:22.418690 kubelet[2415]: E0702 00:25:22.418634 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:23.419422 kubelet[2415]: E0702 00:25:23.419369 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:24.419986 kubelet[2415]: E0702 00:25:24.419938 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:25.420361 kubelet[2415]: E0702 00:25:25.420299 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:26.420918 kubelet[2415]: E0702 00:25:26.420863 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:27.421498 kubelet[2415]: E0702 00:25:27.421429 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:28.421984 kubelet[2415]: E0702 00:25:28.421925 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:29.422344 kubelet[2415]: E0702 00:25:29.422288 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 00:25:30.423498 kubelet[2415]: E0702 00:25:30.423423 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:31.326957 kubelet[2415]: E0702 00:25:31.326905 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:31.424610 kubelet[2415]: E0702 00:25:31.424556 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:32.425068 kubelet[2415]: E0702 00:25:32.425010 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:33.425918 kubelet[2415]: E0702 00:25:33.425864 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:34.212335 kubelet[2415]: E0702 00:25:34.212285 2415 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:34.426070 kubelet[2415]: E0702 00:25:34.426013 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:35.108626 kubelet[2415]: E0702 00:25:35.108524 2415 kubelet_node_status.go:540] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T00:25:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T00:25:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T00:25:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2024-07-02T00:25:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\\\",\\\"ghcr.io/flatcar/calico/node:v3.28.0\\\"],\\\"sizeBytes\\\":115238612},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\\\",\\\"ghcr.io/flatcar/calico/cni:v3.28.0\\\"],\\\"sizeBytes\\\":94535610},{\\\"names\\\":[\\\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\\\",\\\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\\\"],\\\"sizeBytes\\\":91036984},{\\\"names\\\":[\\\"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\\\",\\\"ghcr.io/flatcar/nginx:latest\\\"],\\\"sizeBytes\\\":70999878},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.28.0\\\"],\\\"sizeBytes\\\":41869036},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\\\",\\\"registry.k8s.io/kube-proxy:v1.28.11\\\"],\\\"sizeBytes\\\":28117438},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e5
34cc\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\\\"],\\\"sizeBytes\\\":11595367},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\\\",\\\"ghcr.io/flatcar/calico/csi:v3.28.0\\\"],\\\"sizeBytes\\\":9088822},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\\\"],\\\"sizeBytes\\\":6588288},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\\\",\\\"registry.k8s.io/pause:3.8\\\"],\\\"sizeBytes\\\":311286}]}}\" for node \"172.31.19.189\": Patch \"https://172.31.22.201:6443/api/v1/nodes/172.31.19.189/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:35.426667 kubelet[2415]: E0702 00:25:35.426605 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:36.427040 kubelet[2415]: E0702 00:25:36.426983 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:37.427977 kubelet[2415]: E0702 00:25:37.427918 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:38.428820 kubelet[2415]: E0702 00:25:38.428769 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:39.429447 kubelet[2415]: E0702 00:25:39.429392 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:40.430297 kubelet[2415]: E0702 00:25:40.430240 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:41.177395 systemd[1]: run-containerd-runc-k8s.io-3b9e0fd126b55fde103b64486753bdaf57107bc1407421a74a6b4d9858b139c9-runc.xDpm52.mount: Deactivated successfully. 
Jul 2 00:25:41.431856 kubelet[2415]: E0702 00:25:41.431332 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:42.432868 kubelet[2415]: E0702 00:25:42.432801 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:43.434056 kubelet[2415]: E0702 00:25:43.433990 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:44.213449 kubelet[2415]: E0702 00:25:44.213360 2415 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:44.434248 kubelet[2415]: E0702 00:25:44.434195 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:45.109288 kubelet[2415]: E0702 00:25:45.109240 2415 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.19.189\": Get \"https://172.31.22.201:6443/api/v1/nodes/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:45.435427 kubelet[2415]: E0702 00:25:45.435370 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:46.436000 kubelet[2415]: E0702 00:25:46.435948 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:47.436801 kubelet[2415]: E0702 00:25:47.436744 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:48.437327 kubelet[2415]: E0702 00:25:48.437267 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:49.437780 kubelet[2415]: E0702 00:25:49.437727 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:50.438697 kubelet[2415]: E0702 00:25:50.438644 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:51.325997 kubelet[2415]: E0702 00:25:51.325947 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:51.439729 kubelet[2415]: E0702 00:25:51.439658 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:52.440594 kubelet[2415]: E0702 00:25:52.440541 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:53.440800 kubelet[2415]: E0702 00:25:53.440742 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:54.214406 kubelet[2415]: E0702 00:25:54.214356 2415 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:54.440993 kubelet[2415]: E0702 00:25:54.440932 2415 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:55.109823 kubelet[2415]: E0702 00:25:55.109785 2415 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.19.189\": Get \"https://172.31.22.201:6443/api/v1/nodes/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:25:55.442128 kubelet[2415]: E0702 00:25:55.442089 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:56.442682 kubelet[2415]: E0702 00:25:56.442616 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:57.443875 kubelet[2415]: E0702 00:25:57.443818 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:58.444089 kubelet[2415]: E0702 00:25:58.444030 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:25:59.445422 kubelet[2415]: E0702 00:25:59.445371 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:00.446284 kubelet[2415]: E0702 00:26:00.446223 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:01.446518 kubelet[2415]: E0702 00:26:01.446401 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:02.447494 kubelet[2415]: E0702 00:26:02.447370 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:03.448628 kubelet[2415]: E0702 00:26:03.448573 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:04.215039 kubelet[2415]: E0702 00:26:04.214974 2415 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:26:04.303174 kubelet[2415]: E0702 00:26:04.302578 2415 controller.go:193] "Failed to update lease" err="Put \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": read tcp 172.31.19.189:56746->172.31.22.201:6443: read: connection reset by peer" Jul 2 00:26:04.303174 kubelet[2415]: I0702 00:26:04.302619 2415 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 2 00:26:04.449580 kubelet[2415]: E0702 00:26:04.449533 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:26:05.318221 kubelet[2415]: E0702 00:26:05.318158 2415 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.19.189\": Get \"https://172.31.22.201:6443/api/v1/nodes/172.31.19.189?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 172.31.19.189:56746->172.31.22.201:6443: read: connection reset by peer" Jul 2 00:26:05.319123 kubelet[2415]: E0702 00:26:05.319090 2415 kubelet_node_status.go:540] "Error updating 
Jul 2 00:26:05.319123 kubelet[2415]: E0702 00:26:05.319090 2415 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.19.189\": Get \"https://172.31.22.201:6443/api/v1/nodes/172.31.19.189?timeout=10s\": dial tcp 172.31.22.201:6443: connect: connection refused"
Jul 2 00:26:05.319123 kubelet[2415]: E0702 00:26:05.319125 2415 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
Jul 2 00:26:05.331644 kubelet[2415]: E0702 00:26:05.331602 2415 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": dial tcp 172.31.22.201:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.19.189:52154->172.31.22.201:6443: read: connection reset by peer" interval="200ms"
Jul 2 00:26:05.450336 kubelet[2415]: E0702 00:26:05.449837 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:06.450259 kubelet[2415]: E0702 00:26:06.450203 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:07.451027 kubelet[2415]: E0702 00:26:07.450974 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:08.452182 kubelet[2415]: E0702 00:26:08.452132 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:09.452839 kubelet[2415]: E0702 00:26:09.452781 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:10.453830 kubelet[2415]: E0702 00:26:10.453773 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:11.326803 kubelet[2415]: E0702 00:26:11.326687 2415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:11.454473 kubelet[2415]: E0702 00:26:11.454415 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:12.454762 kubelet[2415]: E0702 00:26:12.454707 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:13.455034 kubelet[2415]: E0702 00:26:13.454981 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:14.456142 kubelet[2415]: E0702 00:26:14.456085 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:15.456415 kubelet[2415]: E0702 00:26:15.456238 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:15.533935 kubelet[2415]: E0702 00:26:15.533884 2415 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.22.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.19.189?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Jul 2 00:26:16.456738 kubelet[2415]: E0702 00:26:16.456685 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
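The lease messages in this stretch show the kubelet's node-lease renewal giving up after repeated failed updates ("failed 5 attempts to update lease"), falling back to ensuring the lease exists, and retrying that with a growing interval (200ms, then 400ms, per the interval fields above). A rough, hypothetical sketch of that pattern follows; renewLease and ensureLease are stand-ins for the real HTTP calls to https://172.31.22.201:6443, the attempts here run back-to-back rather than on the kubelet's real schedule, and whether the interval keeps doubling past 400ms is an assumption.

package main

import (
	"errors"
	"fmt"
	"time"
)

const maxUpdateRetries = 5 // matches "failed 5 attempts to update lease" above

// renewLease and ensureLease are placeholders for the PUT/GET requests the
// kubelet makes against the API server; they simply fail here, as they do in
// the log while 172.31.22.201:6443 is unreachable.
func renewLease() error  { return errors.New("connect: connection refused") }
func ensureLease() error { return errors.New("connect: connection refused") }

func syncLease() {
	// First, try to renew the lease the node already holds.
	for i := 0; i < maxUpdateRetries; i++ {
		if err := renewLease(); err != nil {
			fmt.Printf("Failed to update lease: %v\n", err)
			continue
		}
		return
	}
	// After the update attempts are exhausted, fall back to ensuring the
	// lease exists, backing off between attempts.
	fmt.Println("failed to update lease using latest lease, fallback to ensure lease")
	interval := 200 * time.Millisecond
	for attempt := 0; attempt < 3; attempt++ {
		if err := ensureLease(); err != nil {
			fmt.Printf("Failed to ensure lease exists, will retry (interval=%s): %v\n", interval, err)
			time.Sleep(interval)
			interval *= 2
			continue
		}
		return
	}
}

func main() {
	syncLease()
}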
Jul 2 00:26:17.457139 kubelet[2415]: E0702 00:26:17.457087 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:18.457609 kubelet[2415]: E0702 00:26:18.457555 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:19.458284 kubelet[2415]: E0702 00:26:19.458239 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:20.458758 kubelet[2415]: E0702 00:26:20.458700 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:21.459610 kubelet[2415]: E0702 00:26:21.459554 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 2 00:26:22.460695 kubelet[2415]: E0702 00:26:22.460634 2415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
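The node-status errors earlier in this section (00:25:45 through 00:26:05) follow a third, simpler pattern: the kubelet retries the status update a bounded number of times and then reports "update node status exceeds retry count". A minimal, hypothetical sketch of that bounded retry is below; the retry count of 5 and the helper names are assumptions, since the log only shows that some limit was exceeded, and tryUpdateNodeStatus stands in for the real GET/PATCH of the Node object.

package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // assumed bound, not read from the log

// tryUpdateNodeStatus stands in for fetching the Node object and patching
// its status; it always fails here, as it does above while the API server
// at 172.31.22.201:6443 is unreachable.
func tryUpdateNodeStatus() error {
	return errors.New(`error getting node "172.31.19.189": connect: connection refused`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Printf("Unable to update node status: %v\n", err)
	}
}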